Graduation Year

2021

Document Type

Dissertation

Degree

Ph.D.

Degree Name

Doctor of Philosophy (Ph.D.)

Degree Granting Department

Computer Science and Engineering

Major Professor

Dmitry Goldgof, Ph.D.

Committee Member

Sudeep Sarkar, Ph.D.

Committee Member

Yu Sun, Ph.D.

Committee Member

Ashwin Parthasarathy, Ph.D.

Committee Member

Matthew Peterson, Ph.D.

Committee Member

Linda Cowan, Ph.D.

Keywords

Ensemble Learning, K-nearest-neighbor, Skin Tone, Transfer Learning, U-Net

Abstract

Accurate pressure ulcer (PrU) measurement is critical in assessing the effectiveness of PrU treatment. The traditional measurement process is manual, subjective, and requires frequent contact with the wound. Manual measurement relies on human observation, which makes results inconsistent, and frequent contact with the wound increases the risk of contamination or infection. The purpose of this research was to develop an automatic Pressure Ulcer Monitoring System (PrUMS) that uses a depth camera to provide automated, non-contact wound measurement. In this dissertation, two wound segmentation methods were developed: 1) a traditional machine learning method for a smaller dataset, which combines color classification using K-Nearest Neighbors with surface gradients, and 2) a segmentation algorithm using multiple convolutional neural networks (CNNs) for a larger dataset. A semi-automatic option allowed users to correct the segmentation of the wound region by selecting an alternative segmentation or rejecting all segmentations. PrUMS with the proposed segmentation algorithms was tested on a dataset of 70 PrUs from 54 patients with spinal cord injury. Data were collected via a hand-held 3D scanner connected to a tablet and via manual measurements by two clinically trained wound care nurses, whose measurements served as the ground truth for each wound. With the traditional machine learning segmentation, the measurement errors were 11.05 mm (length), 9.73 mm (width), and 7.52 mm (depth) for the automatic method and 5.46 mm (length), 5.60 mm (width), and 6.41 mm (depth) for the semi-automatic method.
With the deep learning segmentation, the measurement errors were 9.72 mm (length), 5.89 mm (width), and 5.79 mm (depth) for the automatic method and 4.72 mm (length), 4.34 mm (width), and 5.71 mm (depth) for the semi-automatic method. Taking multiple measurements of each PrU slightly improved the measurement errors and the missing rate. The differences between the length and width measurements from PrUMS and the manual measurements by nurses were not statistically significant (p > 0.05). The depth measurement was statistically different from the manual measurement because limitations of the depth camera caused missing depth measurements for wounds with small areas. Despite this limitation, PrUMS provides objective, non-contact wound measurement and was demonstrated to be usable in clinical wound care practice. The contributions of this dissertation are: 1) this is the first work to discuss including a depth channel in wound segmentation using CNNs; 2) training the network with different image channels, with a publicly available dataset, and with different learning approaches, such as ensemble learning and transfer learning, was discussed as a way to improve the classifier; 3) the impact of including manikin images in the training set and of training classifiers grouped by skin tone was discussed; 4) a traditional wound segmentation method was proposed for a smaller dataset.
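The K-Nearest-Neighbors color classification mentioned in the abstract can be sketched as follows. This is an illustrative minimal example, not the dissertation's actual code: the RGB cluster centers, the value of k, and the synthetic training pixels are all assumptions made for demonstration.

```python
import numpy as np

def knn_predict(X_train, y_train, pixels, k=5):
    """Label each pixel by majority vote of its k nearest training colors.

    A plain NumPy KNN, standing in for the color-classification stage of
    the traditional machine learning segmentation pipeline.
    """
    labels = []
    for p in pixels:
        dists = np.linalg.norm(X_train - p, axis=1)   # Euclidean distance in RGB space
        nearest = y_train[np.argsort(dists)[:k]]      # labels of the k closest pixels
        labels.append(int(np.bincount(nearest).argmax()))
    return np.array(labels)

rng = np.random.default_rng(0)

# Hypothetical training pixels: reddish "wound" vs. tan "skin" RGB values.
wound = rng.normal(loc=[170, 40, 40], scale=15, size=(200, 3))
skin = rng.normal(loc=[210, 170, 140], scale=15, size=(200, 3))
X_train = np.vstack([wound, skin])
y_train = np.array([1] * 200 + [0] * 200)  # 1 = wound, 0 = skin

# Classify a few query pixels; in practice this runs over every image pixel
# to produce a binary wound mask.
pixels = np.array([[165.0, 45.0, 38.0], [205.0, 168.0, 142.0],
                   [175.0, 35.0, 50.0], [215.0, 175.0, 135.0]])
mask = knn_predict(X_train, y_train, pixels)
print(mask)  # → [1 0 1 0]
```

In the dissertation's pipeline this color-based mask is further combined with surface-gradient information from the depth data; that fusion step is omitted here.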
