Graduation Year
2024
Document Type
Dissertation
Degree
Ph.D.
Degree Name
Doctor of Philosophy (Ph.D.)
Degree Granting Department
Medical Engineering
Major Professor
Issam El Naqa, Ph.D.
Committee Member
Summer Decker, Ph.D.
Committee Member
Jonathan Ford, Ph.D.
Committee Member
Lawrence Hall, Ph.D.
Committee Member
George Spirou, Ph.D.
Keywords
Artificial Intelligence, Deep Learning, Machine Learning, Time-Dependent
Abstract
While the fields of machine learning and medicine are deeply rooted in axiomatic scientific principles, there is an element of art that makes their practice imperfect, yet innately human. Even as the two fields now overlap more than at any point in their collective history, a chasm remains between them in terms of practical translation for the patients who desire and deserve personalized medicine. As presented in this dissertation, my collaborators and I have contributed to the groundwork for future exploration of predicting disease progression by identifying signals within sequential medical imaging that provide a temporospatial relationship upon which statistically significant predictions can be made.
The primary aim of this dissertation is to evaluate state-of-the-art deep learning architectures and to develop new time-dependent algorithms specific to the prediction of disease progression from sequential medical imaging. A secondary aim is to improve the quality of medical images by reducing artifacts and optimizing the signal-to-noise ratio (SNR) to improve model performance. What follows is a series of four peer-reviewed works exploring these aims through clinical application and testing of state-of-the-art architectures across time in disease states such as invasive ductal carcinoma (IDC) and disability in multiple sclerosis (MS), using magnetic resonance imaging (MRI).
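To make the second aim concrete, the brief Python sketch below (illustrative only, not code from the dissertation, and assuming scikit-image with PyWavelets is installed) shows how image quality before and after denoising might be quantified with PSNR and SSIM, the two metrics reported in the results; the image arrays are synthetic placeholders and the wavelet filter is only a stand-in denoiser.

import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity
from skimage.restoration import denoise_wavelet

# Synthetic placeholders for an artifact-free slice and a degraded copy.
rng = np.random.default_rng(0)
reference = rng.random((256, 256))
degraded = reference + 0.05 * rng.standard_normal((256, 256))

# Stand-in denoiser; the OMP-based method described below is not reproduced here.
denoised = denoise_wavelet(degraded)

# Quality metrics comparing the denoised image against the reference.
psnr = peak_signal_noise_ratio(reference, denoised, data_range=1.0)
ssim = structural_similarity(reference, denoised, data_range=1.0)
print(f"PSNR: {psnr:.1f} dB, SSIM: {ssim:.2f}")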
The results of these works contributed first to a novel, lightweight denoising algorithm, now the subject of a pending U.S. patent, that uses Orthogonal Matching Pursuit (OMP) to remove motion artifact and blurring from MRI of the spine. The peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) had values of 37.6 and 0.99, respectively. Wavelet denoising performed better in terms of PSNR but showed a much wider range of variation between studies, with an SSIM of 0.94. This tool was then applied as needed in the subsequent studies. Secondly, the utility of time-dependent deep learning was demonstrated in presurgical prediction of ductal carcinoma in situ (DCIS) upgraded to IDC on dynamic contrast-enhanced MRI (DCE-MRI) of breast cancer patients. A convolutional neural network with long short-term memory layers (CNN-LSTM) outperformed a CNN alone, with all four contrast phases contributing to the greatest performance difference (ROC-AUC of 0.73 versus 0.62, p-value of 0.008), demonstrating the utility of sequential information over a short time frame. Thirdly, to further test the ability of these models to perform over longer periods of time, the CNN-LSTM was used to predict disability in MS patients and compared against the novel Video Vision Transformer (ViViT). The ViViT provided a statistically significant improvement over the best-performing CNN-LSTM, which was based on a pretrained Visual Geometry Group (VGG) architecture, with an ROC-AUC of 0.84 versus 0.78 (p-value of 0.039). As these imaging studies were 3 to 12 years apart, a longer time dependency was still feasible for making translatable predictions. Finally, we tested state-of-the-art Variational Quantum Classifier (VQC) circuits in the form of Quantum CNN-LSTM (QCNN-LSTM) architectures. Three VQCs (Matrix Product State or MPS, reverse Multistate Entanglement Renormalization Ansatz or MERA, and Tree Tensor Network or TTN) served as convolutional layers for each frame of input in the MS cohort. The MPS-LSTM, reverse MERA-LSTM, and TTN-LSTM had holdout testing ROC-AUCs of 0.70, 0.77, and 0.81, respectively (p-value of 0.915). The VGG16-LSTM and ViViT performed similarly, with ROC-AUCs of 0.73 and 0.77, respectively (p-value of 0.631). Differences in overall variance and mean were not statistically significant (p-value of 0.713); however, training time was significantly shorter for the QCNN-LSTMs (39.4 versus 224.3 and 217.5 seconds per fold, respectively, p-value < 0.001).
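For readers unfamiliar with the time-dependent architecture referenced above, the following minimal Keras sketch (an assumption-laden illustration, not the dissertation's implementation; the frame count, image size, and layer widths are arbitrary) shows the general CNN-LSTM pattern: a small CNN is applied to each imaging time point, and an LSTM aggregates the per-frame features before a binary prediction.

import tensorflow as tf
from tensorflow.keras import layers, models

# Illustrative shapes only, e.g., four DCE-MRI contrast phases of a single-channel slice.
n_frames, height, width, channels = 4, 128, 128, 1

# Per-frame feature extractor (shared across time points).
frame_cnn = models.Sequential([
    layers.Input(shape=(height, width, channels)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
])

# Apply the CNN to every frame, then model the temporal dependency with an LSTM.
model = models.Sequential([
    layers.Input(shape=(n_frames, height, width, channels)),
    layers.TimeDistributed(frame_cnn),
    layers.LSTM(64),
    layers.Dense(1, activation="sigmoid"),  # e.g., DCIS-to-IDC upgrade or MS disability
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="roc_auc")])
model.summary()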
These early findings require further investigation in larger, independent cohorts to increase the number of positive-class examples seen by the models. The implications of these prediction methods extend beyond clinical medicine, as they could provide insight for patients affected by MS and help with their life planning. For them, we continue to push the envelope to provide answers earlier in the disease course.
Scholar Commons Citation
Mayfield, John D., "Temporospatial Deep Learning Strategies for Prediction of Disease Progression in Radiology" (2024). USF Tampa Graduate Theses and Dissertations.
https://digitalcommons.usf.edu/etd/10219