Graduation Year

2021

Document Type

Dissertation

Degree

Ph.D.

Degree Name

Doctor of Philosophy (Ph.D.)

Degree Granting Department

Mathematics and Statistics

Major Professor

Chris P. Tsokos, Ph.D.

Committee Member

Kandethody M. Ramachandran, Ph.D.

Committee Member

Lu Lu, Ph.D.

Committee Member

Yicheng Tu, Ph.D.

Keywords

Bayesian Deep Learning, Biomedical Imaging, Class Imbalance, High-dimensional Data, Uncertainty Estimation

Abstract

Deep Learning (DL) has achieved state-of-the-art performance across a broad spectrum of tasks. From a statistical standpoint, deep neural networks can be construed as universal function approximators. Although statistical modeling and deep learning are well established as independent areas of research, hybridization of the two paradigms via probabilistic deep networks is an emerging trend. Through the development of novel analytical methods under the statistical and deep-learning framework, we address some of the major challenges encountered in the design of intelligent systems, including class-imbalance learning, probability calibration, uncertainty quantification, and high dimensionality. When modeling rare events, existing methodologies require re-sampling strategies or algorithmic modifications. In contrast, we introduce a cost-sensitive approach that can be readily applied to any deep neural network architecture. Our research shows that the proposed approach leads to significant performance gains on highly imbalanced data and improves calibration. Moreover, deterministic neural networks are blind to the uncertainty associated with their predictions and tend to be overconfident, which makes their predictions unreliable. Uncertainty-aware deep networks provide additional insight into model predictions and support more informed decisions, and are thus indispensable in applications where the acceptable margin of error is small. To this end, we present a Bayesian deep probabilistic learning approach that provides a principled framework for handling uncertainty. Furthermore, we address high dimensionality in time-to-event modeling, a common problem in computational biology, for example in genomics. Our results suggest that, in the presence of limited but high-dimensional data, inducing sparsity through shrinkage priors under the Bayesian framework is a potent alternative to machine learning methods. We establish the validity of the proposed methods with theoretical justification and sound empirical validation on data from domains including cyber-security and healthcare.
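The following is a minimal, hypothetical sketch of the cost-sensitive idea: class-dependent costs are attached to a standard cross-entropy loss so that any network architecture can be trained unchanged. The framework (PyTorch), class counts, weights, and toy architecture are illustrative assumptions, not the dissertation's exact formulation.

    # Illustrative cost-sensitive training loss (class-weighted cross-entropy).
    # Counts, weights, and architecture are assumed for the example.
    import torch
    import torch.nn as nn

    # Assume the minority class (index 1) makes up about 2% of the data;
    # inverse-frequency weights penalize minority-class errors more heavily.
    class_counts = torch.tensor([9800.0, 200.0])
    weights = class_counts.sum() / (len(class_counts) * class_counts)

    criterion = nn.CrossEntropyLoss(weight=weights)

    # Any architecture can be plugged in; only the loss carries the costs.
    model = nn.Sequential(nn.Linear(30, 64), nn.ReLU(), nn.Linear(64, 2))
    logits = model(torch.randn(16, 30))
    targets = torch.randint(0, 2, (16,))
    loss = criterion(logits, targets)
    loss.backward()

Similarly, one common way to obtain predictive uncertainty from a deep network is Monte Carlo dropout, shown below as a stand-in for the Bayesian treatment described in the abstract (the dissertation's actual approach may differ): dropout is kept active at prediction time, and the spread of repeated stochastic forward passes serves as an uncertainty estimate.

    # Illustrative Monte Carlo dropout for predictive uncertainty.
    import torch
    import torch.nn as nn

    def mc_predict(model, x, n_samples=50):
        """Average softmax outputs over repeated stochastic forward passes."""
        model.train()  # keep dropout layers active at inference time
        with torch.no_grad():
            probs = torch.stack(
                [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
            )
        return probs.mean(0), probs.std(0)  # predictive mean and spread

    dropout_model = nn.Sequential(
        nn.Linear(30, 64), nn.ReLU(), nn.Dropout(p=0.2), nn.Linear(64, 2)
    )
    mean_probs, spread = mc_predict(dropout_model, torch.randn(8, 30))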
