Doctor of Philosophy (Ph.D.)
Degree Granting Department
J. Morris Chang, Ph.D.
Nasir Ghani, Ph.D.
Xinming Ou, Ph.D.
Ismail Uysal, Ph.D.
Lu Lu, Ph.D.
Black-box attack, Cloud computing, Image denoising, Internet of things, Sparse coding
Applications of deep learning models and convolutional neural networks (CNNs) have increased rapidly. Although state-of-the-art CNNs achieve high accuracy in many applications, recent investigations show that such networks are highly vulnerable to adversarial attacks. In a black-box adversarial attack, the attacker has no knowledge of the model or the training dataset but has access to some input data and their labels.
In this chapter, we propose a novel approach for generating a black-box attack in a sparse domain, where the most critical information of an image can be observed. Our investigation shows that large sparse (LaS) components play a crucial role in the performance of image classifiers. Under this presumption, we generate an adversarial example by transforming an image into a sparse domain and adding noise to its LaS components. A comprehensive evaluation and analysis supporting this idea is presented in chapter one.
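The attack idea above can be sketched in a few lines. This is a minimal illustration only, assuming a 2-D DCT as the sparsifying transform and Gaussian noise scaled by coefficient magnitude; the dissertation's actual dictionary, noise model, and parameter choices may differ.

```python
import numpy as np
from scipy.fft import dctn, idctn  # assumed sparsifying transform (illustrative)

def sparse_domain_attack(image, k=50, eps=0.05, seed=0):
    """Perturb the k largest-magnitude sparse (LaS) coefficients of an image.

    The 2-D DCT and the noise scaling here are illustrative assumptions,
    not the dissertation's exact construction.
    """
    rng = np.random.default_rng(seed)
    coeffs = dctn(image, norm="ortho")            # move into the sparse domain
    flat = np.abs(coeffs).ravel()
    idx = np.argpartition(flat, -k)[-k:]          # indices of the k LaS components
    noise = np.zeros_like(flat)
    noise[idx] = eps * rng.standard_normal(k) * flat[idx]  # noise only on LaS
    adv = idctn(coeffs + noise.reshape(coeffs.shape), norm="ortho")
    return np.clip(adv, 0.0, 1.0)                 # back to a valid image range
```

Because the perturbation is confined to the few high-magnitude coefficients, most of the transform-domain representation (and hence most of the pixel budget) is left untouched.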
In chapter two, we propose a new preprocessing approach that enhances the robustness of skin lesion classification. Machine learning models based on convolutional neural networks have been widely used for automatic recognition of lesion diseases, achieving high accuracy compared to conventional machine learning methods. In this research, we propose a new preprocessing technique that extracts the region of interest (RoI) from skin lesion images. We compare the performance of state-of-the-art convolutional neural network classifiers on two datasets containing (1) raw and (2) RoI-extracted images. Our experimental results show that training CNN models on the RoI-extracted dataset improves prediction accuracy and significantly decreases the training and evaluation time of the classification task.
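To make the RoI-extraction step concrete, here is a deliberately simple sketch: threshold a grayscale image at its mean intensity (lesions are typically darker than surrounding skin) and crop to the bounding box of the dark region. This heuristic, the `margin` parameter, and the grayscale assumption are all illustrative; the dissertation's actual pipeline may use a different segmentation method.

```python
import numpy as np

def extract_roi(gray, margin=4):
    """Crop a grayscale lesion image to the bounding box of its darkest region.

    Mean-intensity thresholding is an illustrative heuristic, not the
    dissertation's exact RoI-extraction method.
    """
    mask = gray < gray.mean()                 # assume the lesion is darker than skin
    rows = np.flatnonzero(mask.any(axis=1))
    cols = np.flatnonzero(mask.any(axis=0))
    if rows.size == 0 or cols.size == 0:      # nothing below threshold: keep image
        return gray
    r0 = max(rows[0] - margin, 0)
    r1 = min(rows[-1] + margin + 1, gray.shape[0])
    c0 = max(cols[0] - margin, 0)
    c1 = min(cols[-1] + margin + 1, gray.shape[1])
    return gray[r0:r1, c0:c1]
```

Cropping away background skin before training is what shrinks the input the CNN must process, which is consistent with the reported reduction in training and evaluation time.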
Finally, we propose a secure and robust image denoising approach. Image denoising aims to recover the original image from its noisy measurements. While the quality of image denoising has improved over the years, the complexity and memory required to implement the denoising task have grown accordingly. With these advancements and the virtually unlimited computing resources available in the cloud, the trend of offloading the image denoising task to the cloud has grown over the past years. However, it is still quite challenging to utilize cloud-based resources without compromising users’ data privacy while maintaining the quality of image denoising. In this chapter, we propose a novel lossless privacy-preserving image denoising approach that protects users’ privacy while preserving the quality of the denoising task.
Our proposed approach is suitable for computationally constrained devices, such as many IoT devices. In this method, we use two random keys to permute and perturb the noisy image patches. The cloud service provider performs the denoising task on the encrypted signal. After denoising, the output signal remains encrypted, and only a user with access to the keys can decrypt the denoised image. We evaluate the security of this method against known-plaintext, brute-force, and side-channel attacks. In addition, we theoretically prove the lossless property of this method. To verify the applicability of this approach, we ran experiments on multiple real images and used two well-known evaluation metrics to compare our results with the baseline.
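The two-key permute-and-perturb idea, and its lossless inversion, can be sketched as follows. This is an assumption-laden illustration: one key seeds a pixel permutation and the other a ±1 sign-flip mask standing in for the perturbation; the dissertation's actual key construction and perturbation may differ. Both operations are bijections, so decryption recovers the patch exactly, which is the lossless property.

```python
import numpy as np

def keygen(n, seed1=0, seed2=1):
    """Derive the two random keys (illustrative choices): a permutation of
    the n patch entries and a +/-1 sign mask used as the perturbation."""
    perm = np.random.default_rng(seed1).permutation(n)
    signs = np.random.default_rng(seed2).choice([-1.0, 1.0], size=n)
    return perm, signs

def encrypt(patch, perm, signs):
    """Permute, then perturb, a flattened noisy image patch."""
    return patch.ravel()[perm] * signs

def decrypt(cipher, perm, signs):
    """Undo the perturbation, then the permutation (exact, hence lossless)."""
    out = np.empty_like(cipher)
    out[perm] = cipher * signs   # signs are +/-1, so multiplying again undoes them
    return out
```

Only a party holding both keys can invert the mapping, while the cloud sees a scrambled signal on which (with a suitable denoiser) the denoising step can still be carried out.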
Scholar Commons Citation
Zanddizari, Hadi, "Improving Robustness of Deep Learning Models and Privacy-Preserving Image Denoising" (2022). USF Tampa Graduate Theses and Dissertations.