Graduation Year
2023
Document Type
Thesis
Degree
M.S.
Degree Name
Master of Science (M.S.)
Degree Granting Department
Computer Science and Engineering
Major Professor
Hao Zheng, Ph.D.
Committee Member
Srinivas Katkoori, Ph.D.
Committee Member
Tempestt Neal, Ph.D.
Keywords
Adversarial Examples, Adversarial Training, Deep Learning, Generative Adversarial Networks, Generative Adversarial Trainer
Abstract
Deep learning has become more widespread as advances in the field continue. As a result, making sure deep learning is safe has become a priority. A seemingly normal image with intentional pixel changes can cause a well-trained model to misclassify it with high confidence. Such images are known as adversarial examples, and the methods that craft them are called adversarial attacks. Adversarial training has been developed to defend against these attacks. This thesis evaluates different adversarial training methods against a variety of adversarial attacks. The key metrics for evaluation are classification accuracy and training time. This thesis also experiments with an improvement on an existing adversarial training method, the generative adversarial trainer (GAT). GAT is a generative adversarial network (GAN) focused on improving the robustness of deep neural networks (DNNs). To improve upon GAT, this thesis proposes an architecture that adds an additional adversarial model, a discriminator, to the GAT architecture to increase the robustness of a target DNN model. Through experiments, this thesis concludes that fast gradient sign method (FGSM) adversarial training is not effective at improving robustness against a variety of attacks. Projected gradient descent (PGD) adversarial training is effective at improving DNN robustness but is time-consuming. Efficient adversarial training reduces training time significantly but fails to achieve the same robustness gains as PGD adversarial training. The GAN-based robustness improvement methods can rival or outperform PGD adversarial training on different attack methods, except on GAN-based adversarial attacks. This thesis also finds that the proposed improvement on GAT is able to increase robustness against adversarial attack methods that are not GAN-based.
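The FGSM attack mentioned in the abstract perturbs an input by a fixed step in the direction of the sign of the loss gradient with respect to that input. A minimal NumPy sketch on a toy logistic-regression classifier (the weights, input, and epsilon below are illustrative values, not from the thesis):

```python
import numpy as np

def fgsm_perturb(x, grad, eps):
    """FGSM: move each input coordinate by eps in the sign of the loss gradient."""
    return x + eps * np.sign(grad)

# Toy logistic-regression "model": p = sigmoid(w . x), true label y = 1
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.2, 0.1, -0.3])
y = 1.0

z = w @ x
p = 1.0 / (1.0 + np.exp(-z))
# Gradient of the cross-entropy loss with respect to the input x: (p - y) * w
grad_x = (p - y) * w

x_adv = fgsm_perturb(x, grad_x, eps=0.1)
print(np.abs(x_adv - x))  # each coordinate shifted by exactly eps
```

Because the step follows the sign of the gradient, the logit `w @ x_adv` for the true class drops relative to `w @ x`, increasing the loss while keeping the per-pixel change bounded by epsilon. PGD adversarial training, evaluated in the thesis, iterates this step with projection back onto the epsilon-ball, which is why it is more robust but more time-consuming.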
Scholar Commons Citation
Griffin, Laureano, "Evaluating Methods for Improving DNN Robustness Against Adversarial Attacks" (2023). USF Tampa Graduate Theses and Dissertations.
https://digitalcommons.usf.edu/etd/10043