Graduation Year

2023

Document Type

Dissertation

Degree

Ph.D.

Degree Name

Doctor of Philosophy (Ph.D.)

Degree Granting Department

Computer Science and Engineering

Major Professor

Robert Karam, Ph.D.

Committee Member

Mehran Mozaffari Kermani, Ph.D.

Committee Member

Srinivas Katkoori, Ph.D.

Committee Member

Yasin Yilmaz, Ph.D.

Committee Member

Jean-François Biasse, Ph.D.

Keywords

Hardware Cybersecurity, Adversarial Machine Learning, Reconfigurable Computing

Abstract

Artificial intelligence (AI) and machine learning (ML) have become popular tools for data analysis at the edge, particularly through complex deep and convolutional neural networks (DNNs/CNNs), which can learn to parameterize a function given a labeled dataset. These technologies have enabled significant progress across a wide range of fields and are becoming ubiquitous. However, training the best model for an application and subsequently using it to evaluate data in real time require an immense amount of computational power. Typically, "smart" sensors at the edge rely on the cloud to accelerate this computation due to power and compute constraints. This has motivated research into low-power platforms for hardware acceleration of AI and ML workloads. One popular platform is the field-programmable gate array (FPGA), owing to its low power consumption and in-field reconfigurability. As the use of these specialized hardware platforms becomes more prevalent, concerns about the security of these systems have intensified, leading to a significant body of research in adversarial machine learning aimed at securing both the software and hardware components of ML applications. This work focuses on hardware security in the context of FPGA-based ML systems, with a particular emphasis on securing the FPGA bitstream from IP theft. We explore the unique security risks associated with deploying ML applications on FPGAs and present novel methods for securing these systems against various cyberattacks, including IP theft. To ensure the overall security and integrity of FPGA-based ML systems, this work addresses security concerns at every level of the hardware stack, from the hardware abstraction layer up to the ML algorithm itself.
