Graduation Year

2024

Document Type

Thesis

Degree

M.S.C.S.

Degree Name

MS in Computer Science (M.S.C.S.)

Degree Granting Department

Computer Science and Engineering

Major Professor

Dmitry Goldgof, Ph.D.

Committee Member

Lawrence Hall, Ph.D.

Committee Member

Shaun Canavan, Ph.D.

Committee Member

Ghada Zamzmi, Ph.D.

Keywords

Differential Privacy, Facial De-identification, Feature Extraction

Abstract

The need to share large-scale datasets for training deep neural network models, particularly in healthcare, raises significant data security and privacy concerns. To address these concerns, methods such as data encryption or encoding are used to transform the data so that it is unreadable to humans while still retaining its usefulness for training models.

In this study, we investigate image encoding techniques designed to protect privacy by making images unrecognizable while retaining their usefulness for model training. Using publicly available facial databases, we evaluate the trade-offs inherent in these techniques, with particular emphasis on balancing privacy and model accuracy.

This study addresses the tension between protecting sensitive data and meeting the data demands of effective model training. It clarifies the trade-offs among different image encoding techniques and offers insight into balancing privacy protection with model performance.
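The abstract does not specify which encoding methods were evaluated. As one illustration of the general idea only, and not the thesis's actual pipeline, the sketch below obscures a facial image by adding per-pixel Laplace noise, a standard differential-privacy mechanism (one of the thesis keywords). The function name, epsilon, and sensitivity values are illustrative assumptions.

```python
import numpy as np

def laplace_encode(image: np.ndarray, epsilon: float = 1.0, sensitivity: float = 1.0) -> np.ndarray:
    """Obscure an image by adding per-pixel Laplace noise.

    Assumes pixel values are floats in [0, 1]. A smaller `epsilon`
    (privacy budget) yields noisier, more private images; `sensitivity`
    is the maximum change a single pixel can contribute.
    """
    scale = sensitivity / epsilon
    noise = np.random.laplace(loc=0.0, scale=scale, size=image.shape)
    return np.clip(image + noise, 0.0, 1.0)

# Example: encode a synthetic 64x64 grayscale image at two privacy levels.
face = np.random.rand(64, 64)
mild = laplace_encode(face, epsilon=5.0)    # weaker privacy, less distortion
strong = laplace_encode(face, epsilon=0.5)  # stronger privacy, heavy distortion
```

Encoded images like these could then be used to train a model, trading recognizability (privacy) against the accuracy achievable from the distorted data, which is the trade-off the abstract describes.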
