G06T 7/0012, G16H 30/20, G06F 16/55, G06F 21/566
The present disclosure describes a multi-initialization ensemble-based defense strategy against adversarial attacks. In one embodiment, an exemplary method includes training a plurality of convolutional neural networks (CNNs) on a training set of images, wherein the images include original images and images modified by an adversarial attack; after training the plurality of convolutional neural networks, providing an input image to the plurality of convolutional neural networks, wherein the input image has been modified by an adversarial attack; receiving a probability output for the input image from each of the plurality of convolutional neural networks; producing an ensemble probability output for the input image by combining the probability outputs from each of the plurality of convolutional neural networks; and labeling the input image as belonging to one of one or more categories based on the ensemble probability output.
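The combining step described above can be sketched as soft voting: each independently initialized CNN emits a per-class probability vector for the input image, the vectors are averaged into an ensemble probability output, and the image is labeled with the highest-probability class. The minimal sketch below is illustrative only; the function name `ensemble_predict`, the two-class example, and the specific probability values are hypothetical, not taken from the patent.

```python
import numpy as np

def ensemble_predict(model_probs, class_names):
    """Combine per-model probability outputs by averaging (soft voting)
    and label the input with the highest-probability class."""
    probs = np.asarray(model_probs, dtype=float)  # shape: (n_models, n_classes)
    ensemble = probs.mean(axis=0)                 # ensemble probability output
    return class_names[int(ensemble.argmax())], ensemble

# Hypothetical example: three differently initialized CNNs score one
# adversarially perturbed input image on two classes.
outputs = [
    [0.70, 0.30],  # model 1: P(benign), P(malignant)
    [0.40, 0.60],  # model 2: fooled by the adversarial perturbation
    [0.65, 0.35],  # model 3
]
label, probs = ensemble_predict(outputs, ["benign", "malignant"])
# averaged probabilities ~[0.583, 0.417] -> label "benign"
```

Averaging dilutes the effect of a perturbation that fools only a subset of the differently initialized networks: here model 2 alone is misled, but the ensemble still labels the image correctly.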
Paul, Rahul; Goldgof, Dmitry; Hall, Lawrence; Schabath, Matthew; and Gillies, Robert, "Mitigating adversarial attacks on medical imaging understanding systems" (2023). USF Patents. 1353.
UNIVERSITY OF SOUTH FLORIDA