Furman University Scholar Exchange - South Carolina Junior Academy of Science

Evaluating Corruption Defenses for Model Robustness

School Name

South Carolina Governor's School for Science and Mathematics

Grade Level

12th Grade

Presentation Topic

Computer Science

Presentation Type

Mentored

Abstract

Within the last few years, deep learning models have seen increasing use across computer vision and now attain high accuracy on many visual recognition tasks. However, their accuracy can drop significantly on real-world data containing visible corruptions, and adversarial examples, inputs with deliberate perturbations that induce misclassification, degrade their performance further. Adversarial robustness refers to a model's ability to resist such intentionally deceptive inputs, while corruption robustness refers to its resilience to everyday noise and distortions. The training methods used to minimize the effect of adversarial examples have also been found to hurt a model's accuracy on clean data as well as its corruption robustness, an often undesirable trade-off. In our work, we adopt a technique that improves a neural network's ability to generalize by training on augmented images from the dataset, evaluate its efficacy against adversarial examples with several empirical metrics, and compare its performance with traditional adversarial training techniques.
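The abstract does not specify a dataset, model, augmentation scheme, or attack, so the sketch below is only an illustration of the two training regimes it contrasts, not the authors' implementation. It assumes PyTorch, CIFAR-10, ResNet-18, standard torchvision augmentations, and a single-step FGSM attack as stand-ins; the actual study may differ on all of these.

```python
# Minimal sketch: augmentation-based training vs. adversarial training,
# each evaluated on clean and adversarial test inputs. All specifics
# (CIFAR-10, ResNet-18, FGSM, hyperparameters) are assumptions.
import torch
import torch.nn.functional as F
from torchvision import datasets, transforms, models

# Assumption: generic heavy augmentation stands in for the study's technique.
augment = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(0.4, 0.4, 0.4),
    transforms.ToTensor(),
])

def fgsm_attack(model, x, y, eps=8 / 255):
    """Generate FGSM adversarial examples (one common evaluation attack)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def train_epoch(model, loader, opt, device, adversarial=False):
    """One epoch of either standard (augmented) or adversarial training."""
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        if adversarial:
            # Adversarial training: fit the model on perturbed inputs.
            x = fgsm_attack(model, x, y)
        opt.zero_grad()  # also clears grads accumulated by the attack
        F.cross_entropy(model(x), y).backward()
        opt.step()

def accuracy(model, loader, device, attack=None):
    """Top-1 accuracy; pass attack=fgsm_attack for adversarial accuracy."""
    model.eval()
    correct = total = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        if attack is not None:
            x = attack(model, x, y)  # needs gradients, so no torch.no_grad here
        with torch.no_grad():
            correct += (model(x).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total

device = "cuda" if torch.cuda.is_available() else "cpu"
train_set = datasets.CIFAR10("data", train=True, download=True, transform=augment)
test_set = datasets.CIFAR10("data", train=False, download=True,
                            transform=transforms.ToTensor())
train_loader = torch.utils.data.DataLoader(train_set, batch_size=128, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_set, batch_size=256)

model = models.resnet18(num_classes=10).to(device)
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
for _ in range(10):
    train_epoch(model, train_loader, opt, device, adversarial=False)

print("clean accuracy:", accuracy(model, test_loader, device))
print("FGSM accuracy:", accuracy(model, test_loader, device, attack=fgsm_attack))
```

Training a second copy of the model with adversarial=True and re-running both accuracy calls gives the adversarial-training baseline, making the clean-versus-adversarial trade-off the abstract describes directly measurable.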

Location

PENNY 216

Start Date

4-5-2025 9:15 AM

Presentation Format

Oral Only

Group Project

Yes
