Times are displayed in (UTC-07:00) Pacific Time (US & Canada)
2/5/2025 | 11:00 AM - 12:20 PM | Regency B
Can adversarial modifications undermine super-resolution algorithms?
Author(s)
Suhas Sreehari | Oak Ridge National Laboratory
Langalibalele Lunga | Farragut High School / Oak Ridge National Lab
Abstract
Super-resolution techniques, such as SRGAN and EDSR, excel at enhancing low-resolution images by restoring fine textures and details. However, these models remain vulnerable to adversarial attacks, in which subtle perturbations to the input significantly degrade the upscaled output. While much research has focused on adversarial attacks against classification tasks, their impact on super-resolution models is less explored. This work aims to address this gap by developing an adversarial framework that generates low-resolution images specifically designed to challenge super-resolution models. Using GANs, FGSM, and GradCAM, we introduce imperceptible noise into key image regions, focusing on critical features such as edges and textures, to disrupt the upscaling process. Preliminary results from our earlier work show a significant drop in classification accuracy when classifiers are subjected to adversarial attacks. We are now extending these concepts to super-resolution, highlighting the need for more robust and resilient image-reconstruction models that can defend against such vulnerabilities.
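The core attack idea in the abstract (FGSM-style perturbation of a low-resolution input to degrade the super-resolved output) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the tiny convolutional upscaler stands in for a real SRGAN/EDSR network, the ground-truth high-resolution tensor `hr_gt` is synthetic, and the epsilon budget is an arbitrary choice.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Hypothetical stand-in for a super-resolution model (a real attack
# would target pretrained SRGAN/EDSR weights instead).
upscaler = nn.Sequential(
    nn.Conv2d(3, 3 * 4, kernel_size=3, padding=1),
    nn.PixelShuffle(2),  # 2x upscaling
)

def fgsm_attack(model, lr_image, hr_target, epsilon=4 / 255):
    """One-step FGSM: nudge the low-res input in the gradient-sign
    direction that increases the reconstruction error of the output."""
    x = lr_image.clone().requires_grad_(True)
    loss = F.mse_loss(model(x), hr_target)
    loss.backward()
    # Bounded, sign-based step; clamp keeps the image in valid range.
    return (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

lr = torch.rand(1, 3, 16, 16)       # toy low-resolution input
hr_gt = torch.rand(1, 3, 32, 32)    # synthetic "ground-truth" high-res target
lr_adv = fgsm_attack(upscaler, lr, hr_gt)

clean_err = F.mse_loss(upscaler(lr), hr_gt).item()
adv_err = F.mse_loss(upscaler(lr_adv), hr_gt).item()
print(f"max perturbation: {(lr_adv - lr).abs().max().item():.4f}")
print(f"reconstruction error: {clean_err:.4f} -> {adv_err:.4f}")
```

The perturbation stays within the epsilon budget (imperceptible by construction), yet the upscaler's reconstruction error against the target grows, which is the failure mode the abstract investigates. The paper's full pipeline additionally uses GradCAM to localize the noise to salient regions such as edges and textures.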
Description
Date and Location: 2/5/2025 | 12:00 PM - 12:20 PM | Regency B
Primary Session Chair:
Emma Reid | Oak Ridge National Laboratory
Session Co-Chair: