Yuxuan Zhang's MSE thesis talk "Defending Against Adversarial Attacks with Camera Image Pipelines" will take place December 6, 2022 at 5pm in CS 402. 

The members of his committee are Felix Heide (adviser) and Olga Russakovsky (reader).

All are welcome to attend.

Abstract:
Existing neural networks for computer vision tasks are vulnerable to adversarial attacks: adding imperceptible perturbations to the input images can fool these models into making a false prediction on an image that was correctly predicted without the perturbation. Various defenses based on image-to-image mappings have been proposed, either including these perturbations in the training process or removing them in a preprocessing step. In doing so, existing methods often ignore that the natural RGB images in today’s datasets are not captured directly but, in fact, recovered from RAW color filter array captures that are subject to various degradations during capture. In this work, we exploit this RAW data distribution as an empirical prior for adversarial defense. Specifically, we propose a model-agnostic adversarial defense method, which maps the input RGB images to Bayer RAW space and back to output RGB using a learned camera image signal processing (ISP) pipeline to eliminate potential adversarial patterns. The proposed method acts as an off-the-shelf preprocessing module and, unlike model-specific adversarial training methods, does not require adversarial images to train. As a result, the method generalizes to unseen tasks without additional retraining. Experiments on large-scale datasets (e.g., ImageNet, COCO) for different vision tasks (e.g., classification, semantic segmentation, object detection) validate that the method significantly outperforms existing methods across task domains.
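To illustrate the round-trip idea in the abstract, the following is a minimal NumPy sketch of an RGB → Bayer RAW → RGB preprocessing step. The actual method uses a learned camera ISP pipeline; here the demosaicing stage is replaced by a naive block-averaging stand-in, and all function names are hypothetical, purely for illustration.

```python
import numpy as np

def mosaic_rggb(rgb):
    """Project an RGB image onto an RGGB Bayer color filter array.

    rgb: float array of shape (H, W, 3) with H, W even.
    Returns a single-channel (H, W) RAW mosaic.
    """
    h, w, _ = rgb.shape
    raw = np.zeros((h, w), dtype=rgb.dtype)
    raw[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R sites
    raw[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G sites
    raw[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G sites
    raw[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B sites
    return raw

def demosaic_naive(raw):
    """Naive demosaic: each 2x2 RGGB block yields one RGB value,
    replicated over the block. A stand-in for the learned ISP,
    which the actual work trains end-to-end."""
    r = raw[0::2, 0::2]
    g = 0.5 * (raw[0::2, 1::2] + raw[1::2, 0::2])
    b = raw[1::2, 1::2]
    block = np.stack([r, g, b], axis=-1)  # shape (H/2, W/2, 3)
    return np.repeat(np.repeat(block, 2, axis=0), 2, axis=1)

def defend(rgb):
    """RGB -> RAW -> RGB round trip used as a model-agnostic
    preprocessing defense: high-frequency adversarial patterns
    are attenuated by the projection through the Bayer mosaic."""
    return demosaic_naive(mosaic_rggb(rgb))
```

Because the defense is a pure preprocessing module, it can be dropped in front of any downstream model (classifier, detector, segmenter) without retraining that model, which is what makes the approach task-agnostic.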