Xinran Liang will present her General Exam "Reducing Contextual Bias using Synthetic Augmentations from Generative Models" on Wednesday, May 8, 2024 at 3:00 PM in CS 302.

Committee Members: Olga Russakovsky (advisor), Szymon Rusinkiewicz, Jia Deng

Abstract: Many vision datasets are known to contain contextual bias. For example, training images of "skateboard" very often co-occur with "person", and skateboards rarely appear on their own. Models trained on these biased datasets tend to leverage the co-occurrence between objects and their contexts to improve recognition accuracy; consequently, they often fail to generalize to scenarios where such co-occurrence patterns are absent. Previous works addressed this problem by forcing models to decorrelate the learned features of a category from those of its co-occurring context. In this project, building on recent advances in generative models, we explore the possibility of using synthetic samples from generative models to reduce contextual bias. We generate useful variations of the training data that do not contain the typical co-occurring patterns and use them as augmentations when training downstream classifiers. We evaluate our framework on object classification tasks and show that it improves downstream performance compared to standard classifiers. We hope our work demonstrates the effectiveness of using generated samples to address limitations of vision datasets and learn better vision models, potentially overcoming the challenge of manually collecting and annotating curated data at scale.

Reading List: https://docs.google.com/document/d/1YWZjPo272HKOp9CLd6v4yK1RzXejGr4mit12lcYp...

Everyone is invited to attend the talk, and those faculty wishing to remain for the oral exam following are welcome to do so.
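For readers unfamiliar with the augmentation idea summarized in the abstract, the sketch below illustrates the general recipe in Python: prompt a text-to-image model for a category without its usual co-occurring context (e.g., a skateboard with no person), and add the resulting images to the classifier's training set. This is a minimal, hypothetical sketch, not the speaker's actual pipeline; the model checkpoint, prompts, and sample counts are illustrative assumptions.

    # Hypothetical sketch of context-free synthetic augmentation.
    # Assumes the Hugging Face diffusers library and a GPU; all prompts,
    # checkpoints, and counts are placeholders, not the presented method.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # For a category that usually co-occurs with a confounding context
    # ("skateboard" with "person"), prompt for the object alone.
    prompt = "a photo of a skateboard on an empty street, no people"

    synthetic_images = []
    for _ in range(8):  # a handful of variations per prompt
        image = pipe(prompt).images[0]
        synthetic_images.append(image)

    # These synthetic, context-free images would then be labeled "skateboard"
    # and mixed into the real training set before training the classifier.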