Yun Cheng will present her General Exam "Generalizing from SIMPLE to HARD Visual Reasoning: Can We Mitigate Modality Imbalance in VLMs?" on Wednesday, January 22, 2025 at 10:00 AM in CS 402 and via Zoom.
Zoom link: https://princeton.zoom.us/j/91346752488
Committee Members: Sanjeev Arora (advisor), Danqi Chen, Benjamin Eysenbach
Abstract:
While Vision Language Models (VLMs) are impressive in tasks such as visual question answering (VQA) and image captioning, their ability to apply multi-step reasoning to images has lagged, giving rise to perceptions of modality imbalance or brittleness.
Toward a systematic study of such issues, we introduce a synthetic framework for assessing the ability of VLMs to perform algorithmic visual reasoning (AVR), comprising three tasks: Table Readout, Grid Navigation, and Visual Analogy. Each has two levels of difficulty, SIMPLE and HARD, and even the SIMPLE versions are difficult for frontier VLMs. We seek strategies for training on the SIMPLE version of the tasks that improve performance on the corresponding HARD task, i.e., S2H generalization. This synthetic framework, where each task also has a text-only version, allows quantification of the modality imbalance and how it is affected by training strategy. Ablations highlight the importance of explicit image-to-text conversion in promoting S2H generalization when using auto-regressive training. We also report results of a mechanistic study of this phenomenon, including a measure of gradient alignment that appears to identify training strategies that promote better S2H generalization.
Reading List:
https://docs.google.com/document/d/1DbFfQZyCJb3HSYLvhy2SZ9voh_IN_6JZXcrFoiyO30c/edit?usp=sharing
Everyone is invited to attend the talk, and faculty wishing to remain for the oral exam that follows are welcome to do so.