Allison Chen will present her General Exam "Understanding Visual Processing In Vision Language Models" on Thursday, May 2, 2024 at 1:30 PM in CS 302 and via Zoom.

Committee Members: Olga Russakovsky (advisor), Tom Griffiths, Szymon Rusinkiewicz

Abstract:

Does language help make sense of the visual world? How important is it to actually see the world rather than having it described in words? These basic questions about the nature of intelligence have been difficult to answer because we have had only one example of an intelligent system, humans, and limited access to cases that isolate language or vision.

However, the development of sophisticated Vision-Language Models (VLMs) by artificial intelligence researchers offers new opportunities to explore the contributions that language and vision make to learning about the world. Given the unique potential of language, we investigate how the visual understanding process in VLMs might make use of it. We hypothesize a computational process for image understanding in VLMs that involves semantic extraction from images, and we test this hypothesis by ablating components of the VLM cognitive architecture. In doing so, we identify the contribution each component makes to VLM visual understanding. We find that if VLMs process images via semantic extraction, they likely require three key components: prior knowledge, non-trivial reasoning abilities, and semantically meaningful examples to refine internal representations.

Reading List:

Everyone is invited to attend the talk, and faculty wishing to remain for the oral exam that follows are welcome to do so.