Allison Chen will present her General Exam "Understanding Visual Processing In Vision Language Models" on Thursday, May 2, 2024 at 1:30 PM in CS 302 and via Zoom.

Zoom link: https://princeton.zoom.us/j/99275578452

Committee Members: Olga Russakovsky (advisor), Tom Griffiths, Szymon Rusinkiewicz

Abstract:
Does language help make sense of the visual world? How important is it to actually see the world rather than having it described in words? These basic questions about the nature of intelligence have been difficult to answer because we have had only one example of an intelligent system (humans) and limited access to cases that isolate language or vision. However, the development of sophisticated Vision-Language Models (VLMs) by artificial intelligence researchers offers new opportunities to explore the contributions that language and vision make to learning about the world. Given the unique potential offered by language, we investigate how the VLM visual understanding process might utilize language. We hypothesize a computational process for image understanding in VLMs that involves semantic extraction of images, and we test this hypothesis by ablating components from the cognitive architecture of VLMs. In doing so, we identify the contribution of each component in the architecture to VLM visual understanding. We find that if VLMs process images via semantic extraction, they likely require three key components: prior knowledge, non-trivial reasoning abilities, and semantically meaningful examples to refine internal representations.

Reading List: https://docs.google.com/document/d/1-sXQFoc8srt-hFcFyCmfxvFQlbJPCJA1vCuWKNsT...

Everyone is invited to attend the talk, and those faculty wishing to remain for the oral exam following are welcome to do so.
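For readers unfamiliar with ablation studies of the kind the abstract describes, the sketch below illustrates the general shape of the methodology: hypothesize a set of pipeline components, remove one at a time, and measure the resulting performance drop. Every name here (the component set, the toy evaluator, the accuracy numbers) is a hypothetical stand-in for illustration, not the talk's actual experimental code.

    # Minimal sketch of a component-ablation study, assuming a hypothesized
    # VLM pipeline with three components (names taken from the abstract).
    # The evaluator is a toy stand-in; a real study would run the ablated
    # model on a benchmark and report measured accuracy.
    from typing import Dict, FrozenSet

    COMPONENTS = frozenset({"prior_knowledge", "reasoning", "semantic_examples"})

    def evaluate(active: FrozenSet[str]) -> float:
        """Toy evaluator: accuracy with only the given components active."""
        base = 0.30  # illustrative floor with everything ablated
        gain = {"prior_knowledge": 0.25, "reasoning": 0.20,
                "semantic_examples": 0.15}  # illustrative contributions
        return base + sum(gain[c] for c in active)

    def ablation_study() -> Dict[str, float]:
        """Each component's contribution = accuracy drop when it alone is removed."""
        full = evaluate(COMPONENTS)
        return {c: full - evaluate(COMPONENTS - {c}) for c in sorted(COMPONENTS)}

    if __name__ == "__main__":
        for component, drop in ablation_study().items():
            print(f"ablating {component}: accuracy drops by {drop:.2f}")

A per-component drop relative to the full pipeline is the simplest way to attribute contributions; richer designs would also ablate pairs of components to expose interactions.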