Zheng Shi will present her FPO "Task-Specific Computational Cameras" on Friday, May 3, 2024 at 3:00 PM in CS 302.

The members of Zheng’s committee are as follows:
Examiners: Felix Heide (Adviser), Adam Finkelstein, Olga Russakovsky
Readers: Ellen Zhong, Tian-Ming Fu

A copy of her thesis is available upon request. Please email gradinfo@cs.princeton.edu if you would like a copy of the thesis.
 
Everyone is invited to attend her talk. 
 
Abstract follows below:
Machine vision, while fundamentally relying on images as inputs, has traditionally treated image acquisition and image processing as two separate tasks. However, traditional image acquisition systems are tuned for aesthetics, producing photos that please the human eye, rather than for computational tasks that require information beyond human vision. My research focuses on developing task-specific computational imaging systems to enable the capture of information that extends beyond the capabilities of standard RGB cameras, thereby enhancing the effectiveness of downstream machine vision applications.

This thesis begins by combining multiple imaging modalities to facilitate training on unpaired real-world datasets, addressing the scarcity of supervised training data. We introduce ZeroScatter, a single-image descattering method capable of removing adverse weather effects from RGB captures. By integrating model-based, temporal, and multi-view cues, as well as information contained in gated imager captures, we offer indirect supervision for training on real-world adverse weather captures lacking ground truth. This approach significantly enhances generalizability on unseen data, surpassing methods trained exclusively on synthetic adverse weather data.

Despite its broad applicability, relying solely on conventional RGB image inputs limits the available information and requires the model to fill in gaps by generating plausible inferences based on learned priors, such as when windshield wipers obscure objects from dashboard cameras. To bypass these constraints, we shift toward computational cameras and design specialized flat optics to boost the capabilities of cameras for a range of applications.

We first propose a computational monocular camera that optically cloaks unwanted near-camera obstructions. We learn a custom diffractive optical element (DOE) that performs depth-dependent optical encoding, scattering nearby occlusions while allowing paraxial wavefronts emanating from background objects to be focused. This allows us to computationally reconstruct unobstructed images without requiring captures from different camera views or hallucinating content.

Lastly, we introduce a split-aperture 2-in-1 computational camera that combines application-specific optical modulation with conventional imaging in one system. This approach simplifies complex inverse problems faced by computational cameras, enhances reconstruction quality, and offers a real-time viewfinder experience, paving the way for the adoption of computational camera technology in consumer devices.