William Yang will present his General Exam "Rethinking Out-of-Distribution Detection" on Tuesday, January 24, 2023 at 12:00 PM in CS 401.

Committee Members: Olga Russakovsky (advisor), Ryan Adams, Tom Griffiths

Abstract:

Out-of-Distribution (OOD) detection is the task of identifying inputs on which a given model should not be trusted to make a prediction. OOD detection is a well-studied problem, commonly formulated for image classifiers as detecting test images from classes that did not appear in the model's training data ("semantic distribution shift"). However, this formulation is limited: it ignores test images that come from familiar classes but look different from the training examples ("covariate distribution shift"). Incorporating covariate shift into OOD detection is challenging because detecting all such examples may undervalue the model's robustness and its ability to generalize beyond its training distribution.

In this work, we propose a new formulation of OOD detection that approaches the problem with respect to the learned model distribution rather than the training data distribution. This formulation seamlessly incorporates both semantic and covariate shift, and connects OOD detection with model generalization. Our empirical analysis reveals a number of interesting findings, the most striking being that the simplest OOD detection baseline, Maximum Softmax Probability (MSP), appears to outperform all prior state-of-the-art OOD detection methods under the new formulation. This suggests that the time is ripe to rethink the task of OOD detection and to adopt this more general benchmark, which encompasses a broader range of real-world situations.
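For context, the MSP baseline (Hendrycks & Gimpel, 2017) scores each input by the maximum probability in the classifier's softmax output and flags inputs whose score falls below a threshold as OOD. Below is a minimal sketch in NumPy; the threshold value is illustrative, not one used in the talk:

    import numpy as np

    def softmax(logits):
        # Numerically stable softmax over the class dimension.
        z = logits - logits.max(axis=-1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)

    def msp_score(logits):
        # MSP confidence: the maximum softmax probability per input.
        return softmax(logits).max(axis=-1)

    def detect_ood(logits, threshold=0.7):
        # Flag inputs whose confidence falls below the (illustrative) threshold as OOD.
        return msp_score(logits) < threshold

    # Example: two confident predictions and one near-uniform one.
    logits = np.array([[4.0, 0.1, 0.2],   # confident -> kept
                       [0.3, 0.2, 0.4],   # near-uniform -> flagged OOD
                       [0.0, 5.0, 0.5]])  # confident -> kept
    print(msp_score(logits))    # approx. [0.96 0.37 0.98]
    print(detect_ood(logits))   # [False  True False]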

Reading List:

https://docs.google.com/document/d/122o_aUoyzUEYjNESxGoog4VhonZpLaPmw1HffB-FQDE/edit?usp=sharing

Everyone is invited to attend the talk, and faculty wishing to remain for the oral exam that follows are welcome to do so.