Sunnie Kim will present her General Exam "Evaluating the Human Interpretability of Visual Explanations" on Thursday, January 20, 2022 at 10am via Zoom.

Zoom link: http://princeton.zoom.us/my/sunniesuhyoung

Committee members:
Olga Russakovsky (adviser)
Andrés Monroy-Hernández
Ruth Fong

Abstract:
As machine learning is increasingly applied to high-impact, high-risk domains, a number of new methods have been proposed to make machine learning models more human-interpretable. Despite this recent growth in interpretability work, there is a lack of systematic evaluation of the proposed techniques. In this exam, I will present my recent work, HIVE (Human Interpretability of Visual Explanations), a novel human evaluation framework for interpretability methods in computer vision. While human studies should be the gold standard for properly evaluating how interpretable a method is to human users, they are often avoided due to challenges associated with cost, study design, and cross-method comparison. I will discuss how HIVE mitigates these issues and enables the evaluation of diverse visual explanations. To demonstrate the extensibility and applicability of HIVE, I conducted IRB-approved studies of four existing interpretability methods: GradCAM, BagNet, ProtoPNet, and ProtoTree. The results suggest that explanations (regardless of whether they are actually correct) engender human trust, yet are not distinct enough for users to distinguish between correct and incorrect predictions. I will conclude the talk with a discussion of key insights and proposals for future research.

Reading List:
https://docs.google.com/document/d/1WL1yhvTS1yFwKd3E7C55AIMjCAdmBXsAsNVWHb9Tcg8/edit?usp=sharing

Everyone is invited to attend the talk, and faculty wishing to remain for the oral exam that follows are welcome to do so.