Malinda Huang will present her MSE talk, “CONFINE: Conformal Prediction for Interpretable Neural Networks,” on Friday, April 19, at 10:30 AM in CS 301.


Advisor: Prof. Niraj Jha; Reader: Prof. Olga Troyanskaya


Abstract:

Deep neural networks exhibit remarkable performance, yet their black-box nature limits their utility in fields like healthcare where interpretability is crucial. Existing explainability approaches often sacrifice accuracy and lack quantifiable prediction uncertainty. In this study, we introduce Conformal Prediction for Interpretable Neural Networks (CONFINE), a versatile approach that generates prediction sets with statistically robust uncertainty estimates instead of point predictions, enhancing model transparency and reliability. CONFINE not only provides example-based explanations and confidence levels for individual predictions but also boosts accuracy by up to 3.6%. We define a new metric, correct efficiency, to evaluate the proportion of prediction sets that contain exactly the correct label, and show that CONFINE achieves a correct efficiency up to 3.26% higher than the original accuracy, matching or exceeding prior methods. Adaptable to any pre-trained classifier, CONFINE has proven effective across tasks from image classification to language understanding, marking a significant advance towards transparent and trustworthy deep learning applications in critical domains.
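
For readers unfamiliar with the terms in the abstract, the sketch below illustrates how a generic split-conformal procedure turns a pre-trained classifier's probabilities into prediction sets, and how "correct efficiency" (the fraction of sets that contain exactly the correct label) could be computed. This is not the CONFINE method presented in the talk; the nonconformity score (1 minus the probability of the true class) and all function names are assumptions made only for illustration.

import numpy as np

def conformal_threshold(cal_probs, cal_labels, alpha=0.1):
    """Split-conformal calibration using 1 - p(true class) as the nonconformity score."""
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile level, clipped to 1.0 for small n
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(scores, q_level, method="higher")

def prediction_sets(test_probs, threshold):
    """For each test example, include every label whose score is within the threshold."""
    return [np.where(1.0 - p <= threshold)[0] for p in test_probs]

def correct_efficiency(pred_sets, true_labels):
    """Fraction of prediction sets that contain exactly the correct label (singletons)."""
    return np.mean([len(s) == 1 and s[0] == y for s, y in zip(pred_sets, true_labels)])

In this sketch, calibration probabilities and labels come from a held-out split, and lowering alpha widens the prediction sets in exchange for stronger coverage guarantees; the talk's approach builds on this general idea to provide example-based explanations as well.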


CS Grad Calendar:

https://calendar.google.com/calendar/event?action=TEMPLATE&tmeid=Mzlua2x1YTlwYm12bWU0OXZ2YmVxOGlqNzQgYWNnMDc5YmxzbzRtczNza2tmZThwa2lyb2dAZw&tmsrc=acg079blso4ms3skkfe8pkirog%40group.calendar.google.com