Challenges and Opportunities in Security & Privacy in Machine Learning

Today's talk
Speaker: Chuan Guo (Meta AI)
Time: 1:00pm Eastern Time
Title: Bounding Training Data Reconstruction in Private (Deep) Learning

Abstract: Differential privacy is widely accepted as the de facto method for preventing data leakage in ML, and conventional wisdom suggests that it offers strong protection against privacy attacks. However, existing semantic guarantees for DP focus on membership inference, which may overestimate the adversary's capabilities and is not applicable when membership status itself is non-sensitive. In this talk, we derive the first semantic guarantees for DP mechanisms against training data reconstruction attacks under a formal threat model. We show that two distinct privacy accounting methods -- Rényi differential privacy and Fisher information leakage -- both offer strong semantic protection against data reconstruction attacks.
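As background for the talk, the Rényi DP accounting it mentions can be illustrated with the standard Gaussian mechanism. The sketch below is not the speaker's code; it just applies the known result that adding N(0, sigma^2) noise to a sensitivity-1 query satisfies (alpha, alpha / (2 sigma^2))-RDP, together with the standard conversion to (epsilon, delta)-DP.

```python
import math

def gaussian_rdp_epsilon(sigma: float, alpha: float) -> float:
    """Renyi-DP epsilon at order alpha for the Gaussian mechanism
    with noise scale sigma on a sensitivity-1 query (Mironov, 2017)."""
    return alpha / (2 * sigma ** 2)

def rdp_to_dp(alpha: float, rdp_eps: float, delta: float) -> float:
    """Standard conversion from (alpha, rdp_eps)-RDP to (epsilon, delta)-DP."""
    return rdp_eps + math.log(1 / delta) / (alpha - 1)

# Example: sigma = 1, order alpha = 2 gives RDP epsilon of 1.0,
# which converts to a looser (epsilon, 1e-5)-DP guarantee.
rdp_eps = gaussian_rdp_epsilon(sigma=1.0, alpha=2.0)
dp_eps = rdp_to_dp(alpha=2.0, rdp_eps=rdp_eps, delta=1e-5)
```

The talk's contribution, per the abstract, is to translate such accounting quantities into semantic guarantees against data reconstruction rather than membership inference.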

Bio: Chuan Guo is a Research Scientist on the Fundamental AI Research (FAIR) team at Meta. He received his PhD from Cornell University, and his M.S. and B.S. degrees in computer science and mathematics from the University of Waterloo in Canada. His research interests lie in machine learning privacy and security, with recent works centering around the subjects of privacy-preserving machine learning, federated learning, and adversarial robustness. In particular, his work on privacy accounting using Fisher information leakage received the Best Paper Award at UAI in 2021.

Mailing list: Link to mailing list 
Calendar: Link to calendar

You can find all additional details on the website. If you are interested, we recommend signing up for the mailing list and syncing the calendar to stay up to date with the seminar schedule.