TODAY! Joint Princeton Optimization Seminar/CSML Seminar: 4:30pm in Sherrerd Hall 101: Elisa Celis, Yale University
Speaker: Elisa Celis, Assistant Professor of Statistics and Data Science, Yale University
Title: Optimizing for Fairness in ML
Day: Thursday, November 7, 2019
Time: 4:30-5:30 pm
Room: Sherrerd 101
Host: CSML and ORFE, S. S. Wilks Memorial Seminar in Statistics

Bio: Elisa Celis is an Assistant Professor of Statistics and Data Science at Yale University. Her research focuses on problems at the interface of computation and machine learning and their societal ramifications. Specifically, she studies how social and economic biases manifest in our online lives via the algorithms that encode and perpetuate them. Her work spans multiple areas, including social computing and crowdsourcing, data science, and algorithm design, with a current emphasis on fairness and diversity in artificial intelligence and machine learning. Elisa received her B.S. in computer science and mathematics from Harvey Mudd College in 2006, and a Ph.D. in computer science and engineering from the University of Washington in 2012. Prior to joining Yale, she was a research scientist at Xerox Research India, where she managed the crowdsourcing research thrust worldwide, and a senior research scientist in computer and information sciences at École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland, where she co-founded the Computation, Nature and Society Think Tank.

Abstract: Recent events have made evident that algorithms can be discriminatory, reinforce human prejudices, accelerate the spread of misinformation, and are generally not as objective as they are widely thought to be. In this talk, I will present some vignettes from my recent work addressing the problem of social bias in ML models: correcting for housing and employment discrimination in online advertising, auditing social and intersectional biases in contextual NLP systems, and diversifying image search results via simple, fast approaches that do not require labeled training data. This work leads to new ML approaches that can alleviate bias and increase diversity while maintaining theoretical or empirical performance with respect to the original metrics.
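For readers curious what "optimizing for fairness" can look like in practice, below is a minimal, self-contained sketch in the spirit of the abstract: a classifier trained to trade off accuracy against a bias measure. This is not code from the talk or the speaker's papers; the demographic-parity penalty, the synthetic data, and all names are illustrative assumptions.

```python
# Hypothetical sketch (not from the talk): logistic regression with a
# demographic-parity penalty, trading accuracy against a group-bias term.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: features X, labels y, binary protected attribute a.
n, d = 1000, 5
X = rng.normal(size=(n, d))
a = rng.integers(0, 2, size=n)            # protected group membership
y = (X[:, 0] + 0.5 * a + rng.normal(scale=0.5, size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_and_grad(w, lam):
    """Logistic loss plus lam * (squared demographic-parity gap)."""
    p = sigmoid(X @ w)
    # Gradient of the standard logistic loss.
    g_acc = X.T @ (p - y) / n
    # Demographic-parity gap: difference in mean predicted score by group.
    gap = p[a == 1].mean() - p[a == 0].mean()
    s = p * (1 - p)                        # sigmoid derivative
    g_gap = (X[a == 1].T @ s[a == 1]) / (a == 1).sum() \
          - (X[a == 0].T @ s[a == 0]) / (a == 0).sum()
    loss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12)) \
           + lam * gap ** 2
    return loss, g_acc + lam * 2 * gap * g_gap

w = np.zeros(d)
lam = 5.0                                  # fairness/accuracy trade-off knob
for _ in range(500):                       # plain gradient descent
    _, g = loss_and_grad(w, lam)
    w -= 0.5 * g
```

Here lam controls the trade-off the abstract alludes to: lam = 0 recovers ordinary logistic regression, while larger values push the predicted scores of the two groups toward parity at some cost in accuracy.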