Here are next week's CS Department Colloquium Series talks.  As always, you can find the full calendar of events here: https://www.cs.princeton.edu/general/newsevents/events 


Speaker: Yang Liu, Stanford University
Date: Monday, April 10
Time: 12:30pm EDT
Location: CS 105
Host: Gillat Kol
Event page: https://www.cs.princeton.edu/events/26400

Title: Graphs, Optimization, Geometry, and Fast Algorithms

Abstract: Discrete combinatorial structures such as graphs and Boolean matrices are prevalent in modern computation. The massive size of modern data motivates the design of efficient algorithms for processing these combinatorial datasets. In this talk, I will describe how to use techniques from continuous optimization and geometry to gain insights into the structure of problems in these combinatorial settings. Using these insights, I will present new efficient algorithms for several fundamental problems at the intersection of combinatorial algorithms, continuous optimization, and high-dimensional geometry, including maximum flows in almost linear time, discrepancy minimization, and linear regression. I will conclude by discussing the exciting new lines of research and open problems that these techniques have opened up.
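The almost-linear-time max-flow algorithm itself is far beyond the scope of an announcement, but the problem it solves is easy to state. As a point of reference, here is a sketch of the classical Edmonds-Karp baseline in plain Python (the function name and the toy network are illustrative, not from the talk); the new algorithms achieve the same answer in almost-linear rather than polynomial-in-edges time:

```python
from collections import deque

def edmonds_karp(capacity, source, sink):
    """Max flow via shortest BFS augmenting paths (Edmonds-Karp), O(V * E^2).

    capacity: dict mapping node -> {neighbor: edge capacity}.
    """
    # Build a residual graph, adding zero-capacity reverse edges.
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u, nbrs in capacity.items():
        for v in nbrs:
            residual.setdefault(v, {}).setdefault(u, 0)

    flow = 0
    while True:
        # BFS for a shortest augmenting path from source to sink.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow  # no augmenting path remains
        # Find the bottleneck capacity along the path found.
        bottleneck = float("inf")
        v = sink
        while parent[v] is not None:
            bottleneck = min(bottleneck, residual[parent[v]][v])
            v = parent[v]
        # Push flow and update residual capacities (forward and reverse).
        v = sink
        while parent[v] is not None:
            u = parent[v]
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
            v = u
        flow += bottleneck

# A small example: max s-t flow in a 4-node network.
cap = {"s": {"a": 3, "b": 2}, "a": {"b": 1, "t": 2}, "b": {"t": 3}, "t": {}}
print(edmonds_karp(cap, "s", "t"))  # -> 5
```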

Bio: Yang P. Liu is a fifth-year PhD student at Stanford advised by Aaron Sidford. He completed his undergraduate studies at MIT in 2018. He has broad interests in computer science, and his research focuses on the design of efficient algorithms based on graph theory, convex optimization, and high-dimensional geometry. His work has been recognized with ITCS and STOC best student paper awards and a FOCS best paper award.



CITP & CS Colloquium Speaker
Speaker: Amanda Coston, Carnegie Mellon University
Date: Tuesday, April 11
Time: 12:30pm EDT
Location: CS 105
Host: Aleksandra Korolova
Event page: https://www.cs.princeton.edu/events/26375

Title: Responsible Machine Learning through the Lens of Causal Inference

Abstract: Machine learning algorithms are widely used for decision-making in societally high-stakes settings, from child welfare and criminal justice to healthcare and consumer lending. Recent history has illuminated numerous examples where these algorithms proved unreliable or inequitable. In this talk, I show how causal inference enables us to more reliably evaluate such algorithms' performance and equity implications.

In the first part of the talk, I demonstrate that standard evaluation procedures fail to address missing data and, as a result, often produce invalid assessments of algorithmic performance. I propose a new evaluation framework that addresses missing data by using counterfactual techniques to estimate unknown outcomes. Using this framework, I propose counterfactual analogues of common predictive performance and algorithmic fairness metrics that are tailored to decision-making settings. I provide double machine learning-style estimators for these metrics that achieve fast rates and asymptotic normality under flexible nonparametric conditions. I present empirical results in the child welfare setting using data from Allegheny County's Department of Human Services.
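The missing-data issue here can be made concrete with a toy estimator. The sketch below uses plain inverse-propensity weighting, a simpler building block than the double-ML estimators the talk develops; the function name and the tiny dataset are hypothetical, purely for illustration:

```python
def ipw_accuracy(records):
    """Inverse-propensity-weighted accuracy when outcomes are selectively observed.

    Each record is (prediction, outcome, observed, propensity), where `outcome`
    is trusted only when `observed` is True, and `propensity` is the (assumed
    known) probability that this case's outcome was observed. Weighting each
    observed case by 1/propensity corrects the bias that a naive average over
    observed cases alone would incur when observation depends on the outcome.
    """
    num = den = 0.0
    for pred, outcome, observed, prop in records:
        if observed:
            w = 1.0 / prop
            num += w * (pred == outcome)
            den += w
    return num / den

# Toy data: the correct prediction sits in an under-observed stratum
# (propensity 0.5, so it stands in for two similar unobserved cases).
records = [
    (1, 1, True, 0.5),   # correct, under-observed -> weight 2
    (1, 0, True, 1.0),   # wrong, always observed  -> weight 1
    (0, 0, False, 0.5),  # outcome never observed  -> skipped
]
print(round(ipw_accuracy(records), 3))  # -> 0.667, vs. a naive 0.5
```

The naive observed-cases-only accuracy here is 1/2; reweighting recovers 2/3, illustrating how outcome-dependent missingness distorts an unadjusted evaluation.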

In the second half of the talk, I propose novel causal inference methods to audit for bias in key decision points in contexts where machine learning algorithms are used. A common challenge is that data about decisions are often observed under outcome-dependent sampling. I develop a counterfactual audit for biased decision-making in settings with outcome-dependent data. Using data from the Stanford Open Policing Project, I demonstrate how this method can identify racial bias in the most common entry point to the criminal justice system: police traffic stops. To conclude, I situate my work in the broader question of governance in responsible machine learning.

Bio: Amanda Coston is a PhD student in Machine Learning and Public Policy at Carnegie Mellon University. Her research investigates how to make algorithmic decision-making more reliable and more equitable using causal inference and machine learning. Prior to her PhD, she worked at Microsoft, the consultancy Teneo, and the Nairobi-based startup HiviSasa. She earned a B.S.E. from Princeton in computer science with a certificate in public policy. Amanda is a Meta Research PhD Fellow, a K&L Gates Presidential Fellow in Ethics and Computational Technologies, and an NSF GRFP Fellow, and has received several Rising Star honors.

This seminar is cosponsored by the Center for Information Technology Policy and the Department of Computer Science.



Speaker: Pravesh Kothari, Carnegie Mellon University
Date: Thursday, April 13
Time: 12:30pm EDT
Location: CS 105
Host: Mark Braverman
Event page: https://www.cs.princeton.edu/events/26390

Title: Towards a Unified Approach to Average-Case Algorithm Design

Abstract: Solving non-convex optimization problems on probabilistic models of inputs lies at the heart of foundational algorithmic challenges arising in high-dimensional statistical data analysis, beyond-worst-case combinatorial optimization, cryptography, and statistical physics.

In this talk, I will present a new method for average-case algorithm design that relies on a concrete polynomial time meta-algorithm called the sum-of-squares method. This method yields substantially improved and often nearly optimal guarantees for a wide range of problems.

I will focus on the impact of this method on two prominent areas of average-case algorithm design:

1) High-dimensional statistical estimation, where this method has led to efficient algorithms for classical data analysis tasks that provably tolerate adversarial data corruption while incurring the minimum possible error. The resulting applications range from new high-dimensional robust estimators for basic tasks such as computing the mean, covariance, and moments of data to more sophisticated tasks such as regression, clustering, sparse recovery, and fitting mixture models. Most recently, this theory led to the first efficient algorithm for robustly learning a high-dimensional mixture of Gaussians, resolving a central open question in the area with a history going back to Pearson's famous work of 1894.
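As one-dimensional intuition for what "tolerating adversarial corruption" means, consider the trimmed mean, a classical toy baseline (not the sum-of-squares machinery the talk develops, which is what makes comparable guarantees possible in high dimensions, where naive trimming breaks down):

```python
def trimmed_mean(xs, eps):
    """Mean after discarding the eps-fraction smallest and largest samples.

    In one dimension this simple estimator already tolerates an eps-fraction
    of adversarially corrupted samples, while the ordinary empirical mean can
    be moved arbitrarily far by a single planted outlier.
    """
    xs = sorted(xs)
    k = int(len(xs) * eps)
    kept = xs[k:len(xs) - k] if k > 0 else xs
    return sum(kept) / len(kept)

# Clean samples near 1.0, plus two gross outliers planted by an adversary.
samples = [0.9, 1.0, 1.1, 1.0, 0.95, 1.05, 1.0, 1.02, 500.0, -400.0]
print(sum(samples) / len(samples))   # empirical mean is destroyed (~10.8)
print(trimmed_mean(samples, 0.2))    # stays close to the true mean of 1.0
```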

2) Beyond worst-case combinatorial optimization, where this method has led to new efficient algorithms that escape worst-case hardness while avoiding "overfitting" to brittle properties of any specific random model. Most recently, this line of work resulted in a resolution of longstanding open questions of finding optimal algorithms for "smoothed" models of k-SAT and "semirandom" models of Max-Clique. 

Taken together, these results suggest a unified theory of average-case algorithm design that not only makes substantial progress on long-open foundational challenges but also brings an unanticipated conceptual unity to algorithm design.

Bio: Pravesh Kothari has been an Assistant Professor of Computer Science at Carnegie Mellon University since September 2019. Before joining CMU, he was a postdoctoral Research Instructor jointly hosted by Princeton University and the Institute for Advanced Study from 2016 to 2019. He obtained his Ph.D. in 2016 from the University of Texas at Austin. Kothari's recent work has focused on algorithm design for problems with statistical inputs, which is also the subject of his recent monograph "Semialgebraic Proofs and Efficient Algorithm Design". His research has been recognized with a Simons Award for graduate students in Theoretical Computer Science, a Google Research Scholar Award, an NSF CAREER Award, and an Alfred P. Sloan Research Fellowship.