CS Colloquium Speaker Rika Antonova TODAY at 12:30pm
Speaker: Rika Antonova, Stanford University
Date: Tuesday, April 4
Time: 12:30pm EST
Location: CS 105
Host: Szymon Rusinkiewicz
Event page: https://www.cs.princeton.edu/events/26378

Title: Enabling Self-Sufficient Robot Learning

Abstract: Autonomous exploration and data-efficient learning are important ingredients for helping machine learning handle the complexity and variety of real-world interactions. In this talk, I will describe methods that provide these ingredients and serve as building blocks for enabling self-sufficient robot learning.

First, I will outline a family of methods that facilitate active global exploration. Specifically, they enable ultra data-efficient Bayesian optimization in reality by leveraging experience from simulation to shape the space of decisions. In robotics, these methods enable success with a budget of only 10-20 real robot trials for a range of tasks: bipedal and hexapod walking, task-oriented grasping, and nonprehensile manipulation.

Next, I will describe how to bring simulations closer to reality. This is especially important for scenarios with highly deformable objects, where simulation parameters influence the dynamics in unintuitive ways. The success here hinges on finding a good representation for the state of deformables. I will describe adaptive distribution embeddings that provide an effective way to incorporate noisy state observations into modern Bayesian tools for simulation parameter inference. This novel representation ensures success in estimating posterior distributions over simulation parameters, such as elasticity, friction, and scale, even for scenarios with highly deformable objects and using only a small set of real-world trajectories.

Lastly, I will share a vision of using distribution embeddings to make the space of stochastic policies in reinforcement learning suitable for global optimization. This research direction involves formalizing and learning novel distance metrics on this space and will support principled ways of seeking diverse behaviors. This can unlock truly autonomous learning, where learning agents have incentives to explore, build useful internal representations, and discover a variety of effective ways of interacting with the world.

Bio: Rika is a postdoctoral scholar at Stanford University and a recipient of the NSF/CRA Computing Innovation Fellowship. Rika completed her Ph.D. work on data-efficient simulation-to-reality transfer at KTH. Earlier, she obtained a research Master's degree from the Robotics Institute at Carnegie Mellon University. Before that, Rika was a software engineer at Google, first in the Search Personalization group, then in the Character Recognition team (developing the open-source OCR engine Tesseract).
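For readers who want a concrete picture of the data-efficient Bayesian optimization loop mentioned in the abstract, here is a minimal, hypothetical Python sketch. It is not the speaker's method; it only illustrates the general idea of fitting a Gaussian-process surrogate in a feature space produced by a cheap simulator and spending a small budget of "real" trials via expected improvement. The toy simulator, objective, and parameter ranges are invented placeholders, and the sketch assumes numpy, scipy, and scikit-learn are available.

# Minimal, hypothetical sketch of data-efficient Bayesian optimization in which a cheap
# simulator's output is used as the feature space for the Gaussian-process surrogate
# (a rough stand-in for "leveraging experience from simulation to shape the space of decisions").
# The simulator and the "real trial" below are toy placeholders, not anything from the talk.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def simulate(params):
    # Toy "simulator": cheap, approximate model of how controller parameters behave.
    return np.sin(3.0 * params[:, :1]) + 0.5 * params[:, 1:]

def real_trial(param):
    # Toy stand-in for an expensive real-robot trial (higher is better), with noise.
    return float(np.sin(3.0 * param[0]) + 0.5 * param[1] - 0.1 * rng.standard_normal())

def expected_improvement(mu, sigma, best):
    # Standard EI acquisition for maximization.
    sigma = np.maximum(sigma, 1e-9)
    z = (mu - best) / sigma
    return (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

candidates = rng.uniform(-1.0, 1.0, size=(2000, 2))   # decision (parameter) space
X_feat = simulate(candidates)                          # sim-informed features for the GP
tried_idx = list(rng.choice(len(candidates), 3, replace=False))
y = [real_trial(candidates[i]) for i in tried_idx]

for _ in range(12):                                    # keeps the real-trial budget in the 10-20 range
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(X_feat[tried_idx], y)
    mu, sigma = gp.predict(X_feat, return_std=True)
    ei = expected_improvement(mu, sigma, max(y))
    ei[tried_idx] = -np.inf                            # do not re-try evaluated points
    nxt = int(np.argmax(ei))
    tried_idx.append(nxt)
    y.append(real_trial(candidates[nxt]))

print("best params:", candidates[tried_idx[int(np.argmax(y))]], "value:", max(y))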
CS & CITP Colloquium Speaker Speaker: Amanda Coston, Carnegie Mellon University Date: Tuesday, April 11 Time: 12:30pm EST Location: CS 105 Host: Aleksandra Korolova Event page: https://www.cs.princeton.edu/events/26375 Title: Responsible Machine Learning through the Lens of Causal Inference Abstract: Machine learning algorithms are widely used for decision-making in societally high-stakes settings from child welfare and criminal justice to healthcare and consumer lending. Recent history has illuminated numerous examples where these algorithms proved unreliable or inequitable. In this talk I show how causal inference enables us to more reliably evaluate such algorithms’ performance and equity implications. In the first part of the talk, I demonstrate that standard evaluation procedures fail to address missing data and as a result, often produce invalid assessments of algorithmic performance. I propose a new evaluation framework that addresses missing data by using counterfactual techniques to estimate unknown outcomes. Using this framework, I propose counterfactual analogues of common predictive performance and algorithmic fairness metrics that are tailored to decision-making settings. I provide double machine learning-style estimators for these metrics that achieve fast rates & asymptotic normality under flexible nonparametric conditions. I present empirical results in the child welfare setting using data from Allegheny County’s Department of Human Services. In the second half of the talk, I propose novel causal inference methods to audit for bias in key decision points in contexts where machine learning algorithms are used. A common challenge is that data about decisions are often observed under outcome-dependent sampling. I develop a counterfactual audit for biased decision-making in settings with outcome-dependent data. Using data from the Stanford Open Policing Project, I demonstrate how this method can identify racial bias in the most common entry point to the criminal justice system: police traffic stops. To conclude, I situate my work in the broader question of governance in responsible machine learning. Bio: Amanda Coston is a PhD student in Machine Learning and Public Policy at Carnegie Mellon University. Her research investigates how to make algorithmic decision-making more reliable and more equitable using causal inference and machine learning. Prior to her PhD, she worked at Microsoft, the consultancy Teneo, and the Nairobi-based startup HiviSasa. She earned a B.S.E from Princeton in computer science with a certificate in public policy. Amanda is a Meta Research PhD Fellow, K & L Gates Presidential Fellow in Ethics and Computational Technologies, and NSF GRFP Fellow, and has received several Rising Star honors.
Speaker: Pravesh Kothari, Carnegie Mellon University
Date: Thursday, April 13
Time: 12:30pm EST
Location: CS 105
Host: Mark Braverman
Event page: https://www.cs.princeton.edu/events/26390

Title: Towards a Unified Approach to Average-Case Algorithm Design

Abstract: Solving non-convex optimization problems on probabilistic models of inputs lies at the heart of foundational algorithmic challenges arising in high-dimensional statistical data analysis, beyond-worst-case combinatorial optimization, cryptography, and statistical physics. In this talk, I will present a new method for average-case algorithm design that relies on a concrete polynomial-time meta-algorithm called the sum-of-squares method. This method yields substantially improved and often nearly optimal guarantees for a wide range of problems. I will focus on the impact of this method on two prominent areas of average-case algorithm design:

1) High-dimensional statistical estimation, where this method has led to efficient algorithms for classical data analysis tasks that provably tolerate adversarial data corruption while incurring the minimal possible error. The resulting applications range from new robust estimators in high dimensions for basic tasks such as computing the mean, covariance, and moments of data to more sophisticated tasks such as regression, clustering, sparse recovery, and fitting mixture models. Most recently, this theory led to the first efficient algorithm for robustly learning a high-dimensional mixture of Gaussians. This resolves a central open question in the area, which has a history going back to a famous work of Pearson from 1894.

2) Beyond-worst-case combinatorial optimization, where this method has led to new efficient algorithms that escape worst-case hardness while avoiding "overfitting" to brittle properties of any specific random model. Most recently, this line of work resulted in a resolution of longstanding open questions of finding optimal algorithms for "smoothed" models of k-SAT and "semirandom" models of Max-Clique.

Taken together, these results suggest a unified theory for average-case algorithm design that not only makes substantial progress on long-open foundational challenges but also brings a conceptual unity to algorithm design that we had never anticipated.

Bio: Pravesh Kothari has been an Assistant Professor of Computer Science at Carnegie Mellon University since September 2019. Before joining CMU, he was a postdoctoral Research Instructor jointly hosted by Princeton University and the Institute for Advanced Study from 2016 to 2019. He obtained his Ph.D. in 2016 from the University of Texas at Austin. Kothari's recent work has focused on algorithm design for problems with statistical inputs, and it is also the subject of his recent monograph "Semialgebraic Proofs and Efficient Algorithm Design". His research has been recognized with a Simons Award for graduate students in Theoretical Computer Science, a Google Research Scholar Award, an NSF CAREER Award, and an Alfred P. Sloan Research Fellowship.
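To give non-experts a feel for the sum-of-squares meta-algorithm named in the abstract, the sketch below solves the simplest degree-2 sum-of-squares (equivalently, SDP) relaxation, the classic Goemans-Williamson relaxation of Max-Cut, followed by random hyperplane rounding. This example is not from the talk and is far simpler than the relaxations it discusses; it assumes numpy and cvxpy (with an SDP-capable solver such as SCS) are installed.

# Degree-2 sum-of-squares / SDP relaxation of Max-Cut on a small random graph,
# plus random hyperplane rounding. Editor's illustrative sketch only.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
n = 12
A = np.triu((rng.random((n, n)) < 0.4).astype(float), k=1)  # random 0/1 adjacency
A = A + A.T                                                  # symmetric, each edge counted twice

# Relaxation: replace x_i in {-1, +1} by a PSD moment matrix X with X_ii = 1,
# and maximize sum over edges of (1 - X_ij)/2.
X = cp.Variable((n, n), PSD=True)
objective = cp.Maximize(cp.sum(cp.multiply(A, 1 - X)) / 4)   # /4 because A counts each edge twice
constraints = [cp.diag(X) == 1]
sdp_value = cp.Problem(objective, constraints).solve()

# Round the SDP solution to an actual cut via a random hyperplane.
w, U = np.linalg.eigh(X.value)
V = U * np.sqrt(np.clip(w, 0.0, None))                       # rows are unit-ish vectors with V V^T ~ X
signs = np.sign(V @ rng.normal(size=n))
cut_value = sum(A[i, j] for i in range(n) for j in range(i + 1, n) if signs[i] != signs[j])
print("SDP upper bound:", sdp_value, " rounded cut value:", cut_value)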