CS Colloquium Speaker

Kevin Ellis, Massachusetts Institute of Technology

Tuesday, February 11 - 12:30pm

Computer Science - Room 105

Host: Tom Griffiths

https://www.cs.princeton.edu/events/25918

 

Title: Building Machines that Discover Generalizable, Interpretable Knowledge

 

Humans can learn to solve an endless range of problems -- building, drawing, designing, coding, and cooking, to name a few -- and we need relatively modest amounts of experience to acquire any one new skill. Machines that can similarly master a diverse span of problems are surely far off.

 

Here, however, I will argue that program induction--an emerging AI technique--will play a role in building this more human-like AI. Program induction systems represent knowledge as programs, and learn by synthesizing code. Across three case studies in vision, natural language, and learning-to-learn, this talk will present program induction systems that take a step toward machines that can: acquire new knowledge from modest amounts of experience; strongly generalize that knowledge to extrapolate beyond their training; learn to represent their knowledge in an interpretable format; and apply to a broad spread of problems, from drawing pictures to discovering equations. Driving these developments is a new neuro-symbolic algorithm for Bayesian program synthesis. This algorithm integrates maturing program synthesis technologies with several complementary AI traditions (symbolic, probabilistic, and neural).
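
To make "learning by synthesizing code" concrete, the sketch below (illustrative only, not code from the talk) does programming-by-examples by brute force: it enumerates compositions from a small, hypothetical DSL of arithmetic primitives until one is consistent with every input-output example. The systems described in the abstract are far more capable -- neural networks guide the search and a Bayesian score ranks candidate programs -- but the basic loop of proposing programs and checking them against examples is the same.

# Illustrative sketch: programming-by-examples over a tiny hypothetical DSL.
from itertools import product

PRIMITIVES = {                      # each primitive maps an int to an int
    "add1":   lambda x: x + 1,
    "double": lambda x: x * 2,
    "square": lambda x: x * x,
}

def synthesize(examples, max_depth=3):
    """Return the shortest composition of primitives consistent with all
    (input, output) examples, or None if none exists up to max_depth."""
    for depth in range(1, max_depth + 1):
        for names in product(PRIMITIVES, repeat=depth):
            def run(x, names=names):
                for name in names:
                    x = PRIMITIVES[name](x)
                return x
            if all(run(i) == o for i, o in examples):
                return " then ".join(names)
    return None

# Two examples, (3 -> 16) and (4 -> 25), are enough to pin down the program.
print(synthesize([(3, 16), (4, 25)]))   # prints: add1 then square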

 

Building a human-like machine learner is a very distant, long-term goal for the field. In the near term, program induction comes with a roadmap of practical problems to push on, such as language learning, scene understanding, and programming-by-examples, which this talk explores. But it's worth keeping these long-term goals in mind as well.

 

Bio: Kevin Ellis works across artificial intelligence, program synthesis, and machine learning. He develops learning algorithms that teach machines to write code, and applies these algorithms to problems in artificial intelligence. His work has appeared in machine learning venues (NeurIPS, ICLR, IJCAI) and cognitive science venues (CogSci, TOPICS). He has collaborated with researchers at Harvard, Brown, McGill, Siemens, and MIT, where he is a final-year graduate student advised by Josh Tenenbaum and Armando Solar-Lezama.