Kaiyu Yang will present his FPO "Neurosymbolic Machine Learning for Reasoning" on Thursday, July 28, 2022 at 3:00 PM in Friend 125 and on Zoom.

Zoom link: https://princeton.zoom.us/j/2731344683

The members of Kaiyu’s committee are as follows:
Examiners: Jia Deng (Adviser), Olga Russakovsky, Danqi Chen
Readers: Karthik Narasimhan, Mayur Naik (University of Pennsylvania)

A copy of his thesis is available upon request; please email gradinfo@cs.princeton.edu if you would like one.
 
Everyone is invited to attend his talk. 
 
The abstract follows:
Machine learning has made significant progress in the past decade. Its most successful paradigm, deep neural networks, consists of layers of continuous representations whose parameters are optimized on massive datasets via gradient descent. Deep neural networks have achieved strong performance on numerous tasks, such as object recognition, language understanding, and autonomous driving. However, they still struggle with reasoning tasks, which often require manipulating symbols and chaining multiple steps compositionally, e.g., solving math equations or writing computer programs. In this dissertation, we aim to bridge this gap and teach machines to reason in ways that are precise, systematic, interpretable, and robust to ambiguity in real-world environments.

To that end, we take a neurosymbolic approach that combines the complementary strengths of machine learning and symbolic reasoning. Symbolic reasoning is precise and generalizes systematically, but it has been restricted to domains amenable to rigid formalization. In contrast, predominant machine learning methods are flexible but notoriously uninterpretable, data-hungry, and incapable of generalizing outside the training distribution. Integrating the strengths of both approaches is essential for building flexible reasoning machines that generalize precisely and systematically.

Concretely, this dissertation addresses neurosymbolic reasoning from two angles. First, we apply machine learning to tasks related to symbolic reasoning, such as automated theorem proving (Chapter 2). Second, we introduce inductive biases inspired by symbolic reasoning into machine learning models to improve their interpretability, generalization, and data efficiency (Chapters 3 and 4). Our results highlight the benefits of (1) neurosymbolic model architectures, (2) reasoning at a suitable level of abstraction, and (3) explicit, compositional representations of reasoning, such as symbolic proofs.


Louis Riehl
Graduate Administrator
Computer Science Department, CS213
Princeton University
(609) 258-8014