Ted Sumers will present his FPO "Grounding Communication in Real-World Action" on Wednesday, April 24, 2024 at 11:00 AM in CS 301.

The members of Ted's committee are as follows:

Examiners: Tom Griffiths (Adviser), Ryan Adams, Adele Goldberg
Readers: Karthik Narasimhan, Dylan Hadfield-Menell (MIT), Tom Griffiths (Adviser)

A copy of his thesis is available upon request. Please email gradinfo@cs.princeton.edu if you would like a copy. Everyone is invited to attend his talk.

Abstract follows below:

This dissertation bridges psychology and artificial intelligence (AI) to develop agents capable of learning through communication with humans. The first half establishes a foundation by comparing the efficacy of language and demonstration for transmitting complex concepts. Experiments reveal language's superior ability to convey abstract rules, suggesting its importance for social learning. I then connect computational models of pragmatic language understanding to reinforcement learning settings, grounding a speaker's utility in their listener's decision problem. Behavioral evidence validates this as a model of human language use.

Building on these insights, the second half develops AI agents capable of learning from such language. I first extend the computational model to incorporate both commands and teaching. Experiments show this allows an AI listener to robustly infer the human's latent reward function. I then introduce the problem of learning from fully natural language and contribute two novel approaches: one utilizing aspect-based sentiment analysis and another using an inference network learned end-to-end. Behavioral evaluations demonstrate these models successfully learn from interactive human feedback.

Together, this dissertation provides a formal computational theory of the cognitive mechanisms supporting human social learning and embeds them in artificial agents. I discuss implications both for large language models and the continued development of AI agents that acquire and use information through genuine dialogue. This work suggests that building machines to learn as humans do - socially and linguistically - is a promising path towards beneficial artificial intelligence.