Paul Krueger will present his FPO "Metacognition: toward a computational framework for improving our minds" on Wednesday, January 18, 2023 at 2:00 PM in CS 402.

Location: CS 402

The members of Paul’s committee are as follows:

Examiners: Tom Griffiths (Adviser), Jonathan Cohen, Ryan Adams

Readers: Karthik Narasimhan, Nathaniel Daw

A copy of his thesis will be available upon request two weeks before the FPO; please email gradinfo@cs.princeton.edu to request one.

Everyone is invited to attend his talk.

Abstract follows below:

Reinforcement learning has been used successfully to account for human decision-making and to produce intelligent behavior in artificial agents. However, these models generally leave people's internal cognitive processes obscure. The same formalism used to account for external actions can also be applied to the cognitive processes themselves. In particular, the recently developed framework of resource rationality describes how people make efficient use of limited cognitive resources. Using this approach, we apply tools from machine learning to derive a normative account of how people ought to think efficiently.

As a proof of concept, I will show how this framework can be used to automatically derive decision-making heuristics. I will present results from a classic multi-alternative risky choice task that allows us to trace people's cognitive processes. Using our method, we both rediscovered previously known heuristics and discovered heuristics that had been overlooked. We found that people did indeed use the heuristics our method predicted, and moreover adapted their heuristics to the structure of the decision environment in a rational way. Our approach bridges classic rationality with research on heuristics and biases: it provides a general framework for applying rationality to any decision-making process while accounting for heuristics, and thus a more realistic normative standard.

In the second part of the talk, I will discuss a new framework in which model-based and model-free reinforcement learning systems interact cooperatively, and its potential to account for human decision-making. In particular, this cooperation may mediate the tension between goals and habits. I will close by discussing ongoing work to develop tools for solving meta-MDPs, the formalism underlying our resource-rational approach.
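
For readers who would like a concrete picture of the meta-MDP formalism mentioned above, the sketch below shows a toy meta-level decision problem in Python: "thinking" means drawing noisy evidence samples at a fixed cost, and the agent stops computing once a myopic value-of-computation (VOC) estimate turns negative. This is a minimal illustration of resource-rational metareasoning, not the method from the thesis; the Gaussian evidence model, the cost and noise constants, and the myopic stopping rule are all assumptions made for the example.

```python
import math
import random

# Toy meta-level MDP for a single decision: choose a risky option with
# unknown value d (prior d ~ N(0, 1)) or a safe option worth 0.
# Each internal computation draws a noisy sample x ~ N(d, TAU2) at a
# fixed cost. The meta-level policy keeps computing while the myopic
# value of computation (VOC) is positive, then commits to an action.
# NOTE: every constant and the myopic rule here are illustrative
# assumptions, not the formulation used in the thesis.

COST = 0.02  # cost per internal computation (assumed)
TAU2 = 1.0   # variance of the evidence noise (assumed)


def phi(z: float) -> float:
    """Standard normal pdf."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)


def Phi(z: float) -> float:
    """Standard normal cdf."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))


def stop_value(mu: float) -> float:
    """Expected reward of acting now: take the risky option iff mu > 0."""
    return max(mu, 0.0)


def myopic_voc(mu: float, v: float) -> float:
    """Expected gain in stop_value from one more sample, minus its cost.

    One more sample moves the posterior mean to mu' ~ N(mu, s^2) with
    s = v / sqrt(v + TAU2), and E[max(mu', 0)] has a closed form.
    """
    s = v / math.sqrt(v + TAU2)
    e_max = mu * Phi(mu / s) + s * phi(mu / s)
    return e_max - stop_value(mu) - COST


def update(mu: float, v: float, x: float) -> tuple[float, float]:
    """Conjugate Gaussian update of the belief (mu, v) after sample x."""
    k = v / (v + TAU2)
    return mu + k * (x - mu), v * TAU2 / (v + TAU2)


def decide(true_d: float, seed: int = 0) -> tuple[str, int]:
    """Run the meta-level policy: think while it pays off, then act."""
    rng = random.Random(seed)
    mu, v = 0.0, 1.0  # prior belief about d
    n_computations = 0
    while myopic_voc(mu, v) > 0.0:
        x = true_d + rng.gauss(0.0, math.sqrt(TAU2))
        mu, v = update(mu, v, x)
        n_computations += 1
    return ("risky" if mu > 0 else "safe"), n_computations


if __name__ == "__main__":
    choice, n = decide(true_d=0.5)
    print(f"chose the {choice} option after {n} computations")
```

A greedy one-step rule like this is known to undervalue computation in some settings, which hints at why developing better tools for solving meta-MDPs, the ongoing work mentioned at the end of the abstract, is a research problem in its own right.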