Sinong Geng will present his FPO, "Model‐Regularized Machine Learning for Decision‐Making," on Thursday, April 13, 2023 at 2:30 PM in COS 402 and via Zoom.

Zoom link: https://princeton.zoom.us/j/95544518239

The members of Sinong’s committee are as follows:
Examiners: Ronnie Sircar (Adviser), Ryan Adams, Karthik Narasimhan
Readers: Sanjeev Kulkarni, Tom Griffiths

A copy of his thesis is available upon request; please email gradinfo@cs.princeton.edu if you would like one.

Everyone is invited to attend his talk.

Abstract follows below:
Thanks to the increasing availability of high‐dimensional data, recent developments in machine learning (ML) have redefined decision‐making in numerous domains. However, the unreliability of ML in decision‐making caused by the lack of high‐quality data remains an important obstacle in almost every application. Several questions arise: (i) Why does an ML method fail to replicate decision‐making behaviors in a new environment? (ii) Why does ML give unreasonable interpretations of existing expert decisions? (iii) How can we make decisions in a noisy, high‐dimensional environment? Many of these issues can be attributed to the lack of an effective and sample‐efficient model underlying ML methods.

This thesis presents our research efforts dedicated to developing model‐regularized ML for decision‐making to address the above issues in the areas of inverse reinforcement learning and reinforcement learning, with applications to customer/company behavior analysis and portfolio optimization. Specifically, by applying regularizations derived from suitable models, we propose methods for two different goals: (i) to better understand and replicate the existing decision‐making of human experts and businesses; and (ii) to conduct better sequential decision‐making while overcoming the need for large amounts of high‐quality data in situations where not enough is available.