Juan Duque will present his MSE Talk "Transformer Representations for Efficient Reinforcement Learning" on Thursday, April 21, 2022 at 3pm via Zoom.

Committee: Karthik Narasimhan (adviser) and Elad Hazan (reader)

Zoom link: https://princeton.zoom.us/j/93980023607

All are welcome to attend.

Abstract:
Reinforcement Learning algorithms are often trained from scratch at high computational cost. In this talk, I will present a self-supervised learning framework that uses the Transformer architecture to generate useful representations for Reinforcement Learning. Our formulation borrows objectives from Natural Language Processing and Imitation Learning to pre-train a Transformer model on offline trajectories; the model is then fine-tuned on Atari games. We demonstrate that this setup enables quick online convergence in under 100,000 episodes.
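To make the pre-training idea concrete, here is a minimal illustrative sketch (not the speaker's actual method or code) of one Imitation-Learning-style objective one could apply to offline trajectories: predicting the logged action from the current state, i.e. behavior cloning with a cross-entropy loss. The linear model, array shapes, and the name `bc_loss` are all hypothetical stand-ins; in the talk's setting the predictor would be a Transformer over trajectory sequences.

```python
import numpy as np

# Hypothetical sketch: behavior-cloning-style pre-training objective on
# offline (state, action) pairs. A linear map stands in for a Transformer.
rng = np.random.default_rng(0)

n_actions = 4
state_dim = 8
W = rng.normal(scale=0.1, size=(state_dim, n_actions))  # stand-in for the model

def bc_loss(states, actions):
    """Cross-entropy between predicted action logits and the logged actions."""
    logits = states @ W
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(actions)), actions].mean()

# Offline trajectory data: states and actions logged by some behavior policy.
states = rng.normal(size=(32, state_dim))
actions = rng.integers(0, n_actions, size=32)
loss = bc_loss(states, actions)
print(loss)
```

Minimizing such a loss over large offline datasets gives the model a useful action-aware representation before any online fine-tuning on Atari begins.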