Juan Duque will present his MSE talk, "Transformer Representations for Efficient Reinforcement Learning," on Thursday, April 21st at 3:00 PM (EDT) via Zoom.

Zoom Link: https://princeton.zoom.us/j/93980023607

Committee Members: Karthik Narasimhan (adviser) and Elad Hazan (reader)

All are welcome to attend.

Transformer Representations for Efficient Reinforcement Learning
Reinforcement Learning algorithms are often trained from scratch at high computational cost. In this talk I will present a self-supervised learning framework that uses the Transformer architecture to generate useful representations for Reinforcement Learning. Our formulation borrows objectives from Natural Language Processing and Imitation Learning to pre-train a Transformer model on offline trajectories; the model is then fine-tuned on Atari games. We demonstrate how this setup enables quick online convergence in under 100 thousand episodes.

Louis Riehl
Graduate Administrator
Computer Science Department, CS213
Princeton University
(609) 258-8014