Cyril Zhang will present his FPO "Regret-Minimizing Algorithms Beyond Classical Optimization and Control" on Thursday, 9/10/2020 at 4:40pm via Zoom.

Link to Zoom: https://princeton.zoom.us/j/95603512563

The members of his committee are as follows: Readers: Elad Hazan (Adviser) and Yoram Singer; Examiners: Karthik Narasimhan, Mark Braverman, and Sanjeev Arora.

A copy of his thesis is available upon request; please email ngotsis@cs.princeton.edu if you would like a copy.

Everyone is invited to attend his talk. The abstract follows below.

Abstract:

Temporally correlated data and non-convex programs are two core challenges which lie at the frontier of reconciling theory and practice in machine learning. In this thesis, we present varied perspectives on algorithms and provable learning guarantees in stateful and non-convex environments, such as those encountered in reinforcement learning, control theory, time-series analysis, and deep learning. These ideas are unified by the framework of online convex optimization, which provides robust algorithmic primitives and regret guarantees under minimal assumptions on the data.

The recurring theme of leveraging regret minimization beyond classical settings will manifest in several ways, eclectic in their scope. First, we develop a solution concept and efficient algorithms for online non-convex optimization in its most generic form. Next, in a line of theoretical work on prediction and control in the presence of linear time-invariant state transitions, beyond the linear-quadratic-Gaussian assumptions of classical control theory, we use online learning with convex relaxations to bypass the computational and statistical brittleness of the usual system identification pipelines. Finally, we use online convex optimization as a testbed for principled algorithm design, towards improving and better understanding ubiquitous training heuristics in deep learning.