Karan Singh will present his FPO "The Nonstochastic Control Problem" on Tuesday, November 30, 2021, at 11 AM via Zoom.

Zoom link: https://princeton.zoom.us/meeting/97274954815

The members of his committee are as follows:

Examiners: Elad Hazan (Adviser), Anirudha Majumdar (MAE), Sanjeev Arora

Readers: Karthik Narasimhan, Sham Kakade (University of Washington, Microsoft Research)

A copy of Karan’s thesis is available upon request; please email gradinfo@cs.princeton.edu if you would like a copy.

Everyone is invited to attend the talk.

Abstract follows below:

Controlling a dynamical system is a fundamental problem with wide-reaching applications in autonomous systems, engineering, and the sciences. This thesis outlines algorithms, and their analyses, for feedback-driven learning that make progress on several fronts: robustness, adaptivity, generality, and sample efficiency. The focal point of the thesis is the introduction of the nonstochastic control framework, which builds an algorithmic, rather than the traditionally analytic, foundation for control theory. This segment provides robust algorithmic primitives for basic control subroutines. Concretely, it presents:

1. An efficient algorithm for controlling linear systems in the presence of nonstochastic perturbations, with an instance-optimal regret guarantee. Beyond the realm of both robust and stochastic control, such a data-driven notion of optimality offers worst-case guarantees together with the promise of exceptional performance on benign problem instances.

2. The first logarithmic regret result for the classical tracking problem, using optimization-based tools that go beyond dynamic-programming-based approaches.

3. Extensions of the above to the case of unknown system dynamics.

The second part of the thesis addresses the challenge of learning to act in non-stationary environments given a handful of trials, a version of the sim-to-real challenge. It provides a provable algorithm that negotiates this challenge using the algorithmic foundation delineated above, and demonstrates its efficacy over known approaches in non-linear settings.

The final segment concerns generalizing the notion of feedback in interactive learning beyond scalar rewards. It provides an efficient reductions-based approach to tackling concave reward functionals, such as those increasingly found in exploration subroutines. This is accomplished via a novel generalization of classical boosting and gradient boosting techniques to the online convex optimization setting.
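
For readers less familiar with the area, the following is a minimal sketch of the nonstochastic control setting as it is commonly formulated in the online control literature; the notation is illustrative and not taken from the thesis itself. The state x_t of a linear dynamical system evolves under a control input u_t and an adversarially chosen, bounded perturbation w_t, and the learner, facing adversarially chosen convex costs c_t, competes against the best policy from a fixed comparator class in hindsight:

\[
  x_{t+1} = A x_t + B u_t + w_t, \qquad
  \mathrm{Regret}_T = \sum_{t=1}^{T} c_t(x_t, u_t) \;-\; \min_{\pi \in \Pi} \sum_{t=1}^{T} c_t\big(x_t^{\pi}, u_t^{\pi}\big).
\]

In contrast to stochastic control, no distributional assumption is placed on the perturbations w_t, which motivates the regret-based, algorithmic viewpoint the abstract describes.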