Karan Singh will present his Pre-FPO, "The Nonstochastic Control Problem," on Monday, December 14, 2020, at 11am via Zoom.

Zoom link:
https://princeton.zoom.us/j/96758411857

The members of his committee are as follows:

Examiners: Elad Hazan (advisor), Sanjeev Arora, Anirudha Majumdar (MAE)
Readers: Karthik Narasimhan, Sham Kakade (University of Washington)

Title: The Nonstochastic Control Problem

Abstract: Control in dynamical systems is a fundamental problem, with wide-reaching applications in autonomous systems, engineering, and sciences. This thesis outlines algorithms and their analyses for feedback-driven learning that make progress on several fronts: robustness, adaptivity, generality, and sample efficiency.

The focal point of this thesis is the introduction of the nonstochastic control framework, which builds an algorithmic (as opposed to the traditionally analytic) foundation for control theory. This segment provides robust algorithmic primitives for basic control subroutines. Concretely, it presents:

An efficient algorithm for controlling linear systems in the presence of nonstochastic perturbations, with an instance-optimal regret guarantee (a sketch of the underlying objective follows this list). Going beyond both robust and stochastic control, such a data-driven notion of optimality offers worst-case guarantees together with the promise of exceptional performance on benign problem instances;

The first logarithmic regret result for the classical tracking problem, using optimization-based tools that go beyond dynamic-programming-based approaches;

Extensions of the above to unknown and partially observed systems.
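For reference, here is a minimal sketch of the setting underlying these results, in notation standard in the online control literature; the particular dynamics, cost assumptions, and comparator class below are illustrative assumptions rather than the thesis's exact formulation. The learner faces a linear dynamical system

\[
  x_{t+1} = A x_t + B u_t + w_t,
\]

where the perturbations \(w_t\) are chosen adversarially rather than drawn from a fixed distribution, and seeks to minimize policy regret under convex costs \(c_t\) against a comparator class \(\Pi\) of policies:

\[
  \mathrm{Regret}_T \;=\; \sum_{t=1}^{T} c_t(x_t, u_t) \;-\; \min_{\pi \in \Pi} \sum_{t=1}^{T} c_t\!\left(x_t^{\pi}, u_t^{\pi}\right).
\]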

The second part of the thesis addresses the challenge of learning to act in non-stationary environments given only a handful of trials, a version of the sim-to-real challenge. It provides a provable algorithm that negotiates this challenge using the algorithmic foundation delineated above, and demonstrates its empirical advantage over known approaches in non-linear settings.

The final segment concerns the generalization of the notion of feedback in interactive learning beyond scalar rewards. It provides an efficient reductions-based approach to tackling concave reward functionals, which increasingly arise in underlying exploration subroutines (an illustrative example is sketched below). This is accomplished via a novel generalization of classical boosting and gradient boosting to the online convex optimization setting.
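As one illustrative instantiation of a concave reward functional (an assumed example for exposition, not necessarily the one treated in the thesis): rather than maximizing a linear functional \(\langle d_{\pi}, r \rangle\) of the state-visitation distribution \(d_{\pi}\) induced by a policy \(\pi\), exploration objectives such as maximum-entropy exploration maximize a concave functional of it,

\[
  \max_{\pi}\; f(d_{\pi}), \qquad \text{e.g. } f(d) = -\sum_{s} d(s)\,\log d(s),
\]

an objective that cannot be written as a cumulative scalar reward and hence calls for the reductions-based treatment described above.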