[talks] Cyril Zhang will present his generals exam Thursday, May 25, 2017 at 3pm in CS 402
ngotsis at CS.Princeton.EDU
Thu May 18 16:00:00 EDT 2017
The members of his committee are: Elad Hazan (adviser), Sanjeev Arora, and Matt Weinberg.
Everyone is invited to attend his talk, and those faculty wishing to remain for the oral exam that follows are welcome to do so. His abstract and reading list follow below.
Topic: Two Views of Non-Convex Regret Minimization
Abstract: I will discuss the problem of efficient online regret minimization with non-convex loss functions. Two distinct (yet pleasingly, if nebulously, connected) approaches are considered; both results are joint work with Karan Singh and Prof. Elad Hazan.
In the first part of my talk, I will present a new framework for general online non-convex optimization (ONCO). I will introduce the desiderata of such a framework, showing how we must avert issues of computational intractability by defining a suitable notion of regret. I will introduce our time-smoothed regret measure, and justify it with tight upper and lower bounds, along with reductions to the full-information and stochastic offline cases. Finally, I will show some extensions and applications. Primarily, our framework gives rise to a new solution concept in repeated games; I will discuss some natural settings in which our algorithms are practical and give provable guarantees for learning in games.
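To make the time-smoothed regret idea concrete, here is a minimal sketch of one natural algorithm for it: at each round, the player descends on the average of the last w losses rather than on the current loss alone. This is an illustrative implementation, not necessarily the talk's algorithm; the function name, the window size w, and the inner-loop budget are assumptions.

```python
import numpy as np

def time_smoothed_ogd(grad_fns, w=5, eta=0.1, inner_steps=50, dim=2):
    """Illustrative sketch of time-smoothed online gradient descent.

    grad_fns: an iterable of callables, where grad_fns[t](x) returns the
    gradient of the round-t loss f_t at x. At each round we (approximately)
    minimize the average of the last w losses by plain gradient descent,
    driving the window-averaged gradient toward zero.
    """
    x = np.zeros(dim)
    window = []   # gradients of the last w losses, as callables
    played = []   # iterate played at each round
    for g in grad_fns:
        window.append(g)
        if len(window) > w:
            window.pop(0)
        # Inner loop: descend on the sliding-window average loss.
        for _ in range(inner_steps):
            avg_grad = sum(gi(x) for gi in window) / len(window)
            x = x - eta * avg_grad
        played.append(x.copy())
    return played
```

On a stream of identical strongly convex losses this reduces to ordinary gradient descent; the interesting regime is slowly drifting non-convex losses, where the window average keeps the per-round target stable enough for local descent to make sense.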
In the second part, I will present a second approach, in which one leverages the structure of an ONCO problem to make it amenable to the tools of online convex optimization (OCO). We consider the non-convex problem of maximum-likelihood online prediction of linear dynamical systems (LDS). Relying upon the spectral theory of Hankel matrices, we formulate an approximate convex relaxation to this problem. This results in a black-box reduction to OCO, from which we derive a simple, practical method for predicting sequences arising from an LDS.
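For a sense of the spectral theory of Hankel matrices invoked above: one well-studied Hankel matrix in this line of work has entries Z_ij = 2 / ((i+j)^3 - (i+j)), whose eigenvalues decay very rapidly, so a handful of top eigenvectors can serve as fixed filters. A minimal numpy sketch follows; the specific matrix and the function name are assumptions for illustration, not necessarily the construction used in the talk.

```python
import numpy as np

def spectral_filters(T, k):
    """Top-k eigenpairs of the T x T Hankel matrix with entries
    Z_ij = 2 / ((i+j)^3 - (i+j)) (1-indexed). The eigenvalues decay
    rapidly, which is what makes a low-rank filter bank effective."""
    idx = np.arange(1, T + 1)
    s = idx[:, None] + idx[None, :]          # s_ij = i + j
    Z = 2.0 / (s ** 3 - s)                   # symmetric Hankel matrix
    eigvals, eigvecs = np.linalg.eigh(Z)     # ascending eigenvalue order
    # Return the k largest eigenvalues and their eigenvectors, descending.
    return eigvals[-k:][::-1], eigvecs[:, -k:][:, ::-1]
```

The returned eigenvectors, viewed as length-T sequences, act as convolutional filters on the input history; the rapid eigenvalue decay is what lets a small k capture the relevant structure.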
Reading list:
- Introduction to Online Convex Optimization. Elad Hazan.
- Understanding Machine Learning. Shai Shalev-Shwartz and Shai Ben-David.
- Stochastic First- and Zeroth-order Methods for Nonconvex Stochastic Programming. Saeed Ghadimi and Guanghui Lan.
- Cubic regularization of Newton method and its global performance. Yurii Nesterov and Boris T. Polyak.
- The Multiplicative Weights Update Method: A Meta-Algorithm and Applications. Sanjeev Arora, Elad Hazan, and Satyen Kale.
- Online Convex Programming and Generalized Infinitesimal Gradient Ascent. Martin Zinkevich.
- Adaptive Subgradient Methods for Online Learning and Stochastic Optimization. John Duchi, Elad Hazan, and Yoram Singer.
- Accelerating Stochastic Gradient Descent using Predictive Variance Reduction. Rie Johnson and Tong Zhang.
- Efficient algorithms for online decision problems. Adam Kalai and Santosh Vempala.
- Online Linear Optimization via Smoothing. Jacob Abernethy, Chansoo Lee, Abhinav Sinha, and Ambuj Tewari.
- Matrix Completion has No Spurious Local Minimum. Rong Ge, Jason D. Lee, and Tengyu Ma.
- Gradient Descent Learns Linear Dynamical Systems. Moritz Hardt, Tengyu Ma, and Benjamin Recht.