[talks] Naman Agarwal will present his general exam on Monday, May 9th, 2016 at 3pm in CS 302.

Nicki Gotsis ngotsis at CS.Princeton.EDU
Mon May 2 14:20:17 EDT 2016


Naman Agarwal will present his general exam on Monday, May 9th, 2016 at 3pm in CS 302.

The members of his committee are Elad Hazan (adviser), Sanjeev Arora, and Barbara Engelhardt.

Everyone is invited to attend his talk, and faculty wishing to remain for the oral exam that follows are welcome to do so. His abstract and reading list appear below.

Abstract 

Stochastic optimization and, in particular, first-order stochastic methods are a cornerstone of modern machine learning due to their extremely low per-iteration computational cost. Second-order methods, while able to make faster progress per iteration, have been much less explored due to the high cost of computing second-order information. In this talk I will present a second-order stochastic method for optimization problems arising in machine learning, based on novel matrix randomization techniques, that matches the per-iteration cost of gradient descent yet enjoys the linear-convergence properties of second-order optimization. We further show that the algorithm can be coupled generically with known first-order algorithms and matrix sampling techniques to match the best known theoretical guarantees, and to improve these guarantees for a broad class of parameters. The algorithm also adapts well to sparse inputs. I will also demonstrate the efficacy of the algorithm empirically on several standard convex benchmarks.

This is joint work with Brian Bullins and Elad Hazan.
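
For background on how sampled second-order information can be made as cheap per iteration as a gradient step, the rough NumPy sketch below shows one standard device: approximating the Newton direction H^{-1} g with a truncated Neumann series, H^{-1} = sum_{i>=0} (I - H)^i, built from Hessian-vector products at single randomly sampled examples. The regularized logistic regression objective, the recursion depth, and all constants are hypothetical choices for illustration; this is a sketch of the general technique, not the algorithm of the talk.

import numpy as np

def logistic_loss_grad(w, X, y, lam):
    # Gradient of (1/n) sum_i log(1 + exp(-y_i x_i^T w)) + (lam/2) ||w||^2,
    # with labels y_i in {-1, +1}.
    margins = y * (X @ w)
    probs = 1.0 / (1.0 + np.exp(margins))          # sigma(-y_i x_i^T w)
    return -(X * (y * probs)[:, None]).mean(axis=0) + lam * w

def hessian_vec_single(w, x_i, y_i, lam, v):
    # Hessian-vector product at ONE sampled example: O(dim) time, no d x d matrix.
    s = 1.0 / (1.0 + np.exp(-(y_i * (x_i @ w))))
    return s * (1.0 - s) * (x_i @ v) * x_i + lam * v

def neumann_newton_step(w, X, y, lam, depth, reps, rng):
    # Estimate H^{-1} g via the recursion u_j = g + (I - H_j) u_{j-1}, where each
    # H_j is the Hessian at a single random example (valid when the eigenvalues of
    # H lie in (0, 1], hence the data rescaling below); average a few chains to
    # reduce the variance of the estimate.
    g = logistic_loss_grad(w, X, y, lam)
    est = np.zeros_like(g)
    for _ in range(reps):
        u = g.copy()
        for _ in range(depth):
            i = rng.integers(len(y))
            u = g + u - hessian_vec_single(w, X[i], y[i], lam, u)
        est += u / reps
    return est

# Toy usage on synthetic, rescaled data (all constants are illustrative).
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 20)) / 5.0           # keep Hessian eigenvalues below 1
y = np.sign(X @ rng.standard_normal(20))
w = np.zeros(20)
for _ in range(25):
    w -= neumann_newton_step(w, X, y, lam=0.1, depth=50, reps=5, rng=rng)
    print(np.linalg.norm(logistic_loss_grad(w, X, y, lam=0.1)))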


Reading List 

Books/Lecture Notes - 

Introduction to Online Convex Optimization: E. Hazan

Numerical Optimization: Jorge Nocedal and Stephen J. Wright

Regret Analysis of Stochastic and Nonstochastic Multi-armed Bandit Problems: S. Bubeck and N. Cesa-Bianchi

Research Papers - 

Adaptive Subgradient Methods for Online Learning and Stochastic Optimization: J. Duchi, E. Hazan and Y. Singer

Accelerating Stochastic Gradient Descent using Predictive Variance Reduction: R. Johnson and T. Zhang

Convergence rates of sub-sampled Newton methods: M. Erdogdu and A. Montanari

Escaping From Saddle Points — Online Stochastic Gradient for Tensor Decomposition: R. Ge, F. Huang, C. Jin and Y. Yuan

Cubic regularization of Newton method and its global performance: Y. Nesterov and B.T. Polyak

On the saddle point problem for non-convex optimization: R. Pascanu, Y.N. Dauphin, S. Ganguli and Y. Bengio

Fast and Simple PCA via Convex Optimization: D. Garber and E. Hazan

On Graduated Optimization for Stochastic Non-Convex Problems: E. Hazan, K. Levy and S. Shalev-Shwartz

Competing in the Dark: An Efficient Algorithm for Bandit Linear Optimization: J. Abernethy, E. Hazan and A. Rakhlin

The Price of Bandit Information for Online Optimization: V. Dani, S. M. Kakade and T. P. Hayes

Towards Minimax Policies for Online Linear Optimization with Bandit Feedback: S. Bubeck, N. Cesa-Bianchi and S. M. Kakade

Faster Convex Optimization: Simulated Annealing with an Efficient Universal Barrier: J. Abernethy and E. Hazan

Clustering with Bregman Divergences: A. Banerjee, S. Merugu, I. S. Dhillon and J. Ghosh

