Naman Agarwal will present his FPO, "Second-Order Optimization Methods for Machine Learning," on Wednesday, 7/18/2018 at 2:00 PM in CS 402.

The members of his committee are as follows: Readers: Yoram Singer and Yuxin Chen; Nonreaders: Elad Hazan (Adviser), Sanjeev Arora, and Mark Braverman

All are welcome to attend. Please see below for the abstract.

In recent years, first-order stochastic methods have emerged as the state of the art in large-scale machine learning optimization, primarily due to their efficient per-iteration complexity. Second-order methods, while able to provide faster convergence, are less popular due to the high cost of computing second-order information. The main question considered in this thesis is whether efficient second-order methods can be developed for optimization problems arising in machine learning that improve upon the best known first-order methods.
We consider the empirical risk minimization (ERM) model of learning and propose linear-time second-order algorithms, for both the convex and the non-convex setting, that improve upon the state-of-the-art first-order algorithms. In the non-convex setting, second-order methods are also shown to converge efficiently to better-quality solutions. For the convex case, the proposed algorithms make use of a novel estimator for the inverse of a matrix and improved sampling techniques for stochastic methods derived from the notion of leverage scores. For the non-convex setting, we propose an efficient implementation of the cubic regularization scheme of Nesterov and Polyak.
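
As a rough illustration of the flavor of such matrix-inverse estimators (a sketch only, not the exact construction from the thesis), the following Python snippet builds a truncated Neumann-series estimate of a matrix inverse from independent stochastic samples of the matrix. The function name, the sample list, and the depth parameter are illustrative choices; the sketch assumes the matrix is positive definite and has been scaled so its eigenvalues lie in (0, 1].

import numpy as np

def neumann_inverse_estimate(matrix_samples, depth):
    # Truncated Neumann-series estimate of A^{-1}, assuming 0 < A <= I
    # in the spectral sense. Each entry of matrix_samples is an independent
    # stochastic estimate of A (e.g., a mini-batch Hessian).
    # Recursion: X_0 = I, X_j = I + (I - A_j) X_{j-1}; its expectation is the
    # partial sum of sum_i (I - A)^i, which converges to A^{-1} as depth grows.
    d = matrix_samples[0].shape[0]
    estimate = np.eye(d)
    for j in range(depth):
        estimate = np.eye(d) + (np.eye(d) - matrix_samples[j]) @ estimate
    return estimate

# Toy check with noiseless "samples" of a well-conditioned matrix.
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + np.eye(5)             # positive definite
A = A / (np.linalg.norm(A, 2) + 1)  # scale so eigenvalues lie in (0, 1)
samples = [A] * 200
approx = neumann_inverse_estimate(samples, depth=200)
print(np.linalg.norm(approx - np.linalg.inv(A)))  # small approximation error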
Furthermore, we develop second-order methods for achieving approximate local minima on Riemannian manifolds that match the convergence rates of their Euclidean counterparts. Finally, we show the limitations of second- and higher-order methods by deriving oracle complexity lower bounds for such methods on sufficiently smooth convex functions.