[Ml-stat-talks] FW: [ORFE-Seminars] ORFE Colloquium: Elad Hazan, September 20th at 4:30pm, Sherrerd Hall 101

Amir Ali Ahmadi a_a_a at princeton.edu
Tue Sep 13 11:43:05 EDT 2016


Dear all,

The talk below may be of interest to optimizers, statisticians, and machine learners. Please join us for the opening colloquium of the ORFE department this semester.

Best,
-Amirali


________________________________________
From: ORFE Talks [ORFE-TALKS at Princeton.EDU] on behalf of Carol Smith [carols at PRINCETON.EDU]
Sent: Tuesday, September 13, 2016 10:41 AM
To: ORFE-TALKS at Princeton.EDU
Subject: [ORFE-Seminars] ORFE Colloquium: Elad Hazan, September 20th at 4:30pm, Sherrerd Hall 101

=== ORFE Colloquium Announcement ===

DATE:  Tuesday, September 20, 2016

TIME:  4:30pm

LOCATION:  Sherrerd Hall, room 101

SPEAKER:  Elad Hazan, Princeton University

TITLE:   Second-order Stochastic Optimization for Machine Learning in
Linear Time

ABSTRACT: Stochastic gradient-based methods are the state-of-the-art in
large-scale machine learning optimization due to their extremely
efficient per-iteration computational cost. Second-order methods, which
use the second derivative of the optimization objective, are known to
enable faster convergence. However, they have been much less explored
due to the high cost of computing the second-order information. We will
present a second-order stochastic method for optimization problems
arising in machine learning that matches the per-iteration cost of
gradient descent, yet enjoys the convergence properties of second-order
optimization.
Joint work with Naman Agarwal and Brian Bullins.
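For context on the "per-iteration cost of gradient descent" claim, a rough
intuition is that a Hessian-vector product can be computed in the same
order of time as a gradient, so an approximate Newton direction can be
built without ever forming the full Hessian. The Python sketch below is
purely illustrative and is not the algorithm to be presented in the talk;
the loss function, truncation depth, and scaling constant are assumptions
chosen only for this example.

import numpy as np

# Illustrative sketch only: ridge-regularized logistic loss, where a
# Hessian-vector product costs O(n*d), the same order as a gradient.

def gradient(w, X, y, lam=0.1):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return X.T @ (p - y) / len(y) + lam * w

def hessian_vector_product(w, X, v, lam=0.1):
    # H @ v computed in O(n*d) without materializing the d x d Hessian
    p = 1.0 / (1.0 + np.exp(-X @ w))
    s = p * (1.0 - p)
    return X.T @ (s * (X @ v)) / len(X) + lam * v

def approx_newton_direction(w, X, y, k=25, scale=0.5):
    # Truncated Neumann series: H^{-1} ~= scale * sum_i (I - scale*H)^i,
    # valid when the eigenvalues of scale*H lie in (0, 1).
    g = gradient(w, X, y)
    v = g.copy()
    acc = g.copy()
    for _ in range(k):
        v = v - scale * hessian_vector_product(w, X, v)
        acc += v
    return scale * acc

# Toy usage on synthetic data
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X @ rng.normal(size=5) > 0).astype(float)
w = np.zeros(5)
for _ in range(20):
    w -= approx_newton_direction(w, X, y)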

BIO: Elad Hazan is a professor of computer science at Princeton
University. He joined in 2015 from the Technion, where he had been an
associate professor of operations research. His research focuses on the
design and analysis of algorithms for basic problems in machine learning
and optimization. Among his contributions are the co-development of the
AdaGrad algorithm for training learning machines and the first
sublinear-time algorithms for convex optimization. He is a two-time
recipient of the IBM Goldberg Best Paper Award, in 2012 for
contributions to sublinear-time algorithms for machine learning and in
2008 for decision making under uncertainty, and he has received a
European Research Council grant, a Marie Curie fellowship, and two
Google Research Awards. He serves on the steering committee of the
Association for Computational Learning and was program chair for COLT
2015.

