[Ml-stat-talks] Princeton Optimization Seminar: Katya Scheinberg, TODAY, 4:30 PM, Sherrerd 101

Amir Ali Ahmadi a_a_a at princeton.edu
Thu Apr 7 01:27:11 EDT 2016


-----   Princeton Optimization Seminar   -----

DATE: TODAY (April 7, 2016)

TIME:  4:30 PM

LOCATION:  Sherrerd Hall 101

SPEAKER: Katya Scheinberg, Lehigh University

TITLE: Using Randomized Models in Black-box, Derivative Free and Stochastic Optimization
<https://orfe.princeton.edu/abstracts/optimization-seminar/using-randomized-models-black-box-derivative-free-and-stochastic>

Abstract:
Derivative free optimization (DFO) is the field that addresses optimization of black-box functions, that is, functions whose values can be computed (possibly approximately) but whose derivatives cannot be approximated directly. The applications of DFO range from aircraft engineering to hyperparameter tuning in machine learning. All derivative free methods rely on sampling the objective function at one or more points at each iteration, and constructing and maintaining these sample sets has been one of the most essential issues in DFO. The majority of existing results rely on deterministic sampling techniques.
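
As quick background (a minimal sketch of our own, not the speaker's method), the model-building step that such methods share can look as follows in Python: sample the black-box function on a small, here deterministic, coordinate-aligned sample set and fit a local linear model by interpolation. The function f below is a stand-in black box.

import numpy as np

def build_linear_model(f, x, h=1e-2):
    """Fit g so that the model m(s) = f(x) + g @ s interpolates f on the
    sample set {x + h*e_i}: a deterministic, coordinate-aligned design."""
    n = len(x)
    fx = f(x)
    g = np.empty(n)
    for i in range(n):
        e = np.zeros(n)
        e[i] = h
        g[i] = (f(x + e) - fx) / h   # forward-difference interpolation
    return fx, g

# A black-box quadratic whose derivatives we pretend not to have.
f = lambda x: (x[0] - 1.0)**2 + 2.0 * x[1]**2
fx, g = build_linear_model(f, np.array([0.0, 1.0]))
print(fx, g)   # model value and gradient estimate at x: 3.0, about (-2, 4)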

We will discuss new developments in using randomized sample sets within the DFO framework. Randomized sample sets have many advantages over deterministic sets. In particular, it is often easier to enforce "good" properties of the models with high probability than in the worst case. In addition, randomized sample sets can help automatically discover a good local low-dimensional approximation to the high-dimensional objective function. We will demonstrate how compressed sensing results can be used to show that random sample sets of reduced size can provide full second order information under the assumption that the Hessian is sparse.
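
To give a flavor of the compressed sensing connection (a hedged sketch of our own, in a simpler first-order form: a sparse gradient stands in for the sparse Hessian of the actual result, and the function f below is illustrative), l1-minimization can recover a sparse derivative from far fewer random directional samples than the ambient dimension:

import numpy as np
from scipy.optimize import linprog

def sparse_gradient_estimate(f, x, m, h=1e-5, seed=0):
    """Recover a sparse gradient from m << n random directional finite
    differences via basis pursuit: min ||g||_1  s.t.  A g = y."""
    n = len(x)
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((m, n)) / np.sqrt(m)  # random sample directions
    fx = f(x)
    y = np.array([(f(x + h * a) - fx) / h for a in A])  # y ~ A @ grad f(x)
    # Standard LP reformulation with the split g = u - v, u, v >= 0.
    res = linprog(np.ones(2 * n), A_eq=np.hstack([A, -A]), b_eq=y,
                  bounds=(0, None))
    return res.x[:n] - res.x[n:]

# A 20-dimensional black box whose gradient at 0 has only two nonzeros.
n = 20
f = lambda x: (x[3] - 1.0)**2 + (x[7] + 2.0)**2
g = sparse_gradient_estimate(f, np.zeros(n), m=10)
print(np.round(g, 2))   # expect about -2 and 4 at indices 3 and 7, else ~0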

We will discuss convergence theory developed for randomized models, where we can, for instance, show that as long as the models are "good" with probability more than 1/2, the trust region framework is globally convergent with probability 1 under standard assumptions. Some extensions to a broad class of machine learning models will also be discussed.
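
A toy version of such a trust region loop with randomized models (again our own illustration, not the framework analyzed in the talk): the model gradient comes from a random sample set, so a bad draw can give a poor step, and it is the accept/reject test together with the radius update that keeps the iteration under control.

import numpy as np

def tr_dfo(f, x, delta=1.0, iters=60, eta=0.1, seed=1):
    """Bare-bones trust-region DFO with a randomized linear model."""
    rng = np.random.default_rng(seed)
    n, fx = len(x), f(x)
    for _ in range(iters):
        # Random sample set: n random directions, finite differences,
        # then solve for the gradient of the interpolating linear model.
        A = rng.standard_normal((n, n))
        y = np.array([(f(x + 1e-6 * a) - fx) / 1e-6 for a in A])
        g = np.linalg.solve(A, y)
        s = -delta * g / (np.linalg.norm(g) + 1e-12)  # Cauchy-type step
        fnew = f(x + s)
        rho = (fx - fnew) / max(-g @ s, 1e-12)  # actual vs predicted decrease
        if rho > eta:   # successful step: accept, enlarge the region
            x, fx, delta = x + s, fnew, 2.0 * delta
        else:           # unsuccessful: reject, shrink the region
            delta *= 0.5
    return x, fx

x, fx = tr_dfo(lambda z: (z[0] - 3.0)**2 + (z[1] + 1.0)**2, np.zeros(2))
print(np.round(x, 2), fx)   # should end up near the minimizer (3, -1)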

Bio:
Katya Scheinberg is the Harvey E. Wagner Endowed Chair Professor in the Industrial and Systems Engineering Department at Lehigh University. Prior to her current position, she was a staff member at the IBM T.J. Watson Research Center for over a decade. She received her PhD from the IEOR Department at Columbia University and her undergraduate degree from Moscow University.
Her main research interests lie broadly in continuous optimization, with a focus on convex optimization, derivative free optimization, and large-scale methods for big data and machine learning applications. Katya is currently the Editor-in-Chief of the SIAM-MOS Series on Optimization and an associate editor of the SIAM Journal on Optimization. She is a recent recipient of the Lagrange Prize, together with Andrew R. Conn and Luis Nunes Vicente, for their book “Introduction to Derivative-Free Optimization”. Katya’s research is supported by grants from AFOSR, DARPA, NSF, and Yahoo.


*** If you would like to subscribe to the mailing list of the optimization seminar series, please visit the link below or send an email with
SUBSCRIBE opt-seminar in the body to listserv at lists.princeton.edu.

https://lists.princeton.edu/cgi-bin/wa?SUBED1=opt-seminar&A=1

