[Ml-stat-talks] Fwd: [ORFE-Seminars] Wilks Statistics Seminar: Joan Bruna, Friday, Mar. 17, 2017 12:30 PM, Sherrerd Hall 101

Barbara Engelhardt bee at princeton.edu
Fri Mar 10 16:39:50 EST 2017

Talk of interest.

***   Wilks Statistics Seminar   ***

DATE:  Friday, March 17, 2017

TIME:   12:30 pm

LOCATION:   Sherrerd Hall 101

SPEAKER:  Joan Bruna, New York University

TITLE:   Addressing Computational and Statistical Gaps with Deep Neural Networks

ABSTRACT:   Many modern statistical questions are plagued with asymptotic
regimes that separate our current theoretical understanding from what is
possible given finite computational and sample resources. Two important
examples of such gaps appear in sparse inference and high-dimensional
nonconvex optimisation. In the former, proximal splitting algorithms
efficiently solve the l1-relaxed sparse coding problem, but their
performance is typically evaluated in terms of asymptotic convergence
rates. In the latter, a major challenge is to explain the excellent
empirical performance of stochastic gradient descent when training large
neural networks.
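(Not part of the announcement: for readers unfamiliar with proximal splitting, a minimal sketch of ISTA, the standard proximal gradient method for the l1-relaxed sparse coding (lasso) problem. All names and parameter choices here are illustrative.)

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of the l1 norm: shrink each entry toward zero by t.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(D, y, lam, n_iters=100):
    """Minimize 0.5*||y - D z||^2 + lam*||z||_1 by proximal gradient (ISTA)."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the smooth part
    z = np.zeros(D.shape[1])
    for _ in range(n_iters):
        grad = D.T @ (D @ z - y)           # gradient of the quadratic data term
        z = soft_threshold(z - grad / L, lam / L)
    return z
```

ISTA's known convergence rate is O(1/k) in objective value, which is the asymptotic guarantee the abstract contrasts with finite-budget behavior.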

In this talk we will illustrate how Deep architectures can be used in order
to attack such gaps. We will first see how a neural network sparse coding
model (LISTA, Gregor & LeCun’10) can be analyzed in terms of a particular
matrix factorization of the dictionary, which combines diagonalisation
with invariance of the l1 ball, revealing a phase transition that is
consistent with numerical experiments. We will then discuss the loss
surface of half-rectified neural networks. Despite defining a nonconvex
objective, we will show that, by increasing the size of the model, one can
again trade off complexity for computation, in the sense that the landscape
becomes asymptotically free of poor local minima.
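(Not part of the announcement: a hedged sketch of the LISTA architecture of Gregor & LeCun cited above, written as an unrolled, parameterized variant of ISTA. The names `We`, `S`, and `theta` follow the common presentation; in LISTA they are learned from data, whereas here they are free inputs.)

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of the l1 norm.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lista_forward(y, We, S, theta, n_layers=3):
    """One LISTA-style forward pass: n_layers unrolled thresholding steps.

    With We = D.T / L, S = I - D.T @ D / L, theta = lam / L, this reproduces
    n_layers iterations of ISTA started at zero; training these parameters
    instead is what lets LISTA reach good codes in far fewer layers."""
    z = soft_threshold(We @ y, theta)
    for _ in range(n_layers - 1):
        z = soft_threshold(We @ y + S @ z, theta)
    return z
```

The analysis mentioned in the abstract studies how factorizations of the dictionary `D` determine when such learned parameters can beat the generic ISTA choice.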
