[Ml-stat-talks] Fwd: [talks] Rajesh Ranganath will present his FPO, "Black Box Variational Inference: Scalable, Generic Bayesian Computation and its Applications" on Friday, 10/6/2017 at 11am in CS 105.

Barbara Engelhardt bee at princeton.edu
Fri Oct 6 08:37:44 EDT 2017


Talk of interest today at 11am.


---------- Forwarded message ----------

Rajesh Ranganath will present his FPO, "Black Box Variational Inference:
Scalable, Generic Bayesian Computation and its Applications" on Friday,
10/6/2017 at 11:00 in CS 105.

The members of his committee are as follows: David Blei (Adviser);
Examiners: Sanjeev Arora, Barbara Engelhardt, and David Blei;
Non-Examiners: David Blei and Peter Orbanz (Columbia University,
Department of Statistics).

A copy of his thesis is available in Room 310.


Everyone is invited to attend his talk. The talk abstract follows.

Probabilistic generative models are robust to noise, uncover unseen
patterns, and make predictions about the future. These models have been
used successfully to solve problems in neuroscience, astrophysics,
genetics, and medicine. The main computational challenge is computing the
hidden structure given the data: posterior inference. For most models of
interest, computing the posterior distribution requires approximations
like variational inference. Variational inference transforms posterior
inference into optimization. Classically, this optimization could be
carried out for only a small fraction of models.
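
In standard notation (ours, not the abstract's): for a model p(x, z) with
observations x and latent variables z, variational inference posits a
family q(z; \lambda) and maximizes the evidence lower bound (ELBO),

\[
\mathcal{L}(\lambda)
  = \mathbb{E}_{q(z;\lambda)}\!\big[\log p(x, z) - \log q(z; \lambda)\big]
  = \log p(x) - \mathrm{KL}\!\big(q(z; \lambda) \,\|\, p(z \mid x)\big),
\]

so maximizing the ELBO over \lambda is the same as minimizing the KL
divergence from q to the exact posterior.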
This thesis develops black box variational inference. Black box variational
inference is
a variational inference algorithm that is easy to deploy on a broad class
of models and has
already found use in models for neuroscience and health care. It makes new
kinds of models
possible, ones that were too unruly for previous inference methods.
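
At the heart of the method is the score-function gradient estimator: the
gradient of the ELBO can itself be written as an expectation under q, so
it can be estimated by Monte Carlo using only samples from q and
evaluations of the model's log joint. The numpy sketch below illustrates
this under assumptions of ours (a mean-field Gaussian family, a toy
standard-normal target, a fixed step size, and none of the thesis's
variance-reduction techniques such as Rao-Blackwellization and control
variates); names like bbvi_step are illustrative, not from the thesis.

import numpy as np

rng = np.random.default_rng(0)

def log_q(z, mu, log_sigma):
    # Log density of the mean-field Gaussian q(z; mu, sigma), summed over dims.
    sigma2 = np.exp(2.0 * log_sigma)
    return np.sum(-0.5 * np.log(2.0 * np.pi) - log_sigma
                  - 0.5 * (z - mu) ** 2 / sigma2, axis=-1)

def score(z, mu, log_sigma):
    # Gradient of log q with respect to the variational parameters
    # lambda = (mu, log_sigma).
    sigma2 = np.exp(2.0 * log_sigma)
    return (z - mu) / sigma2, (z - mu) ** 2 / sigma2 - 1.0

def bbvi_step(log_joint, mu, log_sigma, step_size=0.05, num_samples=200):
    # One noisy gradient ascent step on the ELBO using the score-function
    # estimator: E_q[ grad_lambda log q(z) * (log p(x, z) - log q(z)) ].
    z = mu + np.exp(log_sigma) * rng.standard_normal((num_samples, mu.size))
    w = log_joint(z) - log_q(z, mu, log_sigma)
    d_mu, d_ls = score(z, mu, log_sigma)
    mu = mu + step_size * np.mean(d_mu * w[:, None], axis=0)
    log_sigma = log_sigma + step_size * np.mean(d_ls * w[:, None], axis=0)
    return mu, log_sigma

# Toy check: with log p(x, z) = -z^2/2 the optimum is mu = 0, sigma = 1.
log_joint = lambda z: np.sum(-0.5 * z ** 2, axis=-1)
mu, log_sigma = np.array([2.0]), np.array([1.0])
for _ in range(2000):
    mu, log_sigma = bbvi_step(log_joint, mu, log_sigma)
print(mu, np.exp(log_sigma))  # roughly [0.] and [1.]

Note that log_joint is the only model-specific input: swapping in a new
model requires a new log-joint evaluator and nothing else, which is the
sense in which the method is a black box.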
One set of models we develop is deep exponential families. Deep
exponential families uncover new kinds of hidden patterns while being
predictive of future data. Many existing models are deep exponential
families. Black box variational inference makes it possible for us to
quickly study a broad range of deep exponential families with minimal
added effort for each new type of deep exponential family.
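
Schematically, in notation of ours rather than the abstract's, an L-layer
deep exponential family draws each layer of latent variables from an
exponential family whose natural parameter depends on the layer above,

\[
z_L \sim \text{ExpFam}(\eta), \qquad
z_\ell \sim \text{ExpFam}\!\big(g_\ell(W_\ell^{\top} z_{\ell+1})\big),
\quad \ell = L-1, \dots, 1,
\]

with the observations drawn from a likelihood conditioned on the bottom
layer z_1; different choices of exponential family and link g_\ell
recover different existing models.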

The ideas around black box variational inference also facilitate new kinds
of variational
methods. First, we develop hierarchical variational models. Hierarchical
variational models
improve the approximation quality of variational inference by building
higher-fidelity
approximations from coarser ones. We show that they help with inference in
deep exponential
families. Second, we introduce operator variational inference. Operator
variational
inference delves into the possible distance measures that can be used for
the variational optimization
problem. We show that this formulation categorizes various variational
inference
methods and enables variational approximations without tractable
densities. (Both constructions are sketched below.)
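
In standard notation (ours): a hierarchical variational model places a
prior q(\lambda; \theta) over the parameters of a mean-field family and
uses the marginal

\[
q_{\mathrm{HVM}}(z; \theta)
  = \int q(z \mid \lambda)\, q(\lambda; \theta)\, d\lambda
\]

as the approximating family; integrating out \lambda couples the latent
variables, so the marginal can capture posterior dependence that the
mean-field family alone cannot. Operator variational inference, in one
instance, minimizes over q an objective of the form

\[
\sup_{f \in \mathcal{F}} \;
  t\Big(\mathbb{E}_{q(z)}\big[(O^{p,q} f)(z)\big]\Big),
\]

where the operator O^{p,q} maps test functions f to functions whose
expectation vanishes at the exact posterior, and t is a nonnegative
function such as t(a) = a^2. With a Stein-type operator the objective
requires only samples from q and the gradient \nabla_z \log p(x, z), not
q's density, which is what permits variational approximations without
tractable densities.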

By developing black box variational inference, we have opened doors to new
models, better
posterior approximations, and new varieties of variational inference
algorithms.

_______________________________________________
talks mailing list
talks at lists.cs.princeton.edu
To edit subscription settings or remove yourself, use this link:
https://lists.cs.princeton.edu/mailman/listinfo/talks