[Ml-stat-talks] Fwd: [talks] Rajesh Ranganath will present his FPO, "Black Box Variational Inference: Scalable, Generic Bayesian Computation and its Applications" on Friday, 10/6/2017 at 11am in CS 105.
bee at princeton.edu
Fri Oct 6 08:37:44 EDT 2017
Talk of interest today at 11am.
---------- Forwarded message ----------
Rajesh Ranganath will present his FPO, "Black Box Variational Inference:
Scalable, Generic Bayesian Computation and its Applications" on Friday,
10/6/2017 at 11:00 in CS 105.
The members of his committee are as follows: David Blei (Adviser); Examiners: Sanjeev Arora, Barbara Engelhardt, and David Blei; Non-Examiners: David Blei and Peter Orbanz (Columbia University).
A copy of his thesis is available in Room 310.
Everyone is invited to attend his talk. The talk abstract follows below.
Probabilistic generative models are robust to noise, uncover unseen patterns, and make predictions about the future. These models have been used successfully to solve problems in neuroscience, astrophysics, genetics, and medicine. The main computational challenge in using these models is computing the hidden structure given the data—posterior inference. For most models of interest, computing the posterior distribution requires approximations like variational inference.
Variational inference transforms posterior inference into optimization. Historically, however, this optimization was feasible to deploy in only a small fraction of models.
This thesis develops black box variational inference. Black box variational inference is a variational inference algorithm that is easy to deploy on a broad class of models and has already found use in models for neuroscience and health care. It makes new kinds of models possible, ones that were too unruly for previous inference methods.
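To make the idea concrete, here is a minimal sketch of the score-function ("black box") ELBO gradient estimator on a toy conjugate model. The model, data, and all settings below are illustrative assumptions, not the thesis's implementation (which adds Rao-Blackwellization and control variates for variance reduction):

```python
import numpy as np

# Toy model (assumption): z ~ Normal(0, 1), x_i ~ Normal(z, 1),
# with a Normal(mu, sigma^2) variational approximation q(z).
rng = np.random.default_rng(0)
x = rng.normal(2.0, 1.0, size=5)  # synthetic data (assumption)

def log_joint(z):
    # log p(z) + log p(x | z), up to additive constants
    return -0.5 * z**2 - 0.5 * np.sum((x - z) ** 2)

def log_q(z, mu, log_sigma):
    sigma = np.exp(log_sigma)
    return -0.5 * ((z - mu) / sigma) ** 2 - log_sigma

def grad_log_q(z, mu, log_sigma):
    # gradient of log q(z) with respect to (mu, log_sigma)
    sigma = np.exp(log_sigma)
    return np.array([(z - mu) / sigma**2, ((z - mu) / sigma) ** 2 - 1.0])

mu, log_sigma = 0.0, 0.0
lr, num_samples = 0.01, 50
for step in range(2000):
    zs = mu + np.exp(log_sigma) * rng.normal(size=num_samples)
    f = np.array([log_joint(z) - log_q(z, mu, log_sigma) for z in zs])
    f -= f.mean()  # crude baseline to reduce estimator variance (slightly biased)
    grad = np.mean([grad_log_q(z, mu, log_sigma) * fz
                    for z, fz in zip(zs, f)], axis=0)
    mu, log_sigma = mu + lr * grad[0], log_sigma + lr * grad[1]

# exact posterior mean for this conjugate model, for comparison
post_mean = np.sum(x) / (len(x) + 1)
```

The "black box" property is that the estimator only evaluates the model's log joint at sampled points; nothing model-specific is derived by hand.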
One set of models we develop is deep exponential families. Deep exponential families uncover new kinds of hidden patterns while being predictive of future data. Many existing models are deep exponential families. Black box variational inference makes it possible for us to quickly study a broad range of deep exponential families with minimal added effort for each new type of deep exponential family.
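As a hedged illustration, a deep exponential family stacks layers of exponential-family latent variables, each layer setting the natural or mean parameters of the one below. The two-layer gamma/Poisson sampler below is a toy assumption of one such combination, not a specific model from the thesis:

```python
import numpy as np

# Toy two-layer deep exponential family, written as an ancestral sampler.
# Layer sizes and the gamma/Poisson choices are illustrative assumptions.
rng = np.random.default_rng(1)
K2, K1, V = 5, 10, 20                    # top layer, bottom layer, observed dims
W1 = rng.gamma(0.3, 1.0, size=(K2, K1))  # positive weights between layers
W0 = rng.gamma(0.3, 1.0, size=(K1, V))   # positive weights to observations
z2 = rng.gamma(0.1, 10.0, size=K2)       # top-level latent activations
# each lower-layer unit is gamma-distributed with mean set by the layer above
z1 = rng.gamma(0.1, (z2 @ W1) / 0.1)
x = rng.poisson(z1 @ W0)                 # observed count data
```

Because inference is black box, swapping the gamma layers for, say, Gaussian or Poisson layers changes only this generative code, not the inference algorithm.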
The ideas around black box variational inference also facilitate new kinds of variational methods. First, we develop hierarchical variational models. Hierarchical variational models improve the approximation quality of variational inference by building richer approximations from coarser ones. We show that they help with inference in deep exponential families. Second, we introduce operator variational inference. Operator variational inference delves into the possible distance measures that can be used for the variational optimization problem. We show that this formulation categorizes various variational methods and enables variational approximations without tractable densities.
By developing black box variational inference, we have opened doors to new kinds of models, new posterior approximations, and new varieties of variational inference methods.