Rajesh Ranganath will present his research seminar/general exam on Tuesday, May 21 at 10 AM in Room 402. The members of his committee are David Blei (advisor), Rob Schapire, and Philippe Rigollet (ORF). Everyone is invited to attend his talk, and faculty who wish to remain for the oral exam that follows are welcome to do so. His abstract and reading list are below.

----------------------

Title: An Adaptive Learning Rate for Stochastic Variational Inference

Abstract: Stochastic variational inference finds good posterior approximations of probabilistic models with very large data sets. It optimizes the variational objective with stochastic optimization, following noisy estimates of the natural gradient. Operationally, stochastic inference iteratively subsamples the data, analyzes the subsample, and updates parameters with a decreasing learning rate. However, the algorithm is sensitive to that rate, which usually requires hand-tuning for each application. We solve this problem by developing an adaptive learning rate for stochastic inference. Our method requires no tuning and is easily implemented with computations already made in the algorithm. We demonstrate our approach with latent Dirichlet allocation applied to three large text corpora. Inference with the adaptive learning rate converges faster, and to a better approximation, than the best settings of hand-tuned rates.

This is joint work with Chong Wang, Dave Blei, and Eric Xing.

Reading List:

Textbook Chapters:
- Pattern Recognition and Machine Learning, Chapters 8, 10, 11 (C. Bishop)
- Probability and Stochastics, Chapter 6 (E. Cinlar)
- Artificial Intelligence: A Modern Approach, 3rd edition, Chapters 17, 18, 20, 21 (S. Russell and P. Norvig)

Papers:
- Integration of Early Physiological Responses Predicts Later Illness Severity in Preterm Infants [Sci Transl Med, 8 September 2010] (S. Saria et al.)
- Learning Individual and Population Level Traits from Clinical Temporal Data [NIPS 2010] (S. Saria et al.)
- Posterior Predictive Assessment of Model Fitness via Realized Discrepancies [Statistica Sinica 1996] (A. Gelman, X. Meng, H. Stern)
- Stochastic Variational Inference [JMLR, to appear] (M. Hoffman et al.)
- A Stochastic Gradient Method with an Exponential Convergence Rate for Strongly-Convex Optimization with Finite Training Sets [NIPS 2012] (N. Le Roux et al.)
- Bootstrap Methods for Standard Errors, Confidence Intervals, and Other Measures of Statistical Accuracy [Statistical Science 1986] (B. Efron, R. Tibshirani)
- The Indian Buffet Process: An Introduction and Review [JMLR 2011] (T. Griffiths, Z. Ghahramani)
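The update pattern the abstract describes (subsample the data, form a noisy estimate, and blend it into the current parameter with a decreasing learning rate) can be sketched on a toy problem. This is only an illustration of the Robbins-Monro-style schedule rho_t = (t + tau)^(-kappa), not the adaptive rate from the talk; the constants tau and kappa and the function name are illustrative assumptions.

```python
# Toy sketch of a stochastic update with a decreasing learning rate:
# each step subsamples the data, computes a noisy per-batch estimate,
# and moves the parameter a shrinking fraction toward it.
# tau and kappa here are illustrative, not values from the talk.
import random

def stochastic_estimate(data, steps=2000, batch=10, tau=1.0, kappa=0.7):
    lam = 0.0  # current parameter (here, simply an estimated mean)
    for t in range(steps):
        rho = (t + tau) ** (-kappa)          # decreasing learning rate
        sample = random.sample(data, batch)  # subsample the data
        noisy = sum(sample) / batch          # noisy batch estimate
        lam = (1 - rho) * lam + rho * noisy  # blended update
    return lam

random.seed(0)
data = [random.gauss(5.0, 2.0) for _ in range(100_000)]
print(stochastic_estimate(data))  # close to the true mean of 5.0
```

With kappa in (0.5, 1], the schedule satisfies the classical conditions for convergence (the rates sum to infinity while their squares sum to a finite value); the talk's contribution is choosing such a rate adaptively instead of hand-tuning tau and kappa.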