[talks] Andrej Risteski General Exam: Monday, May 12, 2014 at 2pm in Room 302

Nicki Gotsis ngotsis at CS.Princeton.EDU
Mon May 5 14:31:34 EDT 2014


Andrej Risteski will present his General Exam on Monday, May 12, 2014 at 2pm in Room 302.  The members of his committee are Sanjeev Arora (advisor), Moses Charikar, and David Blei.
Everyone is invited to attend his talk, and faculty wishing to remain for the oral exam that follows are welcome to do so.  His abstract and reading list follow below.

Abstract: 
Some simple provably convergent cases for variational inference 

Variational inference is a commonly used tool in machine learning for performing inference in graphical models. The idea is to reduce the problem of computing a marginal probability to a suitable, usually non-convex, mathematical program. More precisely, one assumes the marginal distribution belongs to some well-behaved/tractable family of distributions, typically by assuming independence between variables that are not actually independent. One then tries to find the distribution in this family that best matches the marginal we are trying to compute.
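As a rough sketch of the standard setup (the notation below is mine, not taken from the abstract): given a model $p(x, z)$ with observed variables $x$ and latent variables $z$, mean-field variational inference restricts attention to fully factored distributions and solves

    q^* = \arg\min_{q \in \mathcal{Q}} \mathrm{KL}\big( q(z) \,\|\, p(z \mid x) \big),
    \qquad \mathcal{Q} = \big\{ q : q(z) = \textstyle\prod_i q_i(z_i) \big\},

which is equivalent to maximizing the evidence lower bound (ELBO):

    \log p(x) \;\ge\; \mathbb{E}_q[\log p(x, z)] - \mathbb{E}_q[\log q(z)].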

Unfortunately, such methods rarely come with guarantees. First, it is difficult to know how much one loses by assuming the independencies. Second, solving the non-convex program is usually done with some gradient descent-like procedure, so it is unclear how good a solution we will get.
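For concreteness, the classical optimization procedure here is coordinate ascent (stated as standard background, not as the specific method analyzed in the talk): holding all factors but one fixed, the optimal update for the remaining factor has the closed form

    q_j(z_j) \;\propto\; \exp\big( \mathbb{E}_{q_{-j}}[\log p(x, z)] \big),

where the expectation is taken over all factors except $q_j$. Each update increases the ELBO, but non-convexity means the procedure is only guaranteed to reach a local optimum, which is exactly where the question of provable guarantees enters.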

I will discuss some simple cases, in the setting of topic modeling, where one can prove convergence to the right value. Under suitably strong assumptions on the model, and unfortunately also strong assumptions on the initialization, we prove that variational inference works. We complement this with experimentally constructed (yet natural) counter-examples in which dropping these assumptions causes convergence to the wrong value, at least for the vanilla methods we analyze. While the assumptions we make are arguably not very realistic, they are a step forward in analyzing variational inference in more interesting settings.


Reading list: 
0. Algorithm Design; Kleinberg, Tardos 
1. Graphical Models, Exponential Families, and Variational Inference; Wainwright, Jordan [Chapters 1, 2, 3, 4, 7, 9]
2. Latent Dirichlet Allocation; Blei, Ng, Jordan 
3. Learning Topic Models - Going beyond SVD; Arora, Ge, Moitra
4. A Practical Algorithm for Topic Modeling with Provable Guarantees; Arora et al. 
5. Computing a Nonnegative Matrix Factorization - Provably; Arora, Ge, Kannan, Moitra
6. Complexity of Inference in Graphical Models; Chandrasekaran, Srebro, Harsha
7. Understanding Belief Propagation and its Generalizations; Yedidia, Freeman, Weiss 
8. Convexity Arguments for Efficient Minimization of the Bethe and Kikuchi Free Energies; Heskes 
9. Convexifying the Bethe Free Energy; Meshi, Jaimovich, Globerson, Friedman 
10. Counting independent sets up to the tree threshold; Weitz 
11. Counting Independent Sets Using the Bethe Approximation; Chandrasekaran, Chertkov, Gamarnik, Shah, Shin 
12. Clustering with Spectral Norm and the k-means Algorithm; Kumar, Kannan 


