[talks] Tengyu Ma general exam

Melissa M. Lawson mml at CS.Princeton.EDU
Fri Jan 17 09:11:32 EST 2014


Tengyu Ma will present his research seminar/general exam on Wednesday January 22 at 
2PM in Room 301 (note room!). The members of his committee are: Sanjeev Arora (advisor), 
Moses Charikar, and David Blei. Everyone is invited to attend his talk, and those faculty 
wishing to remain for the oral exam following are welcome to do so. His abstract and 
reading list follow below. 

Abstract:


I will mainly talk about our recent paper on deep learning, "Provable Bounds for Learning Some Deep Representations." Deep learning, a modern variant of artificial neural networks, has become a very powerful tool in recent years for many machine learning tasks such as image recognition, speech recognition, and natural language processing. However, it remains theoretically mysterious why these modern heuristics and tricks, together with millions of data points, yield big improvements on so many important problems. Motivated by these mysteries, this paper proposes a generative model for deep networks and gives a provable algorithm that, under some assumptions, learns all the parameters and hidden variables in an unsupervised fashion, using polynomial time and polynomially many samples generated from the network. 


In more detail, the network we study has a constant number of layers, and each pair of consecutive layers is connected by a random sparse bipartite graph, represented as a matrix. The observed data at the bottom are generated as follows: first, a sparse hidden vector is chosen at the top layer; the top layer then activates some of the second-layer nodes by multiplying by the connection matrix, applying a point-wise threshold function, and adding some small noise. The second layer activates the third by the same rules with a new connection matrix, and so on. Our algorithm exploits the old Hebbian rule, "things that fire together wire together": if two nodes in the same layer are correlated, they are likely to connect to the same ancestor in the layer above them. This is joint work with Sanjeev Arora, Aditya Bhaskara, and Rong Ge. 
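
To make the generative process above concrete, here is a minimal sketch in Python/NumPy of a two-layer instance of the model, together with the Hebbian-style co-activation statistic. The layer sizes, in-degree d, sparsity p, noise level sigma, and the 0/1 activity cutoff are illustrative assumptions for the sketch, not parameters taken from the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    n2, n1, n0 = 50, 100, 200      # layer sizes, top to bottom (assumed)
    d, p, sigma = 5, 0.05, 0.01    # in-degree, top-layer sparsity, noise level

    def random_sparse_graph(n_top, n_bot):
        # Random sparse bipartite graph: each bottom node picks d random
        # parents in the layer above, with +/-1 edge weights.
        W = np.zeros((n_bot, n_top))
        for j in range(n_bot):
            parents = rng.choice(n_top, size=d, replace=False)
            W[j, parents] = rng.choice([-1.0, 1.0], size=d)
        return W

    W1 = random_sparse_graph(n2, n1)   # top layer -> middle layer
    W0 = random_sparse_graph(n1, n0)   # middle layer -> observed layer

    def sample():
        # Top-down pass: sparse hidden vector at the top, point-wise
        # threshold per layer, small Gaussian noise on the observed layer.
        h2 = (rng.random(n2) < p).astype(float)
        h1 = (W1 @ h2 > 0).astype(float)
        return W0 @ h1 + sigma * rng.standard_normal(n0)

    X = np.stack([sample() for _ in range(5000)])
    A = (X > 0.5).astype(float)        # which observed nodes "fired" (assumed cutoff)
    cooccur = (A.T @ A) / len(X)       # pairwise co-activation frequencies
    # A large cooccur[i, j] suggests nodes i and j share a parent above.

The actual algorithm works layer by layer, using such correlation information together with properties of random sparse graphs to reconstruct the edges; this sketch only illustrates the generative direction and the fire-together statistic.
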
Reading list: 
0. Algorithm Design, by Jon Kleinberg and Éva Tardos 
1. Ankur Moitra's lecture notes for the course Algorithmic Aspects of Machine Learning 
http://people.csail.mit.edu/moitra/996.html 
2. Daniel Hsu's lecture notes for COMS 4772 Advanced Machine Learning (Fall 2013) 
http://www.cs.columbia.edu/~djhsu/4772/ 
3. Machine Learning: A Probabilistic Perspective, by Kevin P. Murphy (provided I can find it in the library) 
http://mitpress.mit.edu/sites/default/files/titles/content/9780262018029_toc_0001.pdf 
4. Representation Learning: A Review and New Perspectives, by Yoshua Bengio, Aaron Courville, and Pascal Vincent 
5. Visualizing and Understanding Convolutional Networks, by Matthew D. Zeiler and Rob Fergus 
http://arxiv.org/abs/1311.2901 
6. ImageNet Classification with Deep Convolutional Neural Networks, by Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton 
7. Generalized Denoising Auto-Encoders as Generative Models, by Yoshua Bengio et al. 
http://arxiv.org/abs/1305.6663 
8. On the Provable Convergence of Alternating Minimization for Matrix Completion 
9. Tensor Decompositions for Learning Latent Variable Models, by Anima Anandkumar, Rong Ge, Daniel Hsu, Sham M. Kakade, and Matus Telgarsky 
10. Clustering with Spectral Norm and the k-Means Algorithm, by Amit Kumar and Ravindran Kannan 
11. Latent Variable Graphical Model Selection via Convex Optimization, by Venkat Chandrasekaran, Pablo A. Parrilo, and Alan S. Willsky 