[talks] Brian Bullins will present his general exam Monday, May 16, 2016 at 4pm in CS 402.
ngotsis at CS.Princeton.EDU
Mon May 9 11:06:09 EDT 2016
The members of his committee are Elad Hazan (adviser), Sanjeev Arora, and Barbara Engelhardt.
Everyone is invited to attend his talk, and faculty wishing to remain for the oral exam that follows are welcome to do so. His abstract and reading list are below.
Linear regression is a common learning technique for modeling the linear relationship between multiple variables and the outcome they affect. A standard example is medical diagnosis, where the data gathered for a given patient provide information about their susceptibility to certain illnesses. A major drawback of this process is the effort required to collect the data, since it means running numerous tests for each person. In such cases it may be necessary to limit the amount of data available for each example; for medical diagnosis, this might mean having each patient undergo only a subset of the tests.
In this talk I will discuss regression in which the learner may observe only a limited number of attributes per example, known as the limited attribute observation model. In particular, we provide the first lower bounds on the precision attainable by any algorithm for several variants of regression, notably regression with absolute loss and with squared loss, as well as for classification with hinge loss. We complement this with a general-purpose algorithm that achieves an upper bound in terms of the precision limit.
This is joint work with Elad Hazan and Tomer Koren.
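To make the setting concrete, here is a minimal Python sketch of learning under limited attribute observation: per example, the learner samples only a few coordinates and uses an unbiased coordinate-sampling estimate of the attribute vector in a stochastic subgradient step for absolute-loss regression. This is an illustration of the model only, not the algorithm or guarantees from the talk; the function names and the specific estimator are our own assumptions.

```python
import numpy as np

def sample_attributes(x, k, rng):
    """Unbiased estimate of x built by observing only k of its d
    coordinates (sampled uniformly with replacement, importance-weighted)."""
    d = len(x)
    idx = rng.choice(d, size=k, replace=True)
    est = np.zeros(d)
    for j in idx:
        est[j] += x[j] * d / k  # E[est] = x
    return est

def limited_attribute_sgd(X, y, k, lr=0.1, seed=0):
    """Stochastic subgradient descent for absolute-loss regression,
    observing only 2k attributes per example: one noisy copy of x to
    estimate the prediction w.x, and an independent copy for the
    update direction."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for t in range(n):
        x, yt = X[t], y[t]
        x1 = sample_attributes(x, k, rng)   # used to estimate w.x
        x2 = sample_attributes(x, k, rng)   # independent sample for the step
        g = np.sign(w @ x1 - yt) * x2       # noisy subgradient of |w.x - y|
        w -= lr / np.sqrt(t + 1) * g        # decaying step size
    return w
```

The key point the sketch illustrates is the information constraint: each update touches at most 2k coordinates of the example, yet the coordinate estimator remains unbiased for the full attribute vector.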
Introduction to Online Convex Optimization. Elad Hazan
Understanding Machine Learning: From Theory to Algorithms. Shai Shalev-Shwartz and Shai Ben-David
Linear Regression with Limited Observation. E. Hazan and T. Koren
Classification with Low Rank and Missing Data. E. Hazan, R. Livni, and Y. Mansour
Convergence rates of sub-sampled Newton methods. M. Erdogdu and A. Montanari
Accelerating Stochastic Gradient Descent using Predictive Variance Reduction. R. Johnson and T. Zhang
Adaptive Subgradient Methods for Online Learning and Stochastic Optimization. J. Duchi, E. Hazan, and Y. Singer
Competing in the Dark: An Efficient Algorithm for Bandit Linear Optimization. J. Abernethy, E. Hazan, and A. Rakhlin
Cubic regularization of Newton method and its global performance. Y. Nesterov and B.T. Polyak
Deep learning via Hessian-free optimization. J. Martens
Escaping from Saddle Points — Online Stochastic Gradient Descent for Tensor Decomposition. R. Ge, F. Huang, C. Jin, and Y. Yuan
Online Learning of Eigenvectors. D. Garber, E. Hazan, and T. Ma
On the saddle point problem for non-convex optimization. R. Pascanu, Y. Dauphin, S. Ganguli, and Y. Bengio