Sublinear Optimization for Machine Learning
Elad Hazan, Technion, Israel Institute of Technology
Thursday, April 3, 4:30pm
Computer Science 105
In many modern optimization problems, particularly those arising
in machine learning, the amount of data is too large to apply
standard convex optimization methods. We'll discuss new optimization
algorithms that use randomization to prune the data and produce a
correct solution in time smaller than the size of the data
representation, i.e. in sublinear running time. We'll present
such sublinear-time algorithms for linear classification, support
vector machine training, semi-definite programming and other
optimization problems. These new algorithms are based on a primal-dual
approach, and combine novel sampling techniques with a randomized
implementation of online learning algorithms. We'll
describe information-theoretic lower bounds that show our running times
to be nearly best possible in the unit-cost RAM model.
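To give a rough flavor of the kind of randomized primitive such algorithms rely on, here is a minimal sketch of estimating a dot product by l2-sampling coordinates. This is an illustration only, not the algorithm presented in the talk; the function name and parameters are ours, and for simplicity the sketch reads the full vectors, whereas the actual algorithms maintain the sampling distribution so that each estimate touches only a few coordinates.

import numpy as np

def l2_sample_dot(w, x, num_samples=100, rng=None):
    """Unbiased estimate of <w, x>: sample coordinate j with
    probability proportional to w_j**2, average the estimates.
    Illustrative sketch only."""
    rng = np.random.default_rng() if rng is None else rng
    norm_sq = np.dot(w, w)
    if norm_sq == 0.0:
        return 0.0
    probs = w**2 / norm_sq                      # l2 sampling distribution
    idx = rng.choice(len(w), size=num_samples, p=probs)
    # Each term ||w||^2 * x_j / w_j has expectation <w, x>.
    estimates = norm_sq * x[idx] / w[idx]
    return estimates.mean()

# Toy usage: the estimate concentrates around the exact dot product.
rng = np.random.default_rng(0)
w = rng.standard_normal(10_000)
x = rng.standard_normal(10_000)
print(np.dot(w, x), l2_sample_dot(w, x, num_samples=2000, rng=rng))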
The talk will be self-contained - no prior knowledge of convex optimization or machine learning is assumed.
Elad Hazan is an associate professor of operations research at
the Technion, Israel Institute of Technology. His main research area
is machine learning and its relationship to optimization, game theory
and computational complexity. Elad received his Ph.D. in computer
science from Princeton University. He is the recipient of several best
paper awards, including the Goldberg Best Paper Award (twice), and of a
European Research Council Starting Grant.