<html><head><style type='text/css'>p { margin: 0; }</style></head><body><div style='font-family: arial,helvetica,sans-serif; font-size: 10pt; color: #000000'>Rong Ge will present his preFPO on Monday, March 11 at noon in Room 401 (note the room!)<br>The members of his committee are: Sanjeev Arora, advisor; Moses Charikar and Sham Kakade (MSR New England), readers; Mark Braverman and Zeev Dvir, nonreaders.<br>Everyone is invited to attend his talk. His abstract follows below.<br>---------------------------<br>Title: Provable Algorithms for Machine Learning<br><div><br></div><div><div style="font-family:arial,sans-serif;font-size:12.727272033691406px">Many tasks in machine learning are computationally intractable. Nevertheless, researchers have developed heuristic algorithms that successfully solve these problems in practice. This phenomenon can be explained using average-case complexity. However, in many situations the details supporting this explanation are still missing: what properties of the input make it efficiently solvable? What is the chance that real-life inputs satisfy these properties, and how robust is the algorithm if they don't?</div>
<div style="font-family:arial,sans-serif;font-size:12.727272033691406px"><br></div><div style="font-family:arial,sans-serif;font-size:12.727272033691406px">In this talk I use topic modeling as an example to show our attempts at answering these questions. The first part of the talk gives a new algorithm for learning topic models with provable guarantees. This algorithm relies on a reasonable simplifying assumption, and runs 20 times faster than existing software such as Mallet, with comparable output quality. The second part of the talk gives a new analysis for method-of-moments algorithms (AHK12, AFHKL12) using tensor decomposition. The new algorithm has a better perturbation analysis and can therefore tolerate more modeling error.</div>
</div><br><br></div></body></html>