[Ml-stat-talks] Jake Abernethy

Robert Schapire schapire at CS.Princeton.EDU
Mon Apr 12 10:37:38 EDT 2010


Jake Abernethy from UC Berkeley is speaking TODAY (Monday) at 4:30PM in 
the Computer Science Small Auditorium (Room 105).

Rob



*Learning, Adversaries, and Limited Feedback*

*Jake Abernethy*
UC Berkeley

    A basic assumption often made in Machine Learning is that the data
    we observe are independent and identically distributed. But is this
    really necessary? Is it even realistic in typical scenarios? A
    number of recent results provide guarantees for arbitrary (or even
    adversarially-generated) data sequences. It turns out that, for a
    wide class of problems, the learner's performance is no worse when
    the data are assumed to be arbitrary rather than i.i.d. This result
    involves a nice application of the Minimax theorem, which I'll
    briefly describe. I'll also delve into a more challenging problem:
    the "bandit" setting, in which the learner receives only very little
    feedback on each example. It was unknown for some time whether there
    exists an efficient algorithm that achieves the same guarantee for
    non-i.i.d. data for a general class of learning/decision problems.
    I'll present the first known such bandit algorithm, and I'll sketch
    the central ideas behind the technique, borrowing several tricks
    from the interior-point optimization literature.
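
For context, below is a minimal sketch of the bandit feedback model the
abstract refers to, implemented with the classic Exp3 strategy of Auer et
al. It is only an illustration of the setting (not the interior-point-based
algorithm the talk will present), and the arm count, horizon, exploration
rate, and toy loss sequence are illustrative assumptions.

# A minimal sketch of the adversarial multi-armed bandit setting, using the
# classic Exp3 strategy of Auer et al. purely for illustration. This is NOT
# the interior-point-based algorithm described in the talk; the parameters
# (K, T, gamma) and the toy loss sequence are illustrative assumptions.

import math
import random


def exp3(K, T, gamma, loss_fn):
    """Play T rounds over K arms against an arbitrary loss sequence.

    loss_fn(t, arm) returns the loss in [0, 1] of `arm` at round t; only
    the loss of the arm actually played is revealed (bandit feedback).
    """
    weights = [1.0] * K
    total_loss = 0.0
    for t in range(T):
        total_weight = sum(weights)
        # Mix the exponential-weights distribution with uniform exploration.
        probs = [(1 - gamma) * w / total_weight + gamma / K for w in weights]
        arm = random.choices(range(K), weights=probs)[0]
        loss = loss_fn(t, arm)  # bandit feedback: a single number per round
        total_loss += loss
        # Importance-weighted estimate of the unseen loss vector
        # (unbiased in expectation; zero for arms not played).
        estimated_loss = loss / probs[arm]
        weights[arm] *= math.exp(-gamma * estimated_loss / K)
        # Renormalize to avoid numerical underflow over long horizons.
        max_w = max(weights)
        weights = [w / max_w for w in weights]
    return total_loss


if __name__ == "__main__":
    # A fixed but non-i.i.d. loss sequence: arm 0 is best in even rounds,
    # arm 1 in odd rounds, arm 2 is always mediocre.
    def toy_losses(t, arm):
        table = [[0.0, 1.0, 0.5],
                 [1.0, 0.0, 0.5]]
        return table[t % 2][arm]

    random.seed(0)
    print("cumulative loss:", exp3(K=3, T=10000, gamma=0.05, loss_fn=toy_losses))

Exp3 attains a sublinear-regret guarantee against arbitrary loss sequences,
which conveys the flavor of the adversarial-data guarantees discussed in the
abstract, though the talk's contribution concerns an efficient algorithm for
a much more general class of bandit decision problems.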



