[Ml-stat-talks] Fwd: [talks] Colloquium Speaker, Mon April 12- Jake Abernethy

David Blei david.blei at gmail.com
Wed Apr 7 18:54:25 EDT 2010


hi ml-stat talks---

there is (what looks to be) an excellent machine learning talk next
monday in the CS colloquium.

best
dave

---------- Forwarded message ----------
Learning, Adversaries, and Limited Feedback
Jake Abernethy, UC Berkeley
Monday, April 12, 2010
4:30pm
Small Auditorium, CS 105

A basic assumption often made in Machine Learning is that the data we
observe are independent and identically distributed. But is this
really necessary? Is it even realistic in typical scenarios? A number
of recent results provide guarantees for arbitrary (or even
adversarially-generated) data sequences. It turns out that, for a wide
class of problems, the learner's performance is no worse when the data
are arbitrary than when they are assumed to be i.i.d. This result involves a
nice application of the Minimax theorem, which I'll briefly describe.
I'll also delve into a more challenging problem: the "bandit" setting,
in which the learner receives only very limited feedback on each
example. For some time it was unknown whether an efficient algorithm
exists that achieves the same guarantee for non-i.i.d. data across a
general class of learning/decision problems. I'll present
the first known such bandit algorithm, and I'll sketch the central
ideas behind the technique, borrowing several tricks from the
Interior Point optimization literature.
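
To make the "limited feedback" of the bandit setting concrete, here is a
minimal Python sketch of the classic Exp3 algorithm of Auer, Cesa-Bianchi,
Freund, and Schapire, a standard baseline for adversarial multi-armed
bandits (not the new algorithm the talk presents). The function name and
reward-vector interface are illustrative assumptions; the point is that the
learner observes only the reward of the arm it actually pulls, yet still
competes with the best fixed arm on an arbitrary reward sequence.

    import math
    import random

    def exp3(reward_rounds, n_arms, gamma=0.1):
        # Classic Exp3 for the adversarial multi-armed bandit.
        # reward_rounds: any sequence of length-n_arms reward vectors in
        # [0, 1], possibly chosen by an adversary. Names are illustrative.
        weights = [1.0] * n_arms
        total_reward = 0.0
        for rewards in reward_rounds:
            w_sum = sum(weights)
            # Mix the weight-based distribution with uniform exploration.
            probs = [(1.0 - gamma) * w / w_sum + gamma / n_arms
                     for w in weights]
            arm = random.choices(range(n_arms), weights=probs)[0]
            # Bandit feedback: only the pulled arm's reward is observed.
            reward = rewards[arm]
            total_reward += reward
            # Importance-weighted estimate keeps the update unbiased.
            estimate = reward / probs[arm]
            weights[arm] *= math.exp(gamma * estimate / n_arms)
        return total_reward

With the exploration rate gamma tuned to roughly
sqrt(n_arms * log(n_arms) / T) over T rounds, Exp3's expected regret against
the best fixed arm is O(sqrt(T * n_arms * log(n_arms))), with no i.i.d.
assumption on the reward sequence.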
_______________________________________________
talks mailing list
talks at lists.cs.princeton.edu
https://lists.cs.princeton.edu/mailman/listinfo/talks

