[Ml-stat-talks] Two courses on statistical learning at ORFE this fall

Philippe Rigollet rigollet at princeton.edu
Wed Sep 14 14:02:56 EDT 2011

Hi All,

I am very pleased to announce two new courses on statistical learning that will be offered this Fall semester as part of the ORFE graduate curriculum.

The first course, "Online Learning," will be taught by Sebastien Bubeck, who arrived this year as an assistant professor in ORFE. Sebastien is an expert on bandit problems and online learning in general. Online learning has come to occupy a central place in statistical learning theory over the past five years, as illustrated by the program of the most recent COLT (Conference On Learning Theory). Such problems are not only mathematically interesting but also have timely applications in web advertising and other sequential decision problems.
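As an illustrative aside (not course material), the explore/exploit tradeoff at the heart of bandit problems can be sketched with a simple epsilon-greedy strategy; the arm means and parameters below are made up for the example:

```python
import random

def epsilon_greedy(means, n_rounds=10000, epsilon=0.1, seed=0):
    """Play a Bernoulli bandit with an epsilon-greedy strategy.

    means: true success probabilities of each arm (unknown to the player;
    the values used here are purely illustrative).
    Returns the total reward collected over n_rounds pulls.
    """
    rng = random.Random(seed)
    k = len(means)
    counts = [0] * k        # pulls per arm
    estimates = [0.0] * k   # empirical mean reward per arm
    total = 0
    for _ in range(n_rounds):
        if rng.random() < epsilon:
            arm = rng.randrange(k)                           # explore
        else:
            arm = max(range(k), key=lambda i: estimates[i])  # exploit
        reward = 1 if rng.random() < means[arm] else 0
        counts[arm] += 1
        # incremental update of the running average for this arm
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
        total += reward
    return total

reward = epsilon_greedy([0.3, 0.5, 0.7])
```

After enough exploration the strategy concentrates its pulls on the best arm (mean 0.7), so the total reward approaches what the best fixed arm would earn; the algorithms studied in the course achieve this with much sharper guarantees.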

ORF 570. Introduction to Online Learning.
Instructor: Sebastien Bubeck
Course webpage: http://www.princeton.edu/~sbubeck/orf570.html
Time: Monday and Wednesday, 3:00-4:20pm.
Location: Sherrerd Hall classroom 101

This course presents an alternative approach to (sequential) forecasting problems. The core idea is to design strategies that work without any probabilistic assumption on the data-generating mechanism. These new methods can be applied in a great variety of settings, including sequential investment in the stock market, sequential pattern analysis, dynamic pricing and online linear optimization.
Moreover, at the mathematical level, this theory offers the opportunity to study important notions that are useful well beyond forecasting, in particular simple concentration inequalities, basic results from information theory, and important concepts from game theory and convex optimization.
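A canonical algorithm in this assumption-free setting is the exponentially weighted average forecaster for prediction with expert advice. The following minimal sketch (an illustration, not taken from the course) shows the idea: the learner's cumulative loss stays close to that of the best expert in hindsight, whatever the loss sequence.

```python
import math

def exponential_weights(expert_losses, eta=0.5):
    """Exponentially weighted average forecaster.

    expert_losses: a list of rounds; each round is a list of losses in
    [0, 1], one per expert. No probabilistic assumption is made on how
    the losses are generated. Returns (algorithm's total expected loss,
    best single expert's total loss), so the regret can be read off.
    """
    n = len(expert_losses[0])
    weights = [1.0] * n
    alg_loss = 0.0
    for losses in expert_losses:
        total_w = sum(weights)
        probs = [w / total_w for w in weights]
        # expected loss of playing an expert drawn from current weights
        alg_loss += sum(p * l for p, l in zip(probs, losses))
        # multiplicative weight update: downweight lossy experts
        weights = [w * math.exp(-eta * l) for w, l in zip(weights, losses)]
    best = min(sum(r[i] for r in expert_losses) for i in range(n))
    return alg_loss, best

# Adversarial toy sequence: expert 0 is always right, expert 1 always wrong.
alg_loss, best_loss = exponential_weights([[0.0, 1.0]] * 10)
```

Here the best expert incurs zero loss, and the forecaster's total loss stays bounded by a constant independent of the number of rounds, illustrating the vanishing-regret guarantees studied in the course.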

The second course, "Concentration of Measure," is taught weekly by Ramon van Handel as part of the Stochastic Analysis Seminar. Concentration inequalities are essential for obtaining generalization error bounds in statistical learning theory. This course will not only establish the most fundamental concentration inequalities but will also introduce more advanced concepts unified by the general phenomenon of concentration of measure. It should be of interest to anyone who cares about the generalization ability of machine learning algorithms.

ORF 557. Concentration of Measure.
Instructor: Ramon van Handel
Course webpage: http://orfe.princeton.edu/sas
Time:  Thursdays 4:30-5:30 pm. (The first lecture is on September 15)
Location: Bendheim Center classroom 103
Prerequisites: Probability at the level of ORF 526 is assumed.

The law of large numbers states that the average of many independent
random variables is close to its expectation. It turns out that this
simple fact is a special case of a much more general phenomenon that could
be informally phrased as follows: "a function of many independent random
variables, that does not depend too much on any one of them, is nearly
constant". This idea, called concentration of measure, appears in many
different areas of probability and its applications, and there exists a
powerful set of tools to establish that such properties hold in a precise
quantitative sense (in the form of sharp nonasymptotic estimates on the
deviation of the random variable of interest from its expectation or
median).
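The phenomenon is easy to see numerically. The sketch below (an illustration with made-up parameters, not course material) simulates averages of n fair coin flips and compares the empirical probability of a deviation of size t from the mean with the bound given by Hoeffding's inequality, P(|mean - 1/2| >= t) <= 2 exp(-2 n t^2):

```python
import math
import random

def deviation_vs_hoeffding(n, trials=2000, t=0.1, seed=0):
    """Empirically check Hoeffding's inequality for averages of n fair coins.

    Returns (empirical deviation probability, Hoeffding upper bound).
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # average of n independent fair Bernoulli variables
        mean = sum(rng.random() < 0.5 for _ in range(n)) / n
        if abs(mean - 0.5) >= t:
            hits += 1
    return hits / trials, 2 * math.exp(-2 * n * t * t)

emp, bound = deviation_vs_hoeffding(n=200)
```

For n = 200 and t = 0.1 the Hoeffding bound is 2 exp(-4), about 0.037, and the observed frequency falls well below it: the average is indeed "nearly constant" in the sense described above.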

The goal of these informal lectures is to provide a basic introduction to
such methods. Potential topics include Chernoff bounds, Hoeffding,
Bernstein and Azuma inequalities, bounded differences, Gaussian
concentration, isoperimetry, log-Sobolev inequalities, transportation cost
inequalities, and Talagrand's concentration inequalities, among other topics.

Philippe Rigollet

More information about the Ml-stat-talks mailing list