[Ml-stat-talks] Alex Slivkins' talk and visit
sbubeck at Princeton.EDU
Wed Feb 19 12:21:52 EST 2014
Alex Slivkins will be visiting on Friday 21st. He is available for meetings on Friday afternoon. He will also give the following talk in the Wilks Statistics Seminar:
DATE: Friday, February 21st
LOCATION: Sherrerd Hall 101
SPEAKER: Alex Slivkins, MSR
TITLE: Bandits with Knapsacks
ABSTRACT: Multi-armed bandit problems are the predominant theoretical model of exploration-exploitation tradeoffs in machine learning, and they have countless applications ranging from medical trials to communication networks to Web search, advertising, and dynamic pricing. In many of these application domains the learner may be constrained by one or more supply (or budget) limits, in addition to the customary limitation on the time horizon. The literature lacks a general model encompassing these sorts of problems. We introduce such a model, called "bandits with knapsacks", that combines aspects of stochastic integer programming with online learning. A distinctive feature of our problem, in comparison to the existing regret-minimization literature, is that the optimal policy for a given latent distribution may significantly outperform the policy that plays the optimal fixed arm. Consequently, achieving sublinear regret in the bandits-with-knapsacks problem is significantly more challenging than in conventional bandit problems.
We present two algorithms whose reward is close to the information-theoretic optimum: one is based on a novel ``balanced exploration'' paradigm, while the other is a primal-dual algorithm that uses multiplicative updates. Further, we prove that the regret achieved by both algorithms is optimal up to polylogarithmic factors.
Joint work with Robert Kleinberg and Ashwinkumar Badanidiyuru. Appeared at FOCS 2013.
The full version can be found at http://arxiv.org/abs/1305.2545
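To see why the optimal policy can beat the best fixed arm under a budget constraint, here is a toy deterministic instance (my own hypothetical example, not from the paper: the arm rewards, consumption rates, and parameter values T, B, eps are all made up for illustration):

```python
# Horizon T, budget B < T on a single resource.
# Arm 1: reward 1 per pull, consumes 1 unit of budget per pull.
# Arm 2: reward eps per pull, consumes no budget.
T, B, eps = 100, 50, 0.1

# Best single-arm policy: either exhaust the budget on arm 1,
# or play the budget-free arm 2 for the whole horizon.
best_fixed = max(B * 1.0,   # arm 1 until the budget runs out -> 50.0
                 T * eps)   # arm 2 for all T rounds          -> 10.0

# Mixed policy: arm 1 while the budget lasts, then switch to arm 2.
mixed = B * 1.0 + (T - B) * eps  # 50.0 + 5.0 = 55.0

assert mixed > best_fixed
```

Here the mixed policy earns 55 versus 50 for the best fixed arm, so any algorithm competing only with the best fixed arm would leave linear regret on the table; this is the sense in which the bandits-with-knapsacks benchmark is strictly harder.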