[Ml-stat-talks] Wed 8/4: Forward Search Planning for Large POMDPs

David Blei blei at CS.Princeton.EDU
Mon Aug 2 13:33:01 EDT 2010


On Wednesday, August 4, Emma Brunskill from EECS at U.C. Berkeley will
give a talk on POMDPs.  The talk will be at 11:30am in Room 125 of
Sherrerd Hall.

Forward Search Planning for Large POMDPs
Emma Brunskill
Department of Electrical Engineering and Computer Science
U.C. Berkeley

There has recently been significant interest in the artificial
intelligence community in leveraging forward search approaches to
tackle planning in partially observable Markov decision processes
(POMDPs). Along with my collaborators, I have helped develop new
forward search methods that scale to large POMDP planning problems,
which often require multi-step lookaheads to achieve good performance.
In this talk I will discuss three recent projects in this line of work.
In the first project we present an analytic method for computing the
distribution of belief states possible after an open-loop action
sequence, and demonstrate how this allows us to consider a
longer-horizon, though restricted, policy space during planning. In the
second project we consider a simple method for automatically
constructing open-loop action sequences, and use this approach within
an anytime forward-search planner that is guaranteed under certain
conditions to converge to an epsilon-optimal policy, given sufficient
time. The success of these two approaches will be demonstrated with
simulated results, and with real-world results for one of the
approaches, on a robotic helicopter monitoring problem.  Finally, I
will briefly discuss leveraging particle filters within a
forward-search approach, motivated by an automated teaching task.
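(For readers unfamiliar with the setting: the abstract only sketches these
ideas at a high level. The following is a minimal Python sketch of what
generic depth-limited forward search over a particle-filter belief looks
like on a toy Tiger-style POMDP. It is not the speaker's method; the toy
model, the rejection-style belief update, and all function names such as
step, update_particles, and forward_search are hypothetical choices made
only for illustration.)

    import random

    ACTIONS = ["listen", "open-left", "open-right"]  # toy Tiger-style POMDP

    def step(state, action):
        """Hypothetical generative model: returns (next_state, observation, reward)."""
        if action == "listen":
            wrong = "left" if state == "right" else "right"
            obs = state if random.random() < 0.85 else wrong
            return state, obs, -1.0
        # Opening a door: large penalty if the tiger is behind it, reward otherwise.
        reward = -100.0 if action == "open-" + state else 10.0
        return random.choice(["left", "right"]), "none", reward

    def update_particles(particles, action, obs):
        """Crude rejection-style particle filter: keep sampled successors whose
        simulated observation matches the one actually received."""
        new = []
        while len(new) < len(particles):
            s = random.choice(particles)
            s2, o, _ = step(s, action)
            if o == obs:
                new.append(s2)
        return new

    def forward_search(particles, depth, gamma=0.95, n_sims=20):
        """Depth-limited forward search: estimate Q(belief, action) by Monte
        Carlo rollouts from the particle belief, and return (value, best action)."""
        if depth == 0:
            return 0.0, None
        best_val, best_act = float("-inf"), None
        for a in ACTIONS:
            total = 0.0
            for _ in range(n_sims):
                s = random.choice(particles)
                _, o, r = step(s, a)
                child = update_particles(particles, a, o)
                v, _ = forward_search(child, depth - 1, gamma, max(n_sims // 2, 1))
                total += r + gamma * v
            q = total / n_sims
            if q > best_val:
                best_val, best_act = q, a
        return best_val, best_act

    if __name__ == "__main__":
        belief = [random.choice(["left", "right"]) for _ in range(50)]
        print(forward_search(belief, depth=2))

The sketch ignores the scaling issues the talk is about (macro-actions,
analytic belief propagation, anytime guarantees); it only shows the basic
lookahead-plus-belief-update structure that those methods build on.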

This is joint work with Ruijie He, Anna Rafferty, Pat Shafto, Tom
Griffiths and Nicholas Roy.


Bio:

Emma Brunskill is an NSF Mathematical Sciences Postdoctoral Fellow at
UC Berkeley. She received a BS in Computer Engineering and
Physics from the University of Washington, an MSc in Neuroscience
from Oxford University as a Rhodes Scholar, and a PhD in Computer
Science from MIT.  Her research interests include
reinforcement learning, decision making under uncertainty, and using
information and communication technologies for international development.

