[Ml-stat-talks] Ben Taskar on determinantal point processes

David Mimno mimno at CS.Princeton.EDU
Fri Mar 18 09:13:29 EDT 2011

When you look at truly random points, like the stars or flecks of dust on a monitor, it's not unusual to see points almost touching. But if you ask people to draw "random points", they'll probably have a slight bias against putting two dots right next to each other. What statistical distributions describe this kind of not-quite-randomness? For our next Machine Learning lecture, Ben Taskar from Penn will describe just such a model, and its surprisingly wide applications. [-DM]

Tues, Mar 22, 12:00. CS 302. Lunch!


Ben Taskar, University of Pennsylvania

Determinantal Point Processes: Representation, Inference and Learning

Determinantal point processes (DPPs) arise in random matrix theory
and quantum physics as models of random variables with negative
correlations. Among many remarkable properties, they offer tractable
algorithms for exact inference, including computing marginals,
computing certain conditional probabilities, and sampling.   DPPs
are a natural model for subset selection problems where diversity is
preferred.  For example, they can be used to select diverse sets of
sentences to form document summaries, or to return relevant but
varied text and image search results, or to detect non-overlapping
multiple object trajectories in video.   I'll present our recent work on
a novel factorization and dual representation of DPPs that enables
efficient inference for exponentially-sized structured sets. We develop
a new inference algorithm based on Newton identities for DPPs
conditioned on subset size. We also derive efficient parameter
estimation for DPPs from several types of observations.  I'll show
the advantages of the model on several natural language and vision
tasks: extractive document summarization, diversifying image search
results and multi-person articulated pose estimation problems in images.
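The "tractable exact inference" mentioned above comes from the defining property of a DPP: for a marginal kernel K (symmetric, eigenvalues in [0, 1]), the probability that a subset S is contained in the random set Y is the determinant of the submatrix of K indexed by S. A minimal NumPy sketch of that property, with a toy kernel of my own choosing (not from the talk):

```python
import numpy as np

# Toy marginal kernel over 3 items. Off-diagonal entries encode
# similarity between items, which induces negative correlation:
# items 0 and 1 are similar; item 2 is unrelated to both.
K = np.array([[0.5, 0.3, 0.0],
              [0.3, 0.5, 0.0],
              [0.0, 0.0, 0.5]])

def inclusion_prob(K, S):
    """P(S is a subset of Y) = det(K_S), the determinant of the
    submatrix of the marginal kernel K indexed by S."""
    sub = K[np.ix_(S, S)]
    return np.linalg.det(sub)

# Singleton marginals are just the diagonal entries of K.
p0 = inclusion_prob(K, [0])       # 0.5
# Similar items co-occur LESS often than if they were independent:
p01 = inclusion_prob(K, [0, 1])   # 0.5*0.5 - 0.3*0.3 = 0.16 < 0.25
# Dissimilar items behave independently:
p02 = inclusion_prob(K, [0, 2])   # 0.5*0.5 - 0 = 0.25
```

The off-diagonal terms subtract from the product of the singleton marginals, which is exactly the "diversity-preferring" behavior the abstract describes.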

Joint work with Alex Kulesza, University of Pennsylvania

Ben Taskar received his bachelor's and doctoral degrees in Computer
Science from Stanford University. After a postdoc at the University of
California at Berkeley, he joined the faculty at the University of
Pennsylvania Computer and Information Science Department in 2007,
where he currently co-directs PRiML: Penn Research in Machine
Learning. His research interests include machine learning, natural
language processing and computer vision. He has been awarded the
Sloan Research Fellowship and selected for the Young Investigator
Program by the Office of Naval Research and the DARPA Computer
Science Study Group. His work on structured prediction has received
best paper awards at the NIPS and EMNLP conferences.
