[Ml-stat-talks] Wed: Tamir Hazan on random MAP perturbations

David Mimno mimno at CS.Princeton.EDU
Mon Oct 15 09:16:23 EDT 2012

Maximum a posteriori (MAP) point estimates are often the best we can do
for intractable models, but they aren't very satisfying. For this week's
Machine Learning talk, Tamir Hazan will discuss a new method for
learning about other probable solutions from point estimates.

Tamir Hazan, TTI-Chicago
CS 402, Wed Oct 17, 12:30

Title: Inference and Learning with Random Maximum A-Posteriori Perturbations

Learning and inference in complex models drives much of the research
in machine learning applications, from computer vision and natural
language processing to computational biology. The inference problem
in such cases involves assessing the weights of possible structures,
whether objects, parses, or molecular structures. Although it is
often feasible only to find the most likely or maximum a-posteriori
(MAP) assignment rather than to consider all possible assignments, MAP
inference is limited when there are other likely assignments. In a
fully probabilistic treatment, all possible alternative assignments
are considered, which requires summing over the assignments with their
respective weights and is considerably harder (#P-hard vs. NP-hard).
The main surprising result of our work is that MAP inference
(maximization) can be used to approximate and bound this weighted
counting. This leads us to a new approximate inference framework based
on MAP statistics, which does not depend on pseudo-probabilities, in
contrast to the current framework of Bethe approximations, which lacks
statistical meaning. This approach excels in regimes where there are
several, but not exponentially many, prominent assignments. This
happens, for example, when observations carry strong signals (local
evidence) but are also guided by strong consistency constraints
(couplings).

Joint work with Tommi Jaakkola
