[Topic-models] EM vs Gibbs sampling
blei at CS.Princeton.EDU
Thu Sep 28 09:27:30 EDT 2006
another solution worth mentioning is full variational posterior
inference on the topics as well as the topic proportions and topic
assignments. that is, rather than computing the MLE of the topic
distributions over words, compute variational approximations to them
under a dirichlet prior (as in the gibbs sampling setup).
i've found that this helps matters, particularly with smaller corpora.
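
For concreteness, here is a minimal sketch of the update being described, assuming the standard smoothed-LDA mean-field setup with symmetric hyperparameters alpha and eta; the names (smoothed_lda_sweep, lam, gamma, phi) are illustrative, not anyone's actual code. The point is the final step: expected counts accumulate into a variational Dirichlet (lam) over each topic's word distribution, instead of into an M-step MLE.

import numpy as np
from scipy.special import digamma

def smoothed_lda_sweep(docs, lam, gamma, alpha=0.1, eta=0.01):
    """One coordinate-ascent sweep for smoothed LDA.

    docs  : list of documents, each a list/array of word ids
    lam   : (K, V) variational Dirichlet parameters, one row per topic
    gamma : (D, K) variational Dirichlet parameters over topic proportions
    """
    K, V = lam.shape
    # E[log beta_kw] under q(beta_k) = Dirichlet(lam_k)
    Elog_beta = digamma(lam) - digamma(lam.sum(axis=1, keepdims=True))

    new_lam = np.full((K, V), eta)  # start from the prior pseudo-counts
    for d, doc in enumerate(docs):
        ids = np.asarray(doc)
        Elog_theta = digamma(gamma[d]) - digamma(gamma[d].sum())
        # phi[n, k] is proportional to exp(E[log theta_dk] + E[log beta_k,w_dn])
        log_phi = Elog_theta[None, :] + Elog_beta[:, ids].T
        phi = np.exp(log_phi - log_phi.max(axis=1, keepdims=True))
        phi /= phi.sum(axis=1, keepdims=True)

        gamma[d] = alpha + phi.sum(axis=0)
        # accumulate expected counts into the topic Dirichlets;
        # this replaces the M-step MLE of the original variational EM
        for n, w in enumerate(ids):
            new_lam[:, w] += phi[n]
    return new_lam, gamma
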
On Sep 27, 2006, at 1:45 PM, Edo Airoldi wrote:
> Here are some comparison plots (by Wray Buntine) that illustrate
> differences in perplexity versus running time. They refer to
> discrete component analysis (DCA), which is another hierarchical
> Bayesian model of mixed membership.
> Edo Airoldi
> Machine Learning & Statistics
> School of Computer Science    Office: (412) 268-7527
> Carnegie Mellon University    Fax:    (412) 268-1744
> 5000 Forbes Avenue            e-mail: edo at cmu.edu
> Pittsburgh, PA 15213          web:    www.cs.cmu.edu/
> On Sep 27, 2006, at 12:39 PM, Joel Reymont wrote:
>> Where can I read about the differences between variational EM (used in
>> the original Blei paper) and Gibbs sampling as applied to LDA?
>> What are the advantages of using sampling vs EM?
>> Thanks, Joel
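
For the other side of Joel's comparison, here is a minimal collapsed Gibbs sweep for LDA in the spirit of Griffiths & Steyvers (2004); a sketch under assumed symmetric priors alpha and eta, with illustrative count-array names (n_dk, n_kw, n_k), not code from anyone on this thread. Each token's topic is resampled from its conditional given all other assignments, which makes per-iteration cost low but progress stochastic, in contrast to the deterministic variational updates above.

import numpy as np

def gibbs_sweep(docs, z, n_dk, n_kw, n_k, alpha=0.1, eta=0.01):
    """One sweep of collapsed Gibbs sampling over all token assignments.

    docs : list of documents, each a list of word ids
    z    : z[d][n], current topic of token n in document d
    n_dk : (D, K) document-topic counts
    n_kw : (K, V) topic-word counts;  n_k : (K,) tokens per topic
    """
    K, V = n_kw.shape
    for d, doc in enumerate(docs):
        for n, w in enumerate(doc):
            k_old = z[d][n]
            # remove this token's current assignment from the counts
            n_dk[d, k_old] -= 1
            n_kw[k_old, w] -= 1
            n_k[k_old] -= 1
            # conditional p(z_dn = k | all other z, words) in the collapsed model
            p = (n_dk[d] + alpha) * (n_kw[:, w] + eta) / (n_k + V * eta)
            k_new = np.random.choice(K, p=p / p.sum())
            z[d][n] = k_new
            # add the token back under its new assignment
            n_dk[d, k_new] += 1
            n_kw[k_new, w] += 1
            n_k[k_new] += 1
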