[Topic-models] Help build an evaluated topic dataset

Swapnil Hingmire swapnil.hingmire at tcs.com
Mon Jun 20 04:04:44 EDT 2016


Hi,

I have a few questions regarding the human evaluation of topic models.

Comparing two or more models:
    Suppose we have inferred two different topic models (M1 and M2) on the same corpus; for example, let M1 denote vanilla LDA and M2 denote correspondence LDA (Corr-LDA). I would like to determine which model has inferred the more coherent topics. My questions are:
    1. How many topics from each model should be shown to the evaluators? (Say we have inferred 100 topics on the NIPS corpus with both M1 and M2; should we show all 100 topics from each model?)
    2. As David mentioned, we are using a coherence scale from 1 to 5. How should we aggregate the per-topic coherence ratings into a single coherence score for M1 (or M2)?
    3. How can we say that the topics inferred by M2 are "significantly" more coherent than those inferred by M1 (or vice versa)? (One possible way to handle (2) and (3) is sketched below.)
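
For example, one way to operationalize (2) and (3) might be: average the 1-to-5 ratings for each topic, take the mean over topics as the model-level score, and test the observed difference between M1 and M2 with a two-sided permutation test. The sketch below is plain JavaScript (to match the Apps Script later in this thread); the function names and any inputs are hypothetical, not part of David's setup.

// meanOf: average of an array of numbers.
function meanOf(xs) {
  return xs.reduce(function (a, b) { return a + b; }, 0) / xs.length;
}

// ratingsM1 / ratingsM2: one mean human rating (1-5) per topic for each model.
// Returns the observed difference in model-level means and a permutation p-value.
function compareCoherence(ratingsM1, ratingsM2, numPermutations) {
  var observed = Math.abs(meanOf(ratingsM1) - meanOf(ratingsM2));
  var pooled = ratingsM1.concat(ratingsM2);
  var n1 = ratingsM1.length;
  var atLeastAsExtreme = 0;
  for (var i = 0; i < numPermutations; i++) {
    // Fisher-Yates shuffle of the pooled ratings, then resplit at random.
    for (var j = pooled.length - 1; j > 0; j--) {
      var k = Math.floor(Math.random() * (j + 1));
      var tmp = pooled[j]; pooled[j] = pooled[k]; pooled[k] = tmp;
    }
    var diff = Math.abs(meanOf(pooled.slice(0, n1)) - meanOf(pooled.slice(n1)));
    if (diff >= observed) atLeastAsExtreme++;
  }
  return { meanDifference: observed, pValue: atLeastAsExtreme / numPermutations };
}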

I would appreciate a discussion of these questions.


Thanks and Regards,
Swapnil Hingmire
 
 

-----topic-models-bounces at lists.cs.princeton.edu wrote: -----
To: "topic-models at lists.cs.princeton.edu" <topic-models at lists.cs.princeton.edu>
From: David Mimno 
Sent by: topic-models-bounces at lists.cs.princeton.edu
Date: 06/09/2016 08:59PM
Subject: [Topic-models] Help build an evaluated topic dataset

We need more examples of human-evaluated topic models. I trained a 50-topic model on questions and answers from the CrossValidated site, http://stats.stackexchange.com/. The data are freely available from archive.org. Evaluate the topics here:

http://goo.gl/forms/EJRfg5vSFMF4Wc7u1

(Can you find the topic modeling topic?)

If I get enough non-troll responses, I'll post the documents, the Mallet state file, and the response spreadsheet on a github repo.
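
For readers new to Mallet, a 50-topic model with a saved state file can be trained along roughly these lines (an illustrative sketch with placeholder file names, not the exact commands used for this model):

bin/mallet import-file --input posts.txt --output posts.mallet --keep-sequence --remove-stopwords
bin/mallet train-topics --input posts.mallet --num-topics 50 --output-state topic-state.gz --output-topic-keys topic-keys.txt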

To create this form I went to http://scripts.google.com and used this code:

function createForm() {
  // Create a Google Form with one 1-5 scale question per topic.
  var form = FormApp.create('Topic Coherence')
      .setDescription("Each list of terms represents a topic. Evaluate each topic's coherence on a scale from 1 to 5. Does a topic contain terms that you would expect to see together on a page? Does it contain terms that would work together as search queries? Could you easily think of a short descriptive label? A Coherent topic (5) should be clear, consistent, and readily interpretable. A Problematic topic (3) should have some related words but might merge two unrelated concepts or contain several off-topic words. A Useless topic (1) should have no obvious connection between more than two or three words.");

  // The top terms of each of the 50 topics (most of the list is elided here).
  var topics = ["time series data model trend noise signal period change seasonal autocorrelation level arima structure analysis process spatial trends frequency lag",
    // ...,
    "distribution random normal distributions variables variance independent variable distributed sigma probability gaussian poisson case uniform process theorem function mixture sample"];

  // Add one scale question per topic, anchored at 1 = Useless and 5 = Coherent.
  topics.forEach(function (topic) {
    form.addScaleItem()
        .setTitle(topic)
        .setBounds(1, 5)
        .setLabels("Useless", "Coherent");
  });

  // Run createForm from the script editor; the form URLs are written to the log.
  Logger.log('Published URL: ' + form.getPublishedUrl());
  Logger.log('Editor URL: ' + form.getEditUrl());
}
_______________________________________________
Topic-models mailing list
Topic-models at lists.cs.princeton.edu
https://lists.cs.princeton.edu/mailman/listinfo/topic-models