Latent Dirichlet Allocation
Latent Dirichlet Allocation (also called LDA, see http://en.wikipedia.org/wiki/Latent_Dirichlet_allocation) is
"a generative model that allows sets of observations to be explained by unobserved groups that explain why some parts of the data are similar. For example, if observations are words collected into documents, it posits that each document is a mixture of a small number of topics and that each word's creation is attributable to one of the document's topics."
In 2010, Matt Hoffman published a way to perform LDA "online", moving toward a solution in small batches; see http://www.cs.princeton.edu/~mdhoffma/. He made his code available in Python and also wrote an implementation for Vowpal Wabbit.
This video tutorial is useful, but refers to version 5.0: http://videolectures.net/nipsworkshops2010_langford_vow/
This tutorial is similar, but with more up-to-date command-line arguments: https://github.com/JohnLangford/vowpal_wabbit/wiki/lda.pdf
The `utl` directory now has a Python utility called `vw-lda` which makes interacting with `vw`'s LDA mode much easier. `utl/vw-lda` does all the necessary pre-processing to convert documents to `vw --lda` format, runs `vw --lda` with good default parameters (which you may optionally override from the command line), and finally post-processes the results to print all topics in human-readable format, in order of importance. Credit: Chetan Ganjihal.
The C++ implementation defines the following parameters:

```
Latent Dirichlet Allocation:
  --lda arg                           Run lda with <int> topics
  --lda_alpha arg (=0.100000001)      Prior on sparsity of per-document topic
                                      weights
  --lda_rho arg (=0.100000001)        Prior on sparsity of topic distributions
  --lda_D arg (=10000)                Number of documents
  --lda_epsilon arg (=0.00100000005)  Loop convergence threshold
  --minibatch arg (=1)                Minibatch size, for LDA
  --math-mode arg (=0)                Math mode: simd, accuracy, fast-approx
  --metrics arg (=0)                  Compute metrics
```
Run `vw -h --lda 1` to look through this help text, or jump directly into the source code of the `vw` utility.
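Putting these parameters together, a typical fit might look like the following sketch. The file names and parameter values here are illustrative assumptions, not recommendations from this page.

```sh
# A minimal sketch: fit 10 topics over a corpus of roughly 10000 documents.
# Each input line in docs.vw is one unlabeled example: a bar followed by the
# document's words as features, optionally with counts, e.g. "| dog cat:3 fish".
# --passes > 1 requires a cache file; -p writes per-document predictions
# (in LDA mode, the per-document topic proportions).
vw --lda 10 --lda_D 10000 --lda_alpha 0.1 --lda_rho 0.1 \
   --minibatch 256 --passes 2 --cache_file lda.cache \
   -d docs.vw --readable_model topics.txt -p predictions.txt
```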
VW can output the topic model in a human-readable textual form if the `--readable_model` command-line parameter is specified while fitting the model. The output format is the following:
- the file starts with a preamble (currently 10 lines) that describes VW version info and model-fitting parameters
- each subsequent line corresponds to a word in the model dictionary and is prefixed with the word ID; the number of lines is dictated by the choice of feature table size (`-b`)
- columns 2 through n represent the per-word topic distributions; the number of topics is specified using the `--lda` parameter
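As a rough illustration of that layout, the top-weighted word IDs for the first topic can be pulled out with standard tools. The file name and the 10-line preamble length are assumptions; adjust to what your VW version actually emits.

```sh
# A sketch, assuming topics.txt was written by --readable_model:
# skip the 10-line preamble, then sort by column 2 (topic 1's weight)
# in descending numeric order and keep the ten highest-weighted word IDs.
tail -n +11 topics.txt | sort -g -r -k2,2 | head -n 10
```

Note that the first column holds hashed word IDs rather than the words themselves, so mapping IDs back to words requires keeping your own dictionary (which is part of what `utl/vw-lda` automates).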
The audit output from VW for LDA may include interaction terms between features (i.e. "feat1^feat2:[hash]\t[topic1-weight]\t[topic2-weight]..."), which makes it difficult to attribute a single feature to a single topic and feature weight. In this case, you may want to include a dummy namespace after the bar in the training data, as in the sketch below.
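For instance, a line like `| apple banana cherry` can instead be written with an explicit dummy namespace (the name `d` here is an arbitrary choice):

```
|d apple banana cherry
```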