Re: [Cleveland-AI-ML-support-group] Topic modeling w/ neural nets

From: Timmy W.
Sent on: Tuesday, November 8, 2011 12:27 PM
>  I haven't found any softmax code yet.

Andrew Ng has a Softmax Regression tutorial here:

http://ufldl.stan...
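In the meantime, here is a minimal numpy sketch of softmax regression itself (this is my own toy example, not code from Ng's tutorial): fit weights by batch gradient descent on the cross-entropy loss, where the gradient with respect to the logits is simply (predicted probabilities - one-hot labels).

```python
import numpy as np

def softmax(z):
    # subtract the row max for numerical stability before exponentiating
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# tiny synthetic 3-class problem: one Gaussian cluster per class
rng = np.random.RandomState(0)
X = np.vstack([rng.randn(50, 2) + c for c in ([0, 0], [5, 0], [0, 5])])
y = np.repeat([0, 1, 2], 50)
Y = np.eye(3)[y]                      # one-hot labels, shape (150, 3)

W = np.zeros((2, 3))
b = np.zeros(3)
lr = 0.1
for _ in range(200):
    P = softmax(X @ W + b)            # predicted class probabilities
    grad = P - Y                      # gradient of cross-entropy w.r.t. logits
    W -= lr * (X.T @ grad) / len(X)
    b -= lr * grad.mean(axis=0)

acc = (softmax(X @ W + b).argmax(axis=1) == y).mean()
```

On well-separated clusters like these, accuracy should be close to 1.0 after a couple hundred steps.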



On Mon, Nov 7, 2011 at 6:48 PM, Joe <[address removed]> wrote:
>
>  I tried Ruslan's Matlab code for RBM digit recognition in Octave and it worked. I followed the directions on this page, then ran 'demo' from Octave:
> http://www.mit.ed...
>
> Apparently most Matlab code works in Octave.
>
>  I haven't found any softmax code yet. The best bet looks like working on modifying the above.
>
> Here is a good short tutorial on graphical models and Bayes nets:
> http://www.cs.ubc...
>
> Joe
>
>
>
> --- On Sun, 11/6/11, Timmy Wilson <[address removed]> wrote:
>
> From: Timmy Wilson <[address removed]>
> Subject: [Cleveland-AI-ML-support-group] Topic modeling w/ neural nets
> To: [address removed]
> Date: Sunday, November 6, 2011, 12:42 PM
>
> Inspired by these two great talks:
>
> - Geoffrey Hinton -- The Next Generation of Neural Networks --
> http://www.youtub...
>
> - Andrew Ng -- Unsupervised Feature Learning and Deep Learning --
> http://www.youtub...
>
> i'm interested in using deep learning to model latent topics
>
> i did some digging, and found Ruslan Salakhutdinov's -- Replicated
> Softmax: an Undirected Topic Model --
> http://www.mit.ed...
>
> "
> The model can be efficiently trained using Contrastive
> Divergence, it has a better way of dealing with documents
> of different lengths, and computing the posterior distribution
> over the latent topic values is easy. We will also demonstrate
> that the proposed model is able to generalize much better
> compared to a popular Bayesian mixture model, Latent
> Dirichlet Allocation (LDA) [2], in terms of both the
> log-probability on previously unseen documents and the
> retrieval accuracy.
> "
>
> and
>
> "
> The proposed model has several key advantages: the
> learning is easy and stable, it can model documents of
> different lengths, and computing the posterior distribution
> over the latent topic values is easy. Furthermore, using
> stochastic gradient descent, scaling up learning to billions
> of documents would not be particularly difficult.
> "
>
> i want to 'cobble together' a distributed python implementation --
> she'll feel right at home in http://radimrehur... -- if
> Radim will have her :]
>
> i figured i'd spam everyone that may be interested, and ask/plead for
> help/existing code examples
>
>
>
> --
> Please Note: If you hit "REPLY", your message will be sent to everyone on this mailing list ([address removed])
> http://www.meetup...
> This message was sent by Timmy Wilson ([address removed]) from Cleveland AI + ML support group.
> To learn more about Timmy Wilson, visit his/her member profile: http://www.meetup...
> To unsubscribe or to update your mailing list settings, click here: http://www.meetup...
> Meetup, PO Box 4668 #37895 New York, New York[masked] | [address removed]
