Stan: A Bayesian Directed Graphical Model Compiler

  • Nov 14, 2012 · 6:30 PM

Matt Hoffman will be presenting "Stan: A Bayesian Directed Graphical Model Compiler".

 

About the talk:

Matt will present an overview of Stan, a general compiler for Bayesian directed graphical models. Users provide a model definition and specify some variables as data. Stan then draws samples of the unknown variables from the posterior distribution, which may be used for full Bayesian inference.
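
As a toy illustration of that workflow, here is a Python sketch (my own, not from the talk) that does by hand what Stan automates: fix a model, condition on observed data, and summarize posterior draws. The Beta-Bernoulli model is chosen only because its posterior has a closed form, so no sampler is needed.

    import numpy as np

    # Toy model: y[i] ~ Bernoulli(theta), with prior theta ~ Beta(1, 1).
    y = np.array([1, 0, 1, 1, 0, 1, 1, 1])   # the "data" the user supplies

    # Conjugacy gives the posterior exactly:
    # theta | y ~ Beta(1 + sum(y), 1 + n - sum(y)).
    rng = np.random.default_rng(0)
    draws = rng.beta(1 + y.sum(), 1 + len(y) - y.sum(), size=4000)

    # Full Bayesian inference = summarizing the posterior draws.
    print("posterior mean:", draws.mean())
    print("95% interval:", np.percentile(draws, [2.5, 97.5]))

In a non-conjugate model no such closed form exists, which is where Stan's MCMC machinery comes in.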

Stan provides an extension of the BUGS graphical modeling language. Rather than being interpreted, it is compiled to C++ for both speed and scalability.

Inference is performed using an adaptive form of Hamiltonian Monte Carlo (HMC), a Markov chain Monte Carlo (MCMC) method based on gradients of the log probability function. Gradients are calculated automatically using reverse-mode algorithmic differentiation. HMC's main advantage is being able to sample from densities with highly correlated and/or constrained variables (such as logistic regressions with correlated features or multilevel time series with covariance structure), which cause simpler methods like Gibbs sampling or random-walk Metropolis-Hastings to grind to a halt.
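
For intuition, the following bare-bones HMC sampler in Python targets a strongly correlated 2-D Gaussian, exactly the kind of density where a random-walk sampler struggles. The fixed step size, fixed path length, and hand-coded gradient are simplifications on my part; Stan adapts the former and automates the latter.

    import numpy as np

    # Target: zero-mean Gaussian with correlation 0.95.
    cov = np.array([[1.0, 0.95], [0.95, 1.0]])
    prec = np.linalg.inv(cov)

    def log_p(q):
        return -0.5 * q @ prec @ q

    def grad_log_p(q):        # hand-coded here; Stan derives this automatically
        return -prec @ q

    def hmc_step(q, rng, eps=0.1, n_leapfrog=20):
        p = rng.standard_normal(q.shape)          # draw a fresh momentum
        q_new, p_new = q.copy(), p.copy()
        # Leapfrog integration of the Hamiltonian dynamics.
        p_new += 0.5 * eps * grad_log_p(q_new)
        for _ in range(n_leapfrog - 1):
            q_new += eps * p_new
            p_new += eps * grad_log_p(q_new)
        q_new += eps * p_new
        p_new += 0.5 * eps * grad_log_p(q_new)
        # Metropolis correction for the discretization error.
        h_old = -log_p(q) + 0.5 * p @ p
        h_new = -log_p(q_new) + 0.5 * p_new @ p_new
        return q_new if np.log(rng.uniform()) < h_old - h_new else q

    rng = np.random.default_rng(1)
    q, draws = np.zeros(2), []
    for _ in range(2000):
        q = hmc_step(q, rng)
        draws.append(q)
    print(np.cov(np.array(draws).T))   # should be close to cov

Because each proposal follows the gradient for many leapfrog steps, it can travel along the correlated ridge instead of diffusing across it.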

I'll start with an overview of the principles of Bayesian inference, including the smooth transition from supervised to semi-supervised to unsupervised models.

I'll supply examples of Stan's modeling language, describe HMC and our adaptive sampler at a high level, and explain how reverse-mode algorithmic differentiation works, how we transform constrained parameters (such as simplexes or covariance matrices), and how we marginalize out discrete parameters directly in the modeling language.
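
To preview just the reverse-mode idea, here is a minimal, self-contained Python sketch (not Stan's C++ implementation): each operation records its local derivatives, and a single backward sweep in topological order accumulates the gradient of the output with respect to every input.

    import math

    class Var:
        """One node in the computation graph: a value plus local derivatives."""
        def __init__(self, value, parents=()):
            self.value = value
            self.parents = parents   # pairs of (parent node, d(self)/d(parent))
            self.grad = 0.0

        def __add__(self, other):
            return Var(self.value + other.value, ((self, 1.0), (other, 1.0)))

        def __mul__(self, other):
            return Var(self.value * other.value,
                       ((self, other.value), (other, self.value)))

    def log(x):
        return Var(math.log(x.value), ((x, 1.0 / x.value),))

    def backward(out):
        """One reverse sweep propagates d(out)/d(node) to every node."""
        order, seen = [], set()
        def visit(node):          # depth-first search gives a topological order
            if id(node) not in seen:
                seen.add(id(node))
                for parent, _ in node.parents:
                    visit(parent)
                order.append(node)
        visit(out)
        out.grad = 1.0
        for node in reversed(order):
            for parent, local in node.parents:
                parent.grad += node.grad * local

    # d/dx of x*log(x) + x at x = 2 should be log(2) + 2.
    x = Var(2.0)
    f = x * log(x) + x
    backward(f)
    print(x.grad, math.log(2.0) + 2.0)

The backward sweep costs about as much as evaluating the function itself, which is why one gradient of a log posterior with thousands of parameters stays affordable.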

I'll also discuss convergence monitoring, calculating effective sample sizes in MCMC and standard errors of MCMC estimates, as well as some automatic testing methods for probabilistic models.
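
As a rough sketch of the effective-sample-size calculation (assuming a single chain and a crude rule that truncates the autocorrelation sum at the first negative lag, which is simpler than what Stan does):

    import numpy as np

    def effective_sample_size(x, max_lag=1000):
        """Crude single-chain ESS: N / (1 + 2 * sum of positive autocorrelations)."""
        x = np.asarray(x, dtype=float)
        n = len(x)
        xc = x - x.mean()
        var = xc @ xc / n
        tau = 1.0                              # integrated autocorrelation time
        for k in range(1, max_lag):
            rho = (xc[:-k] @ xc[k:]) / (n * var)
            if rho < 0:                        # truncate at first negative lag
                break
            tau += 2.0 * rho
        return n / tau

    # AR(1) chain: high autocorrelation means far fewer effective draws.
    rng = np.random.default_rng(2)
    x = np.zeros(10_000)
    for t in range(1, len(x)):
        x[t] = 0.9 * x[t - 1] + rng.standard_normal()

    ess = effective_sample_size(x)
    mcse = x.std(ddof=1) / np.sqrt(ess)   # Monte Carlo standard error of the mean
    print(f"N = {len(x)}, ESS ~ {ess:.0f}, MCSE of the mean ~ {mcse:.3f}")

The standard error of an MCMC estimate then uses ESS in place of the raw draw count, which is how correlated draws get discounted.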

Stan is available from http://mc-stan.org under the new BSD license.

Joint work with: Andrew Gelman, Bob Carpenter, Daniel Lee, Wei Wang, Jiqiang Guo, Ben Goodrich, and Michael Malecki.

 

About the speaker:

Matt Hoffman is a research scientist in the Creative Technologies Laboratory at Adobe. Before that, he was a postdoc working with Prof. Andrew Gelman in the Statistics Department at Columbia University. He earned his Ph.D. in Computer Science at Princeton University, working in the Sound Lab with Prof. Perry Cook and Prof. David Blei. His research interests include developing efficient Bayesian (and pseudo-Bayesian) inference algorithms; hierarchical probabilistic modeling of audio, text, and marketing data; and audio feature extraction, music information retrieval, and the application of music information retrieval and modeling techniques to musical synthesis.


Meetup Schedule:

6:30-6:45 -- socializing

6:45-7:00 -- lightning talks

7:00-8:00 -- main presentation

8:00-8:30 -- socializing

 
