Probabilistic inference, a widely used, mathematically rigorous approach for interpreting ambiguous information using models that are uncertain or incomplete, has become central to multiple fields, from big data analytics to robotics and AI to computational modeling of the mind and brain. Unfortunately, it currently requires deep technical expertise. Models and inference algorithms are difficult to communicate, design, implement, validate, and optimize, and inference often appears to be fundamentally intractable.
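To make the idea concrete, here is a minimal sketch of probabilistic inference under uncertainty: estimating a coin's unknown bias from noisy observations with a Beta-Bernoulli conjugate model. The function names are illustrative, not from any particular library.

```python
# Minimal sketch of probabilistic inference: a Beta(alpha, beta) prior
# over a coin's heads-probability is updated by observed flips.
# (Illustrative names; not from any specific framework.)

def beta_bernoulli_posterior(flips, alpha=1.0, beta=1.0):
    """Update a Beta(alpha, beta) prior with observed flips (1 = heads)."""
    heads = sum(flips)
    tails = len(flips) - heads
    return alpha + heads, beta + tails

def posterior_mean(alpha, beta):
    """Mean of a Beta(alpha, beta) distribution."""
    return alpha / (alpha + beta)

# 8 heads in 10 flips: the posterior mean moves toward the empirical 0.8
# but is pulled slightly back toward the uniform prior's 0.5.
a, b = beta_bernoulli_posterior([1, 1, 1, 1, 0, 1, 1, 0, 1, 1])
print(round(posterior_mean(a, b), 3))  # → 0.75
```

The prior encodes what is believed before seeing data; the posterior quantifies the remaining uncertainty after seeing it, which is exactly the ambiguity-handling the paragraph above describes.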
Probabilistic programming aims to solve these problems in two ways: by making modeling and inference broadly accessible to nonexperts, particularly for data analysis, and by enabling experts to tackle problems, especially in machine intelligence, that are currently infeasible. Probabilistic programming is based on new formalizations of modeling and inference that bring together key ideas from probability theory, programming languages, and Turing-universal computation.
The current rise of deep probabilistic machine learning lets us go beyond standard function approximation with deep neural networks. By bringing together symbolic reasoning, Bayesian statistics, and deep learning, deep probabilistic programming frameworks such as Edward (Columbia), Pyro (Uber), and Infer.NET (Microsoft) enable us to reason easily over complex models at scale.
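The core abstraction these frameworks share is the probabilistic program: a generative model plus a way to condition it on observations. Frameworks like Pyro express this declaratively and scale it with variational inference; the sketch below is a deliberately minimal hand-rolled version using rejection sampling, with an invented two-coin model, just to show the shape of the idea.

```python
import random

# A probabilistic program = a generative model + conditioning on data.
# This is a toy, hand-rolled sketch (rejection sampling), not how
# frameworks like Pyro implement inference internally.

def model():
    """Generative model: pick a coin type, then flip it twice."""
    fair = random.random() < 0.5            # prior: fair vs. biased coin
    p_heads = 0.5 if fair else 0.9          # biased coin lands heads 90%
    flips = [random.random() < p_heads for _ in range(2)]
    return fair, flips

def infer_p_fair_given_two_heads(n=100_000, seed=0):
    """Posterior P(fair | both flips heads), by rejection sampling:
    run the model n times and keep only runs matching the observation."""
    random.seed(seed)
    accepted = [fair for fair, flips in (model() for _ in range(n))
                if all(flips)]
    return sum(accepted) / len(accepted)

# Analytic answer: 0.25 / (0.25 + 0.81) ≈ 0.236
print(round(infer_p_fair_given_two_heads(), 2))
```

Rejection sampling works for toy models but wastes almost all samples on realistic ones; the frameworks above replace it with scalable engines (variational inference, Hamiltonian Monte Carlo) while keeping the same model-plus-conditioning programming interface.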