Ricardo Monti from Imperial College London will be presenting Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers by Stephen Boyd, Neal Parikh, Eric Chu, Borja Peleato and Jonathan Eckstein.
Doors open at 6:30 pm; the presentation will begin at 7:00 pm.
We hope that you'll take a look at the paper (http://stanford.edu/~boyd/papers/pdf/admm_distr_stats.pdf) before the Meetup (and if you don't, no worries).
Many modern statistics and machine learning problems can be recast as optimization problems where we look to minimize a convex objective function.
The objective function encodes the properties we want our answer to possess: a popular example is sparsity (i.e., we seek a sparse answer, often for the sake of interpretability).
However, it is often the case that minimizing such objective functions is difficult. While there have been many solutions proposed in the literature, each of these tends to focus on a specific problem.
The proposed paper describes a powerful optimization algorithm called the Alternating Direction Method of Multipliers (ADMM) - a conceptually simple method for solving a wide range of convex optimization problems in an "off-the-shelf" manner.
The ADMM algorithm works by exploiting the separable structure of many objective functions. By splitting the objective into a likelihood term and a penalization term, it is able to iteratively minimize the objective function with respect to each. As it turns out, each of these optimization steps is usually much simpler than the original problem (and in many cases has a closed form solution!). As a result, complex, high-dimensional problems can be broken down into a series of simple, 1-dimensional problems.
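To make the splitting idea concrete, here is a minimal sketch (not code from the paper) of ADMM applied to the lasso, one of the examples the paper covers: minimize 0.5*||Ax - b||^2 + lam*||z||_1 subject to x = z. The x-update is a ridge-like linear solve and the z-update is elementwise soft-thresholding, so both steps have closed forms. Function names, the test problem, and all parameter values here are illustrative choices, not taken from the paper.

```python
import numpy as np

def soft_threshold(v, k):
    # Elementwise soft-thresholding: the proximal operator of k * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def lasso_admm(A, b, lam, rho=1.0, iters=200):
    """ADMM for the lasso: minimize 0.5*||Ax - b||^2 + lam*||z||_1, s.t. x = z."""
    n = A.shape[1]
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)  # scaled dual variable
    # Factor (A^T A + rho I) once; it is reused in every x-update
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    for _ in range(iters):
        # x-update: closed-form ridge-like solve via the cached Cholesky factor
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        # z-update: closed-form, separable across coordinates (1-D subproblems)
        z = soft_threshold(x + u, lam / rho)
        # dual update: accumulate the running disagreement between x and z
        u = u + x - z
    return z

# Tiny illustrative problem: recover a 2-sparse signal from noisy measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[[2, 7]] = [3.0, -2.0]
b = A @ x_true + 0.01 * rng.standard_normal(50)
x_hat = lasso_admm(A, b, lam=1.0)
```

Note how the z-update acts on each coordinate independently: that is the sense in which a high-dimensional problem decomposes into simple one-dimensional pieces, while the expensive matrix factorization happens once, outside the loop.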