
Workshop #3: The Future of Random Matrices

Hosted by Igor C.

Details

The full program will be provided later. In the meantime, we have:

Remi Gribonval, INRIA
Title: Differentially Private Compressive Learning - Large-scale learning with the memory of a goldfish

Abstract: Inspired by compressive sensing, Compressive Statistical Learning allows drastic volume and dimension reduction when learning from large/distributed/streamed data collections. The principle is to exploit random projections to compute a low-dimensional (nonlinear) sketch (a vector of random empirical generalized moments) in essentially one pass over the training collection. Sketches of controlled size have been shown to capture the information relevant to certain learning tasks such as unsupervised clustering, Gaussian mixture modeling, or PCA. As a proof of concept, more than a thousand hours of speech recordings can be distilled into a sketch of only a few kilobytes, capturing enough information to estimate a Gaussian Mixture Model for speaker verification. The talk will highlight the main features of this framework, including statistical learning guarantees and differential privacy.

Joint work with Antoine Chatalic (IRISA, Rennes), Vincent Schellekens & Laurent Jacques (Univ Louvain, Belgium), Florimond Houssiau & Yves-Alexandre de Montjoye (Imperial College, London, UK), Nicolas Keriven (ENS Paris), Yann Traonmilin (Univ Bordeaux), and Gilles Blanchard (IHES)
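
To make the sketching idea concrete, here is a minimal, hypothetical illustration in Python: it averages random Fourier features over a data stream in a single pass, producing a fixed-size summary whose length is independent of the number of samples. The names (sketch_dataset, omega) and the toy data are illustrative assumptions, not taken from the speakers' code.

```python
import numpy as np

def sketch_dataset(data_stream, omega):
    """Accumulate the empirical sketch z = mean_x exp(i * omega^T x).

    data_stream: iterable of 1-D numpy arrays (the samples)
    omega: (d, m) matrix of random frequencies defining the sketch
    """
    m = omega.shape[1]
    z = np.zeros(m, dtype=complex)
    n = 0
    for x in data_stream:
        z += np.exp(1j * (x @ omega))  # one random generalized moment per column
        n += 1
    return z / n  # fixed-size summary, independent of n

# Example: distill 10,000 points in R^10 into m = 256 complex moments.
rng = np.random.default_rng(0)
d, m = 10, 256
omega = rng.normal(size=(d, m))
stream = (rng.normal(size=d) for _ in range(10_000))
z = sketch_dataset(stream, omega)
print(z.shape)  # (256,)
```

A learning algorithm would then fit model parameters (e.g., a Gaussian mixture) to match this sketch, never revisiting the raw data.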

Alessandro Rudi, INRIA
Title: Scaling-up Large Scale Kernel Learning
Abstract: TBA

Julien Launay, LightOn
Title: "Beyond backpropagation: alternative training methods for neural networks"

Abstract: Backpropagation has long been the de facto choice for training neural networks. Modern paradigms are implicitly optimized for it, and numerous guidelines exist to ensure its proper use. Yet it is not without flaws: issues abound, from preventing effective parallelisation of the backward pass to a lack of biological realism. This has motivated the development of numerous alternative methods, most of which have failed to scale up past toy problems like MNIST or CIFAR-10.
In this talk, we explore some recently developed training algorithms, and try to explain why they have failed to match the gold standard that is backpropagation. In particular, we focus on feedback alignment methods, and demonstrate a path to a better understanding of their underlying mechanics.
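
As a rough illustration of the feedback alignment idea (a sketch of the technique from Lillicrap et al., 2016, not the speaker's own code), the example below trains a two-layer network on a toy regression task. The backward pass replaces the transpose of the output weights W2 with a fixed random matrix B; all names, sizes, and hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_h, d_out, n = 20, 64, 5, 512

X = rng.normal(size=(n, d_in))
W_true = rng.normal(size=(d_in, d_out))
Y = X @ W_true                          # toy linear regression target

W1 = rng.normal(scale=0.1, size=(d_in, d_h))
W2 = rng.normal(scale=0.1, size=(d_h, d_out))
B = rng.normal(scale=0.1, size=(d_out, d_h))   # fixed random feedback matrix

lr = 1e-2
for step in range(500):
    H = np.tanh(X @ W1)                 # forward pass
    Y_hat = H @ W2
    E = Y_hat - Y                       # output error (squared-loss gradient)

    # Backpropagation would compute E @ W2.T here;
    # feedback alignment uses the fixed random matrix B instead.
    dH = (E @ B) * (1 - H**2)           # tanh'(a) = 1 - tanh(a)^2

    W2 -= lr * H.T @ E / n
    W1 -= lr * X.T @ dH / n

print(f"final MSE: {np.mean((np.tanh(X @ W1) @ W2 - Y)**2):.4f}")
```

Despite the feedback weights being random and never updated, the forward weights tend to "align" with them during training, which is what makes learning possible at all.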

LightOn Artificial Intelligence meetup
IPGG - PC'UP
6 rue Jean Calvin · Paris