What is this about
We're developers and scientists interested in Machine Learning, Probabilistic Graphical Models, Neural Networks, and Natural Language Processing. In this meetup, we bring together machine learning practitioners and researchers to present and discuss each other's work.
Upcoming events (1)
Diffusion models for generating images: algorithms and (a bit of) theory
La Cantine Numérique - Coworking à Nantes, Nantes
Denoising diffusion models are the state-of-the-art method for image generation (Ho et al., 2020). The goal of this talk is to give a tutorial on the algorithms for training and sampling from these models. We will take a mathematician's perspective on the algorithms: rather than dwelling on implementation choices, we will focus on the mathematical grounding of the methods, which, interestingly, borrows ideas from the 2000s (Hyvärinen, 2005; Vincent, 2011). The talk is inspired by the notes of Coste (2023). If time allows, we will present recent, more personal contributions on fine-tuning diffusion models and other stochastic samplers (Marion et al., 2024).
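As a concrete illustration of the two algorithms the talk covers, here is a minimal NumPy sketch of the DDPM framework of Ho et al. (2020): the closed-form forward noising process, the standard noise-prediction training loss, and one step of ancestral sampling. The linear beta schedule and the placeholder noise model are illustrative assumptions of this sketch, not material from the talk.

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)   # linear noise schedule (an assumption)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)      # abar_t = prod_{s <= t} alpha_s

def noising(x0, t, rng):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(abar_t) x_0, (1 - abar_t) I)."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    return xt, eps

def ddpm_loss(eps_model, x0, t, rng):
    """Training objective: the network predicts the noise added at step t."""
    xt, eps = noising(x0, t, rng)
    return np.mean((eps_model(xt, t) - eps) ** 2)

def sampling_step(eps_model, xt, t, rng):
    """One ancestral sampling step: draw x_{t-1} given x_t."""
    eps_hat = eps_model(xt, t)
    mean = (xt - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps_hat) / np.sqrt(alphas[t])
    if t == 0:
        return mean
    return mean + np.sqrt(betas[t]) * rng.standard_normal(xt.shape)
```

In practice `eps_model` is a neural network trained by minimizing `ddpm_loss` over random data points and random timesteps; generation then iterates `sampling_step` from pure Gaussian noise at t = T-1 down to t = 0.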
Bibliography
Coste, Notes on diffusion models, 2023. Accessible at https://scoste.fr/posts/diffusion/.
Ho, Jain, Abbeel, Denoising diffusion probabilistic models. NeurIPS 2020.
Hyvärinen, Estimation of non-normalised statistical models by score matching. Journal of Machine Learning Research, 6(24):695-709, 2005.
Marion, Korba, Bartlett, Blondel, De Bortoli, Doucet, Llinares-Lopez, Paquette, Berthet, Implicit diffusion: efficient optimization through stochastic sampling. arXiv:2402.05468, 2024.
Vincent, A connection between score matching and denoising autoencoders. Neural Computation, 23(7): 1661-1674, 2011.
Biography:
Pierre Marion is a postdoctoral researcher at EPFL in Lausanne, Switzerland. He graduated from Ecole polytechnique with an engineering degree, then from Sorbonne Université with a PhD in mathematics. He has done research stays and internships at the University of Montreal, Google, and UC Berkeley. His research focuses on the theory of deep learning, more precisely on the optimization and statistical properties of neural networks.