May 20 - Image Generation: Diffusion Models & U-Net Workshop

Details
When and Where
- May 20, 2025
- 6:30 PM to 8:30 PM CEST | 9:30 AM to 11:30 AM Pacific
- Workshops are delivered over Zoom
About the Workshop
Join us for a 12-part, hands-on series that teaches you how to work with images, build and train models, and explore tasks like image classification, segmentation, object detection, and image generation. Each session combines straightforward explanations with practical coding in PyTorch and FiftyOne, allowing you to learn core skills in computer vision and apply them to real-world tasks.
In this session, we’ll explore image generation techniques using diffusion models. Participants will build a U-Net-based model to generate MNIST-like images and then inspect the generated outputs with FiftyOne.
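To give a flavor of the workflow, here is a minimal sketch of what we will build, assuming a DDPM-style setup: a tiny U-Net learns to predict the noise added to MNIST digits, reverse diffusion generates new digits, and the results are loaded into FiftyOne for inspection. The architecture, noise schedule, and hyperparameters below are illustrative assumptions, not the workshop's exact code.

```python
# Sketch: DDPM-style training of a tiny U-Net on MNIST + FiftyOne inspection.
# All sizes, schedules, and names are illustrative, not the workshop's actual code.
import os
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
from torchvision.utils import save_image
import fiftyone as fo

device = "cuda" if torch.cuda.is_available() else "cpu"

class TinyUNet(nn.Module):
    """Two-resolution U-Net (28x28 -> 14x14 -> 28x28) with one skip connection."""
    def __init__(self, ch=32):
        super().__init__()
        self.time_mlp = nn.Sequential(nn.Linear(1, ch), nn.SiLU(), nn.Linear(ch, ch))
        self.down1 = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.SiLU())
        self.down2 = nn.Sequential(nn.Conv2d(ch, 2 * ch, 3, stride=2, padding=1), nn.SiLU())
        self.mid = nn.Sequential(nn.Conv2d(2 * ch, 2 * ch, 3, padding=1), nn.SiLU())
        self.up1 = nn.Sequential(nn.ConvTranspose2d(2 * ch, ch, 4, stride=2, padding=1), nn.SiLU())
        self.out = nn.Conv2d(2 * ch, 1, 3, padding=1)  # concat(skip, up) -> noise prediction

    def forward(self, x, t):
        # t: diffusion timesteps scaled to [0, 1], shape (B,)
        emb = self.time_mlp(t[:, None])              # (B, ch)
        h1 = self.down1(x) + emb[:, :, None, None]   # (B, ch, 28, 28)
        h2 = self.mid(self.down2(h1))                # (B, 2ch, 14, 14)
        u1 = self.up1(h2)                            # (B, ch, 28, 28)
        return self.out(torch.cat([u1, h1], dim=1))  # predicted noise

# Linear beta schedule (DDPM)
T = 200
betas = torch.linspace(1e-4, 0.02, T, device=device)
alphas = 1.0 - betas
alpha_bar = torch.cumprod(alphas, dim=0)

transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,))])
loader = DataLoader(datasets.MNIST("data", train=True, download=True, transform=transform),
                    batch_size=128, shuffle=True)

model = TinyUNet().to(device)
opt = torch.optim.Adam(model.parameters(), lr=2e-4)

for epoch in range(3):  # increase for sharper samples
    for x, _ in loader:
        x = x.to(device)
        t = torch.randint(0, T, (x.size(0),), device=device)
        noise = torch.randn_like(x)
        ab = alpha_bar[t][:, None, None, None]
        x_t = ab.sqrt() * x + (1 - ab).sqrt() * noise    # forward diffusion q(x_t | x_0)
        loss = F.mse_loss(model(x_t, t.float() / T), noise)
        opt.zero_grad()
        loss.backward()
        opt.step()

@torch.no_grad()
def sample(n=16):
    """Reverse diffusion: start from Gaussian noise and denoise step by step."""
    x = torch.randn(n, 1, 28, 28, device=device)
    for i in reversed(range(T)):
        t = torch.full((n,), float(i), device=device)
        eps = model(x, t / T)
        a, ab, b = alphas[i], alpha_bar[i], betas[i]
        x = (x - (1 - a) / (1 - ab).sqrt() * eps) / a.sqrt()
        if i > 0:
            x = x + b.sqrt() * torch.randn_like(x)
    return x.clamp(-1, 1)

# Write generated digits to disk and inspect them in the FiftyOne App
os.makedirs("generated", exist_ok=True)
paths = []
for i, img in enumerate(sample(16)):
    path = f"generated/sample_{i:02d}.png"
    save_image((img + 1) / 2, path)  # map [-1, 1] back to [0, 1]
    paths.append(path)

dataset = fo.Dataset("generated-mnist-sketch")
dataset.add_samples([fo.Sample(filepath=p) for p in paths])
session = fo.launch_app(dataset)
```

In the hosted environments described below (GitHub Codespaces, Kaggle, Google Colab) this sketch runs end to end; a GPU runtime shortens training considerably.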
These are hands-on workshops that use GitHub Codespaces, Kaggle notebooks, and Google Colab environments, so no local installation is required (though you are welcome to work locally if you prefer).
Workshop Resources
You can find the workshop materials in this GitHub repository.
About the Instructor
Antonio Rueda-Toicen, an AI Engineer in Berlin, has extensive experience in deploying machine learning models and has taught over 300 professionals. He is currently a Research Scientist at the Hasso Plattner Institute. Since 2019, he has organized the Berlin Computer Vision Group and taught at Berlin’s Data Science Retreat. He specializes in computer vision, cloud technologies, and machine learning. Antonio is also a certified instructor of deep learning and diffusion models with NVIDIA’s Deep Learning Institute.
