PyData PDX IRL: Text-to-Image Diffusion Models


Details
Hello my lovely data enthusiasts! The time has come once again for us to gather together under one roof and share the fruit of knowledge and also like a soda or something. Come and ...
Explore the New Hotness: Text-to-Image Diffusion Models
A few years ago, we marveled at photo-quality renderings of people who don't exist, or of imaginary cats.
Now, we can ask DALL·E 2 or Stable Diffusion to show us "Abraham Lincoln underwater playing the accordion, painted in the style of Monet" ... and a few seconds later the image pops up.
More recently, multiple text-to-video (!) papers have emerged. But how did we get here, and how do these so-called latent diffusion models work?
In this talk, we'll give a high-level but technical review of the mind-blowing developments in text-to-image generation over the past two years, with the goal of understanding Stable Diffusion.
We'll also work through a minimal, "Hello World" diffusion model to pull back the curtain and take some of the magic away -- it's just a few dozen lines of Python code to build from scratch.
We'll conclude by looking at what we still haven't figured out about how and why these approaches actually work.
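For a taste of what that "Hello World" model looks like, here's a minimal sketch of DDPM-style training and sampling on a toy 1-D dataset. Everything in it (the two-Gaussian dataset, the tiny MLP, the noise schedule, and the step counts) is an illustrative assumption, not code from the talk:

```python
# Toy "Hello World" diffusion model: learn to generate points from a
# 1-D mixture of two Gaussians using a tiny MLP noise-predictor (DDPM-style).
import torch
import torch.nn as nn

torch.manual_seed(0)

T = 100                                    # number of diffusion steps
betas = torch.linspace(1e-4, 0.02, T)      # noise schedule
alphas = 1.0 - betas
alpha_bar = torch.cumprod(alphas, dim=0)   # cumulative signal-retention factor

def sample_data(n):
    # Clean data: points clustered around -2 and +2.
    centers = 4.0 * (torch.rand(n) < 0.5).float() - 2.0
    return (centers + 0.1 * torch.randn(n)).unsqueeze(1)

# Tiny MLP that predicts the noise added at step t, given x_t and t.
model = nn.Sequential(nn.Linear(2, 64), nn.ReLU(),
                      nn.Linear(64, 64), nn.ReLU(),
                      nn.Linear(64, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Training: noise clean data to a random step t, then ask the network
# to recover that noise (the standard DDPM objective).
for step in range(2000):
    x0 = sample_data(256)
    t = torch.randint(0, T, (256,))
    noise = torch.randn_like(x0)
    a = alpha_bar[t].unsqueeze(1)
    xt = a.sqrt() * x0 + (1 - a).sqrt() * noise
    t_in = (t.float() / T).unsqueeze(1)     # crude timestep embedding
    pred = model(torch.cat([xt, t_in], dim=1))
    loss = ((pred - noise) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Sampling: start from pure noise and denoise step by step.
x = torch.randn(1000, 1)
with torch.no_grad():
    for t in reversed(range(T)):
        t_in = torch.full((1000, 1), t / T)
        eps = model(torch.cat([x, t_in], dim=1))
        x = (x - betas[t] / (1 - alpha_bar[t]).sqrt() * eps) / alphas[t].sqrt()
        if t > 0:
            x = x + betas[t].sqrt() * torch.randn_like(x)

# Samples should now cluster near -2 and +2 (mean ~0, std ~2).
print(x.mean().item(), x.std().item())
```

The real Stable Diffusion swaps the 1-D points for image latents, the MLP for a U-Net, and adds text conditioning, but the train-on-noise / denoise-to-sample loop above is the same core idea we'll build up to in the talk.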
Indoors and in person for the first time in more than 2 years! We're looking forward to seeing all your smiling faces. We're being graciously hosted by Tech Academy in their downtown offices. We'll get together at 5:30 and talks will begin about 6:15.
Please RSVP YES if you're coming, and not ... if you're not ... so we can manage physical space and drink requirements!
Thanks to our sponsors:
numfocus.org
