AI tech talks - CLIP and Stable Diffusion


Introduction
If you're an enthusiast of Machine Learning and Artificial Intelligence, join us for our upcoming Meetup!
We'll begin by exploring the CLIP model and the experiments we've conducted to enhance its performance. We'll also discuss how we trained a smaller CLIP model, which could have significant implications for future AI development.
In part two, we'll look at the intricacies of fine-tuning Stable Diffusion (SD) models, controlling their behavior with ControlNet, preserving identity during training, and generating data with SD models.
Our agenda includes an open mic session for anyone who wants to share their own insights or experiences. We'll end the night with a happy hour, giving you the chance to meet and mingle with like-minded people.
Who this is for
This event is for AI enthusiasts who work at the intersection of text and images, particularly those who focus on multi-modal models that can interpret both. If you're interested in exploring models that understand the relationship between text and images, this event is for you.
Topics/speakers
1. Experiments with CLIP model (Vinay Sisodia, ML Tech Lead, PicCollage)
2. Making Good Use of Stable Diffusion (Joey Yang, Sr. Engineer, Perfect Corp.)