👁️ The AI Watchman: Vision Models & Content Moderation
👁️ The Problem
Remember our Stable Diffusion session? We had a blast generating AI art, but here's the thing - sometimes these models produce... unexpected results. Extra fingers, weird artifacts, or content that makes you go "whoa, that's not what I asked for."
What if AI could watch AI? Enter local vision models.
👁️ What We're Doing
This Thursday we're taking a whirlwind tour of self-hosted content moderation. We'll take images fresh out of an AI image generator and run them through a local vision model that can actually see what's in each picture and decide whether it passes muster.
The whole pipeline runs on your own machine. No cloud. No Big Tech looking at your cat portraits.
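To give a flavor of what "runs on your own machine" can mean, here's a minimal sketch of how a request to a locally hosted vision model might be shaped. This assumes an Ollama-style server running a vision model such as llava on localhost; the function name, prompt wording, and verdict format are illustrative, not what we'll necessarily use on the night:

```python
import base64
import json

# Assumption: a local Ollama server exposes POST /api/generate and accepts
# base64-encoded images. We only build the payload here; actually sending it
# requires a running server.
def build_moderation_request(image_bytes: bytes, model: str = "llava") -> dict:
    """Build a generate-style request asking the model for a clear verdict."""
    return {
        "model": model,
        "prompt": "Describe this image. End with 'VERDICT: PASS' or 'VERDICT: FAIL'.",
        "images": [base64.b64encode(image_bytes).decode("ascii")],
        "stream": False,  # one complete reply is easier to parse than a stream
    }

payload = build_moderation_request(b"\x89PNG fake image bytes")
print(json.dumps(payload, indent=2)[:120])
```

The key idea is the prompt: by telling the model exactly how to phrase its answer, we make the downstream parsing step far less painful.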
👁️ The Tour
- Vision Models 101 - What they are and how they "see" images
- Teaching AI What to Look For - Crafting prompts that get consistent answers
- Parsing AI Output - Getting structured data out of fuzzy responses (yes, regex makes an appearance)
- Batch Processing - Scanning hundreds of images automatically
- The Reject Pile - Quarantining the questionable stuff with proper logging
👁️ Who's This For?
Anyone curious about running vision AI locally. If you were at the Stable Diffusion event, this is the natural next step. If you're new, no worries - we're keeping it high-level and concept-focused.
Bring your questions and your curiosity!
