Self-Hosting Models Is for Nerds: A Practical Guide to Local AI Dev


A developer’s take on Ollama, Llama.cpp, and building smart with your own gear
You don’t need a GPU cluster or enterprise credits to build powerful AI systems. In this talk, we’ll walk through how modern developers can create fast, cost-effective local environments using tools like Ollama, Llama.cpp, and LM Studio. From laptops to homelabs, we’ll explore how to self-host models, simulate production-like APIs, and prototype real-world use cases, all without relying on expensive cloud setups. Whether you’re debugging embeddings or building an AI assistant, this session will show you how to take control of your stack and scale up on your own terms.
Presented by Miriah Peterson
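As a taste of the kind of local setup the abstract describes, here is a minimal sketch of calling a locally running Ollama server over its HTTP API using only the Python standard library. The model name (`llama3.2`) and the default port (`11434`) are assumptions about a typical install, not details from the talk.

```python
import json
import urllib.request

# Ollama's default local endpoint for one-shot text generation
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a JSON POST request for Ollama's /api/generate endpoint."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # ask for a single JSON response instead of a stream
    }).encode()
    return urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )

def generate(model: str, prompt: str) -> str:
    """Send the prompt to the local Ollama server and return the generated text."""
    with urllib.request.urlopen(build_request(model, prompt)) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # Requires `ollama serve` running locally with the model already pulled.
    print(generate("llama3.2", "Why self-host language models?"))
```

Because the server is local, the same code works offline, and swapping models is just a change to the `model` string rather than a new cloud contract.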

Every 2nd Thursday of the month until September 9, 2025