Self-Hosting AI LLMs: Deploying Ollama on Azure Container Apps

Hosted By
Dan C.
Details

Curious about running Large Language Models on your own infrastructure? Want to break free from API rate limits and usage costs? Join us for an engaging session where we dive into Ollama, the sleek, developer-friendly way to run LLMs like LLaMA 3, Mistral, and more—right on Azure!

In this talk, we’ll walk through how to containerize and self-host Ollama using Azure Container Apps. You’ll learn how to:

  • Deploy container workloads with ease
  • Integrate Ollama into your own apps for private, cost-efficient inference
  • Secure and scale your AI services in a cloud-native way

We'll also run a live demo of an end-to-end LLM deployment.
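As a preview of what the demo will cover, a minimal deployment might look like the sketch below. This uses the Azure CLI; the resource names (`ollama-rg`, `ollama-env`, `ollama-app`), region, and CPU/memory sizing are illustrative placeholders, not recommendations:

```shell
# Create a resource group and a Container Apps environment (names are placeholders)
az group create --name ollama-rg --location eastus
az containerapp env create --name ollama-env \
  --resource-group ollama-rg --location eastus

# Deploy the ollama/ollama image from Docker Hub,
# exposing Ollama's default port 11434
az containerapp create \
  --name ollama-app \
  --resource-group ollama-rg \
  --environment ollama-env \
  --image ollama/ollama \
  --target-port 11434 \
  --ingress external \
  --cpu 4.0 --memory 8.0Gi \
  --min-replicas 1

# Pull a model inside the running container (llama3 used as an example)
az containerapp exec --name ollama-app --resource-group ollama-rg \
  --command "ollama pull llama3"
```

For anything beyond experimentation you'd likely want internal-only ingress with authentication in front of the app rather than a publicly reachable endpoint; securing the service is one of the topics we'll dig into in the session.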

Whether you're building private copilots, intelligent chatbots, or just exploring the edge of open-source AI, this session will equip you with the tools to own your AI stack.

🎯 Who Should Attend:
Cloud engineers, AI enthusiasts, developers, architects, and anyone curious about running AI models on their own terms.

🛠️ Tech Stack:
Azure Container Apps, Ollama, Open Source LLMs (LLaMA, Mistral, etc.), Docker, Compute Workloads
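Once Ollama is running, integrating it into your own apps comes down to calling its HTTP API. A rough sketch using `curl` against the `/api/generate` endpoint (the hostname below is a placeholder for the FQDN Azure assigns to your container app):

```shell
# Replace the hostname with your container app's actual FQDN (placeholder below)
curl https://ollama-app.example.azurecontainerapps.io/api/generate \
  -d '{
    "model": "llama3",
    "prompt": "Explain Azure Container Apps in one sentence.",
    "stream": false
  }'
```

Because it's plain HTTP and JSON, the same call works from any language or framework you already use, which is what makes private, cost-efficient inference straightforward to wire into existing apps.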

Bring your curiosity, your beverage of choice, and your best Teams background. We’ll figure the rest out together.
📅 Date: September 16th, 2025
📍 Location: The Internet
🎟️ Admission: Free, like unsolicited opinions on Stack Overflow

Cloud NH