
Details

Voice AI doesn’t have to live in the cloud.

In this talk, I’ll walk through how I built a fully offline, end-to-end voice application, running speech-to-text, LLM reasoning, and text-to-speech entirely on the edge, without relying on external APIs or internet connectivity.

We’ll explore how open-source LLMs (Mistral via Llama.cpp) can be embedded locally, how Python-based STT/TTS pipelines integrate cleanly into a .NET architecture, and how MAUI enables a modern cross-platform UI for voice-first experiences.
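The pipeline described above can be sketched as three stages wired together in one offline loop. This is a minimal illustrative skeleton only, with the real components stubbed out: in practice the STT stage would call a local speech-to-text model, the reasoning stage would invoke Mistral through Llama.cpp, and the TTS stage would render audio with a local voice model; none of those bindings are shown here.

```python
# Illustrative sketch of an offline voice-assistant turn.
# All three stages are stubs standing in for real local models
# (STT, an LLM loaded via llama.cpp, and TTS) -- assumptions, not
# the actual implementation from the talk.

def transcribe(audio: bytes) -> str:
    """STT stage: would run a local speech-to-text model on raw audio."""
    return "what time is it"  # stubbed transcript

def reason(prompt: str) -> str:
    """LLM stage: would feed the transcript to a locally loaded model
    (e.g. Mistral via llama.cpp) and return its completion."""
    return f"You asked: {prompt}"  # stubbed completion

def synthesize(text: str) -> bytes:
    """TTS stage: would render the reply to audio with a local voice model."""
    return text.encode("utf-8")  # stubbed waveform

def voice_turn(audio: bytes) -> bytes:
    """One end-to-end turn: audio in, audio out, no network calls."""
    return synthesize(reason(transcribe(audio)))
```

The point of the shape is that each stage is a plain in-process function call, so the whole turn runs without any network dependency; swapping a stub for a real model changes a function body, not the architecture.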

This session is practical and architecture-driven: it covers real tradeoffs around latency, memory, model size, UX, and privacy, and includes a live demo of an offline voice assistant running locally.

If you’re building mobile apps, edge AI systems, or privacy-first AI products, this talk is for you.

