We’re showcasing projects from the Dallas AI Summer Program. Each team will walk through their problem, architecture, model/tooling choices, and lessons learned. Expect real talk on what worked, what didn’t, and what they’d change next.
Who should come?
Builders, tinkerers, devs, product folks, students—anyone curious about practical AI systems and agentic patterns.
1) Fitness Agent & Food App Agent
A data-driven personal trainer that adapts. Habit-building meets meal planning. This team will show an agent that nudges users toward their fitness goals, plus a companion agent that helps with food choices and tracking.
2) Luma – Learn Smarter from YouTube Videos
Turn passive watching into active learning. Luma gives AI-powered summaries, chat-with-video, smart notes/flashcards, and learning progress tracking—so tutorials actually stick. The team will dive into video ingestion, segmenting, semantic indexing, prompt design for note/flashcard generation, and guardrails to keep answers grounded.
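To make "chat-with-video" answers stay grounded, a common pattern is to split the transcript into timestamped segments, index them, and retrieve the most relevant segment for each question. The sketch below is a minimal, hypothetical illustration of that retrieval step using bag-of-words cosine similarity; Luma's actual pipeline (and its embedding model) may differ, and the transcript segments here are invented.

```python
from collections import Counter
from math import sqrt

def tokenize(text):
    # Naive tokenizer: lowercase words, punctuation stripped.
    return [w.lower().strip(".,?!") for w in text.split()]

def cosine(a, b):
    # Cosine similarity between two term-count vectors.
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

# Hypothetical transcript segments: (start time in seconds, text).
segments = [
    (0,   "Welcome, today we cover gradient descent basics"),
    (95,  "Learning rate controls the step size of each update"),
    (210, "Too large a learning rate makes training diverge"),
]

# Index each segment as a term-count vector.
index = [(start, Counter(tokenize(text))) for start, text in segments]

def retrieve(question, k=1):
    """Return start times of the k segments most similar to the question."""
    q = Counter(tokenize(question))
    ranked = sorted(index, key=lambda seg: cosine(q, seg[1]), reverse=True)
    return [start for start, _ in ranked[:k]]
```

Only the retrieved segment (with its timestamp) is then passed to the model as context, which is what keeps answers anchored to what the video actually said.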
3) VoxPilot – Private, Voice-Augmented Browsing (On-Device)
Browse by voice with natural commands—no scripts, no server data. VoxPilot runs on-device, blending speech-to-text, NLU, and safe action execution for accessibility, productivity, and privacy. The deep dive covers command parsing, action routing, DOM interaction, offline STT/NLU choices, and the security model.
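The command-parsing and action-routing stages can be pictured as mapping a transcribed utterance to an intent plus slots before any DOM action runs. The sketch below is a toy rule-based stand-in (the intent names and phrase patterns are invented); VoxPilot's on-device NLU would replace the regex table, but the fail-closed routing shape is the point.

```python
import re

# Hypothetical command grammar: intent name -> pattern over the utterance.
# Ordered so more specific commands are tried before the catch-all "click".
RULES = [
    ("scroll",   re.compile(r"scroll (?P<direction>up|down)")),
    ("open_tab", re.compile(r"open (a )?new tab")),
    ("click",    re.compile(r"click (on )?(?P<target>.+)")),
]

def parse_command(utterance):
    """Map a transcribed utterance to an (intent, slots) pair, or None."""
    text = utterance.lower().strip()
    for intent, pattern in RULES:
        m = pattern.search(text)
        if m:
            return intent, m.groupdict()
    return None  # Unrecognized: fail closed rather than guess an action.
```

Returning `None` for anything outside the grammar is one simple piece of the security story: the agent never executes a page action it could not parse with confidence.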