# Streamlit Basics & LLM APIs


Generative AI is opening the door to an entirely new kind of software development—fast, iterative, and powered by natural language. If you’re wondering how to actually build and deploy simple AI apps, this hands-on session is for you.
In this 30-minute workshop, we’ll introduce Streamlit, a powerful and beginner-friendly framework for creating AI-powered web apps with just a few lines of Python. You’ll build two working chat applications—one that runs locally with Ollama, and one that connects to OpenAI’s API.
### 🛠️ What We’ll Build
- A local chat app powered by an LLM running through Ollama (a minimal sketch follows this list)
- A cloud-based chat app that connects to OpenAI’s API (sketched after the tools list below)
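To give a feel for how little code is involved, here is a minimal sketch of the local app. It assumes the `streamlit` and `ollama` Python packages are installed, the Ollama server is running, and a model such as `llama3` has already been pulled with `ollama pull llama3`; the model name and prompt text are placeholder choices, not the exact code we’ll write in the session.

```python
import ollama
import streamlit as st

st.title("Local chat with Ollama")

# Keep the running conversation in Streamlit's session state,
# since the script reruns from the top on every interaction.
if "messages" not in st.session_state:
    st.session_state.messages = []

# Replay the history so the full conversation stays on screen.
for msg in st.session_state.messages:
    with st.chat_message(msg["role"]):
        st.markdown(msg["content"])

# chat_input returns None until the user submits a message.
if prompt := st.chat_input("Ask the local model anything"):
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.markdown(prompt)

    # Send the whole history to the locally running Ollama server.
    response = ollama.chat(model="llama3", messages=st.session_state.messages)
    answer = response["message"]["content"]

    st.session_state.messages.append({"role": "assistant", "content": answer})
    with st.chat_message("assistant"):
        st.markdown(answer)
```

Save it as `app.py` and launch it with `streamlit run app.py`.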
### 🧰 Tools We’ll Use
- Streamlit – the fastest way to build interactive web apps in Python
- Ollama – for running local large language models on your machine
- OpenAI API – for accessing powerful foundation models in the cloud
- Python – no prior experience required; we’ll keep it light and clear
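The cloud version follows the same pattern; only the model call changes. This sketch assumes the `openai` Python package (v1 or later) and an `OPENAI_API_KEY` environment variable; the `gpt-4o-mini` model name is just an illustrative choice.

```python
import streamlit as st
from openai import OpenAI

st.title("Chat with the OpenAI API")

client = OpenAI()  # reads OPENAI_API_KEY from the environment

if "messages" not in st.session_state:
    st.session_state.messages = []

for msg in st.session_state.messages:
    with st.chat_message(msg["role"]):
        st.markdown(msg["content"])

if prompt := st.chat_input("Ask the hosted model anything"):
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.markdown(prompt)

    # Send the conversation so far to the chat completions endpoint.
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; any chat model works
        messages=st.session_state.messages,
    )
    answer = completion.choices[0].message.content

    st.session_state.messages.append({"role": "assistant", "content": answer})
    with st.chat_message("assistant"):
        st.markdown(answer)
```

Because both apps keep the conversation in `st.session_state` and send the full history on every turn, switching between a local and a hosted model comes down to changing a single function call.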
You’ll leave with two working prototypes, a better understanding of how LLM apps work under the hood, and the confidence to start building your own.

Before the session, we’ll send instructions to help you get set up so you can code along—not just watch.
