Coding LLM APIs — From Cloud to Local

Hosted By
Alfred E.

Details
In this workshop, we’ll write real code to interact with both proprietary LLM APIs (such as OpenAI’s and Anthropic’s) and open-source models running locally (for example, via Ollama).
This is your chance to move from conceptual understanding to building working prototypes using Python and REST APIs.
1. Coding a Proprietary LLM API (OpenAI / Anthropic)
- Set up API keys securely
- Write Python code to send a prompt and receive a response (see the sketch after this list)
- Handle parameters (temperature, max tokens, etc.)
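To make this concrete, here is a minimal sketch of the proprietary-API step using OpenAI's official Python SDK. The model name and parameter values below are placeholders rather than the workshop's exact choices, and the key is read from the OPENAI_API_KEY environment variable so it never appears in source code:

```python
import os
from openai import OpenAI

# Read the key from the environment instead of hard-coding it.
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat model you have access to works
    messages=[{"role": "user", "content": "Explain REST in one sentence."}],
    temperature=0.7,      # higher values give more varied output
    max_tokens=200,       # caps the length of the reply
)

print(response.choices[0].message.content)
```

The Anthropic SDK follows a very similar client-plus-create pattern, so porting this sketch between providers is mostly a matter of renaming.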
2. Coding a Local LLM API with Ollama
- Start an Ollama server and test locally
- Use Python (the requests library or similar) to send prompts (sketched after this list)
- Compare performance and output with proprietary APIs
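Here's a matching sketch for the local step, assuming Ollama is running on its default port (11434, started with `ollama serve`) and that the llama3 model has already been pulled with `ollama pull llama3`; both names are examples, not requirements:

```python
import requests

# Ollama listens on localhost:11434 by default.
# "stream": False returns the whole completion as a single JSON reply.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",  # assumes you've run: ollama pull llama3
        "prompt": "Explain REST in one sentence.",
        "stream": False,
    },
    timeout=120,
)
resp.raise_for_status()

print(resp.json()["response"])
```

Because this sends the same prompt as the cloud example, you can time both calls and compare the answers side by side.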
3. Bonus: Wrapping It in a Simple App
- Create a basic CLI or web interface (e.g., Streamlit or Flask); a Streamlit sketch follows this list
- Think about extending the code for your own projects
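As one possible shape for the bonus step, here's a tiny Streamlit wrapper around the Ollama call above; it assumes the same local server and placeholder model. Save it as app.py and launch it with `streamlit run app.py`:

```python
import requests
import streamlit as st

st.title("Local LLM Playground")

prompt = st.text_area("Prompt", "Explain REST in one sentence.")

if st.button("Send"):
    with st.spinner("Generating..."):
        # Same Ollama endpoint as the earlier sketch; llama3 is a placeholder.
        resp = requests.post(
            "http://localhost:11434/api/generate",
            json={"model": "llama3", "prompt": prompt, "stream": False},
            timeout=120,
        )
    resp.raise_for_status()
    st.write(resp.json()["response"])
```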

Online event
FREE