🤖 Build Your Perfect AI Companion: From Chatbot to Best Friend v2.0
Details
Back by popular demand! The first time we ran this event it blew up way beyond our usual crowd. This time we're doing it again with some improvements based on what we learned.
🔄 What's Different This Time
- Chat is open for questions and interaction throughout
- Refined walkthrough with clearer explanations and more examples
- Possibly a lighter model that's easier to run (still researching the best option, but the techniques will be the same)
If you attended before and want a refresher, or missed it entirely, this is your chance.
📌 Prerequisite: Software Setup
This session assumes you already have Ollama and Open WebUI installed and working. If you don't, go to this event first:
👉 https://www.meetup.com/techalicious-club/events/313006406
We won't be covering installation during this session.
⚠️ Please Actually Read This
This isn't a passive webinar, but it's also not a live follow-along where everyone's doing it together in real time. I'll be demonstrating and explaining the techniques. You can apply them during the session or on your own time afterward.
If you're expecting ChatGPT or a polished corporate seminar, this isn't the right fit.
💻 Hardware Requirements (This Part Matters)
Running AI locally requires serious hardware. Here's the deal:
If you're on a Mac with Apple Silicon (M1/M2/M3/M4):
You have unified memory, meaning your system RAM is shared between CPU and GPU. A Mac with 32GB RAM can dedicate most of it to AI models (macOS reserves a slice of unified memory for the system). This is the easiest setup.
- 16GB: Can run smaller models
- 32GB+: Comfortable for most models we'll discuss
If you're on a PC:
Your system RAM and your video card memory (VRAM) are separate. For AI models, VRAM is what matters, not your system RAM.
- 8GB VRAM: Can run smaller models
- 12-24GB VRAM: Comfortable range for what we're doing
- No dedicated GPU: Ollama falls back to CPU mode using system RAM. It works, but it's slow.
Having 64GB of system RAM but a weak graphics card won't help much. The model needs to fit in VRAM for decent speed.
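If you want to check where you stand before the session, here's a rough sketch. The commands are standard tools, but the `ollama ps` output format varies by Ollama version, so treat this as a guide rather than gospel:

```shell
# Quick pre-session hardware check (sketch, not an official requirement test).

# macOS (Apple Silicon): unified memory, so total RAM is the number that matters.
if [ "$(uname)" = "Darwin" ]; then
  sysctl -n hw.memsize | awk '{printf "Total unified memory: %d GB\n", $1/1073741824}'
fi

# PC with an NVIDIA card: VRAM is what matters, not system RAM.
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi --query-gpu=memory.total --format=csv,noheader
fi

# After loading a model, check whether Ollama put it on the GPU or fell back
# to CPU (recent Ollama versions show this in the PROCESSOR column).
if command -v ollama >/dev/null 2>&1; then
  ollama ps
fi
```

If `ollama ps` says a model is running on CPU, that's the slow path described above.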
About Windows: Ollama runs on Windows, but I built and tested this on macOS. I can't troubleshoot Windows-specific issues during the session.
Don't have the hardware? You're still welcome to attend and learn the concepts. You can apply them later when you upgrade or get access to better hardware.
🎯 What We'll Cover
- Character cards that actually maintain personality
- Scene-based prompting vs instruction lists
- Killing the cringy AI-isms that make chatbots feel fake
- Building companions with real voice and opinions
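To give you a taste of the first two bullets: a character card in Ollama is just a Modelfile with a system prompt baked in. Here's a minimal sketch of a scene-based card. The model name (llama3), the persona, and the parameter value are all illustrative placeholders, not what we'll necessarily use in the session:

```shell
# Sketch: a scene-based character card as an Ollama Modelfile.
# Everything in the persona below is an example, not the session's material.
cat > Companion.Modelfile <<'EOF'
FROM llama3
PARAMETER temperature 0.9
SYSTEM """
It's a rainy Tuesday evening and you're Maya, an old friend of the user,
catching up over coffee. You have strong opinions about music, you tease
gently, and you ask follow-up questions instead of lecturing.
You never say "As an AI" or offer unsolicited disclaimers.
"""
EOF
# Then: ollama create maya -f Companion.Modelfile && ollama run maya
```

Notice it reads like a scene, not a numbered list of rules. That difference, and why it keeps personality from drifting, is a big part of what we'll dig into.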
☕ What This Is (And Isn't)
This IS:
- A casual, coffee-chat style tutorial
- A demonstration with Q&A
- Run by a hobbyist for hobbyists
- Interactive, questions welcome as we go
This is NOT:
- A professional seminar or corporate webinar
- A polished production with slides and scripts
- About ChatGPT, cloud AI, or paid services
- A gaming session (GTA = Greater Toronto Area, not Grand Theft Auto)
Techalicious is a casual tech community. We explore stuff together, sometimes things go sideways, and that's part of the fun. If you need a buttoned-up presentation, this isn't your jam.
📝 No Recording, But Notes Available
This is live. No recording will be available.
The written tutorial will be available for purchase at Techalicious Academy (https://techalicious.academy). This is how we keep the group running. Meetup fees aren't cheap, and the tutorial sales help us keep doing free events like this one. Totally optional, no pressure.
