
What we're about
About TorontoAI
TorontoAI is a vibrant, inclusive community of engineers, builders, founders, and curious minds passionate about making AI infrastructure more accessible, human-centered, and scalable.
We host bi-weekly in-person socials, tech meetups, and hands-on webinars to connect people across disciplines — from DevOps to Data Science, from students to senior architects. Whether you're deploying LLMs in production or just exploring what Databricks does, you're welcome here.
🤝 We’re Building More Than a Meetup
In a world dominated by virtual everything, we believe in real, human-to-human connection.
TorontoAI is a space to:
- Share ideas over coffee
- Spark collaborations face-to-face
- Meet people who understand your stack and your journey
- Build your network beyond LinkedIn likes
💬 What We Talk About:
- Scalable AI & LLM infrastructure (Kubernetes, GPUs, vLLM, Ollama, LangChain)
- Databricks, Snowflake, Fivetran, dbt — building the modern data stack
- MLOps, LLMOps, DevOps — the operational glue of AI systems
- Real-world engineering stories, founder spotlights, and tool breakdowns
🌈 Who We Welcome:
- DevOps, SREs & Platform Engineers moving into data/AI
- Data Engineers, Analysts & ML practitioners
- Founders, freelancers, and technologists in transition
- Students and early-career professionals seeking real-world exposure
We’re committed to creating a welcoming, diverse, and equity-focused space where all voices matter — no gatekeeping, no rockstars, just good humans building cool stuff.
📍 Based in Toronto, open to the world
📅 Join an event — and be part of something human, helpful, and hands-on.
Upcoming events (4+)
Fine-Tune AI Model on H100 with Hugging Face
### 🔧 Webinar: Fine-Tuning Google Gemma with QLoRA on H100 GPU (Linux Setup + Hugging Face)
🚀 What You'll Learn:
- How to set up a Linux VM with an NVIDIA H100 GPU (Denvr AI Cloud) for ML workloads
- Installing and configuring NVIDIA drivers and CUDA correctly (no more version mismatch headaches!)
- Fine-tuning Google Gemma using QLoRA and Hugging Face Transformers
- Using real-world Text-to-SQL datasets to train LLMs efficiently
- How to save, merge, and test your fine-tuned models
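As a taste of the QLoRA setup covered above, here is a minimal configuration sketch using Hugging Face `peft` and `bitsandbytes`. The hyperparameters and target-module names are illustrative assumptions (common choices for Gemma-style models), not the webinar's exact values:

```python
# Illustrative QLoRA configuration sketch -- values are common defaults,
# not the webinar's exact settings. Requires peft, bitsandbytes, transformers.
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit quantization: the "Q" in QLoRA.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Low-rank adapters: the "LoRA" part. Target modules follow common
# practice for attention projections in Gemma-style models (assumption).
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# These configs are then passed to AutoModelForCausalLM.from_pretrained
# (quantization_config=bnb_config) and to trl's SFTTrainer for training.
```

In practice the base model (e.g. `google/gemma-2b`) is gated on Hugging Face, so the license must be accepted before downloading.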
👨‍💻 Live Demo Includes:
- End-to-end GPU environment setup
- Installing PyTorch, TRL, PEFT, and related libraries
- Downloading and preparing Hugging Face datasets
- Launching Jupyter Notebook to train and run inference
- Model merging and deployment tips
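The dataset-preparation step above largely boils down to mapping each record into a single prompt/completion string for supervised fine-tuning. A minimal sketch, where the field names (`context`, `question`, `answer`) and prompt layout are assumptions about the dataset schema rather than a fixed standard:

```python
# Sketch of Text-to-SQL prompt formatting for supervised fine-tuning.
# Field names and prompt layout are assumptions, not a fixed standard.

def format_example(record: dict) -> str:
    """Turn one dataset record into a single training string."""
    return (
        "### Task: translate the question into SQL.\n"
        f"### Schema:\n{record['context']}\n"
        f"### Question:\n{record['question']}\n"
        f"### SQL:\n{record['answer']}"
    )

sample = {
    "context": "CREATE TABLE users (id INT, name TEXT)",
    "question": "How many users are there?",
    "answer": "SELECT COUNT(*) FROM users",
}
prompt = format_example(sample)
```

With Hugging Face `datasets`, a function like this is typically applied with `dataset.map(...)` before being handed to the trainer.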
🎯 Key Takeaways:
- Master CUDA + PyTorch compatibility for training
- Optimize for low memory training with QLoRA
- Easily transition from fine-tuning to inference-ready models
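The driver/CUDA mismatch headaches mentioned above usually reduce to one rule: the installed NVIDIA driver must meet the minimum version the CUDA toolkit requires. A small stdlib sketch of that check; the minimum-driver table lists commonly cited Linux values and should be confirmed against NVIDIA's CUDA release notes:

```python
# Sketch of a driver-vs-CUDA compatibility check. The minimum-driver
# table is illustrative (commonly cited Linux values) -- confirm against
# NVIDIA's CUDA release notes for your exact toolkit version.

# CUDA toolkit version -> minimum Linux driver version (illustrative)
MIN_DRIVER = {
    "11.8": (450, 80, 2),
    "12.1": (525, 60, 13),
}

def parse_driver(version: str) -> tuple:
    """'535.104.05' -> (535, 104, 5), e.g. from `nvidia-smi` output."""
    return tuple(int(part) for part in version.split("."))

def driver_supports(cuda_version: str, driver_version: str) -> bool:
    """True if the installed driver meets the toolkit's minimum."""
    return parse_driver(driver_version) >= MIN_DRIVER[cuda_version]
```

For example, driver 535.104.05 satisfies the CUDA 12.1 minimum, while a 470-series driver does not.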
📍 Who Should Attend:
- ML Engineers, Data Scientists, MLOps Practitioners
- Anyone training open LLMs (Gemma, Mistral, LLaMA, etc.)
- Developers struggling with driver/CUDA issues on GPUs
We will be using Denvr AI Compute for this webinar: https://www.denvrdata.com/
Past events (247)
Group links
Related topics
- New career
- Software development
- Newcomer
- Software engineering
- Networking and job searching
- Professional networking
- Recruiting
- Data science with Python
- Machine learning with Python
- Startups
- Tech startups
- Internet startups
- Launching an e-commerce business
- Startup incubators
- Incubator