Run LLMs locally: build your own offline AI infrastructure
Details
Since not everybody could attend the last presentation on local LLMs, we will hold it again.
During the event, Jean will present how to run LLMs locally. Here is a description of the presentation.
Cloud AI works... until it doesn't.
It exposes your data, is inaccessible during outages, and locks you into ever-increasing costs.
A self-hosted, offline LLM, running on your own hardware and fine-tuned on your own data for your business needs, gives you control, privacy, and resilience that the cloud can't match.
This presentation shows you how to build your own "Local LLM Deployment": a fast, private, and affordable local AI stack that works anytime, anywhere, built entirely from open-source tools... with no dependency on third-party services.
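As a small taste of what "local" means in practice, here is a minimal sketch of querying a model served on your own machine. It assumes an Ollama-style server on its default port and a locally pulled model named "llama3"; the stack presented at the event may use different tools.

```python
import json
import urllib.request

# Query a locally running LLM server (here, Ollama's default endpoint at
# http://localhost:11434 -- an assumption, since the talk does not name a
# specific tool). No cloud account, no API key, no data leaving the machine.
payload = json.dumps({
    "model": "llama3",  # any model already pulled locally (assumed name)
    "prompt": "Summarize why offline LLMs improve privacy.",
    "stream": False,
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```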
The presentation will be held at the Grande Bibliothèque:
475 Boul. de Maisonneuve E, Montréal, QC H2L 5C4
Room 2.130.1
The goal of this informal event is to learn about AI and discuss it together with the host.
Join us if you're interested!
