DAM: Local LLM Prototyping

Details
Registration is closed.
The workshop will run twice: at 3:00 PM and at 4:30 PM.
Because of the room's capacity, the number of spots is limited.
Join us for an exciting hands-on workshop where you'll learn how to set up, deploy, and use large language models (LLMs) locally on your own device. There is no need to rely on cloud services: this event will show you how to harness the power of LLMs directly on your computer for a variety of use cases.
What You'll Learn:
- Step-by-step guidance on deploying LLMs with Rancher Desktop, Ollama, and Python 3.12 through Miniconda.
- How to use models for specific applications, such as:
  - A writing assistant, e.g. proofreading or improving your email content
  - Code assistance and debugging, integrating LLMs with the VS Code IDE to boost productivity in software development
  - Building custom chatbots for customer service or personal assistants
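As a small taste of the chatbot use case, here is a minimal sketch of talking to a locally running model through Ollama's HTTP API (this assumes Ollama is serving on its default port, 11434, and that `llama3` has been pulled; the `ask` helper name is ours, not part of Ollama):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt: str, model: str = "llama3") -> dict:
    """Build the JSON payload for Ollama's /api/generate endpoint."""
    # stream=False asks Ollama for a single JSON reply instead of a stream.
    return {"model": model, "prompt": prompt, "stream": False}

def ask(prompt: str, model: str = "llama3") -> str:
    """Send a prompt to the local model and return its text reply."""
    data = json.dumps(build_request(prompt, model)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    try:
        print(ask("Proofread this sentence: 'Their going to the meetup tomorow.'"))
    except OSError:
        # Raised when the Ollama server is not reachable on localhost:11434.
        print("Ollama is not running - start it and pull llama3 first.")
```

The same pattern works for the writing-assistant and code-assistance examples; only the prompt and the model name change.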
Prerequisites:
All attendees must have access to a device with the following minimum requirements:
- 16 GB of RAM
- An NVIDIA GPU or an Apple M-series chip
Additionally, please have the following tools installed prior to the event:
- Ollama LLM engine: https://ollama.com
  - After a successful installation, run the following commands:
    - `ollama pull llama3`
    - `ollama pull starcoder2:3b`
- MSTY UI: https://msty.app/
- VS Code IDE: https://code.visualstudio.com/
- Continue VSCode extension for code completion: https://marketplace.visualstudio.com/items?itemName=Continue.continue
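To check your setup before the workshop, you can list the models Ollama has pulled via its local `/api/tags` endpoint (default port 11434). A small sketch; the `missing_models` helper is ours, not part of Ollama:

```python
import json
import urllib.request

TAGS_URL = "http://localhost:11434/api/tags"  # lists locally pulled models
REQUIRED = ["llama3", "starcoder2:3b"]

def missing_models(tags: dict, required: list[str]) -> list[str]:
    """Return the required model names absent from an /api/tags response.

    Ollama reports names with a tag suffix (e.g. "llama3:latest"), so a
    requirement matches either the exact name or the name plus any tag.
    """
    installed = [m["name"] for m in tags.get("models", [])]
    def have(req: str) -> bool:
        return any(n == req or n.startswith(req + ":") for n in installed)
    return [r for r in required if not have(r)]

if __name__ == "__main__":
    try:
        with urllib.request.urlopen(TAGS_URL) as resp:
            tags = json.loads(resp.read())
        missing = missing_models(tags, REQUIRED)
        print("All set!" if not missing else f"Still missing: {missing}")
    except OSError:
        # Raised when the Ollama server is not reachable on localhost:11434.
        print("Ollama is not running - install and start it first.")
```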
Speaker: Marián Ferenc (Application Architect, CIC IBM Slovakia)
The workshop is part of the all-afternoon meetup "Sunset or Sunrise of GenAI?":
https://www.meetup.com/machine-learning-meetup-kosice/events/303918371
