
Running Large Language Models locally 2/3

Hosted by Manon and 3 others

Details

Alongside tinkering in the Makerspace of the OBA, you can also join a workshop series on running LLMs locally with the Ollama app, which we're organizing to celebrate the Appril Festival.

Being able to run a Large Language Model locally has many advantages: besides not paying for a pro plan or API costs, it also means not sharing your chat data. Thanks to recent developments such as quantization, models like Mixtral 8x7B can now run on your laptop! There are also many tools that support you in running, creating and sharing LLMs locally from the command line, such as the open-source app Ollama.
In this series of workshops we want to help you set up Ollama and run your local LLMs. Ollama supports a range of models, including Mistral, Llama 2 and Phi. Every workshop consists of an introduction and offers challenges at different levels to help you get started and broaden your knowledge, making the workshop interesting for both beginner and intermediate participants. The idea is that participants also help and learn from each other. The evenings run from 19:00 to 21:30.

For beginners:
We assume you know how to work with the command line on your laptop. Please install Ollama beforehand. You can experiment locally with models and prompting.
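As a starting point, the first steps usually look something like the sketch below. It assumes the `ollama` CLI from the Ollama app is installed and on your PATH; the Mistral model name is just one example of a model you could pull.

```shell
# Minimal sketch: first steps with the Ollama CLI (assumes Ollama is installed)
if command -v ollama >/dev/null 2>&1; then
  ollama pull mistral                                  # download the model once
  ollama run mistral "Explain LLMs in one sentence."   # chat from the command line
else
  echo "ollama not found - install it from https://ollama.com first"
fi
```

`ollama run` also works without a prompt argument, dropping you into an interactive chat session.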
For Intermediate:
We assume you're familiar with GitHub and you have basic knowledge of Python and Jupyter. An example of a challenge is developing a web interface (also part of the second workshop).
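For the web-interface challenge, a useful building block is calling Ollama's local REST API from Python. The sketch below is a minimal, hedged example: it assumes Ollama is running on its default port (11434) and that the "mistral" model has been pulled; the function names are our own.

```python
# Minimal sketch: calling Ollama's local REST API from Python (stdlib only).
# Assumes the Ollama app is running at http://localhost:11434 with "mistral" pulled.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt, model="mistral"):
    """Build the JSON payload for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(prompt, model="mistral"):
    """Send a prompt to the local Ollama server and return the model's reply."""
    data = json.dumps(build_request(prompt, model)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    try:
        print(ask("Why is the sky blue? Answer in one sentence."))
    except OSError:
        print("Could not reach Ollama - is the app running?")
```

From here, wrapping `ask()` in a small web framework route gives you the web interface the challenge asks for.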
More advanced challenges (Python experience required): develop a personalised assistant, run Ollama on a Raspberry Pi, or use a webcam to take photos and have the LLaVA model describe the images.
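For the image-description challenge, Ollama's API accepts base64-encoded images alongside the prompt. The sketch below is a starting point under the assumption that Ollama is running locally with the "llava" model pulled and that a photo (e.g. one captured from the webcam) already exists on disk; the helper names are our own.

```python
# Hedged sketch: asking the LLaVA model to describe an image via Ollama's API.
# Assumes Ollama runs at http://localhost:11434 with the "llava" model pulled.
import base64
import json
import urllib.request

def encode_image(path):
    """Read an image file and return it base64-encoded, as Ollama expects."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("ascii")

def describe_image(path, model="llava"):
    """Send an image to the local Ollama server and return its description."""
    payload = {
        "model": model,
        "prompt": "Describe this image in one sentence.",
        "images": [encode_image(path)],
        "stream": False,
    }
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Capturing the photo itself can be done with any webcam tool or library you prefer; the API only needs the resulting image file.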

Workshop 2/3 (May 15th): making the most of Ollama on a variety of devices
Beginners: we'll help you build on the knowledge you acquired and the interests you shared in the first workshop.

  • Using and modifying the Python code provided with the model to adapt it to your specific use case.
  • New users can start with basics.
  • Show & tell

Workshop 3/3 (June 19th): customize your LLM with your data
For example: working with a predefined database of questions and answers to be used by the model (provided you create such a database beforehand). More advanced participants can also try to fine-tune the model locally on their own way of communicating, for example by training it on their emails.
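One common way to customize a model in Ollama is a Modelfile, which layers a system prompt and parameters on top of a base model. The fragment below is a hypothetical example (the assistant name and prompt text are ours, not part of the workshop material):

```
# Hypothetical Modelfile: a Mistral-based assistant primed for your own data
FROM mistral
PARAMETER temperature 0.7
SYSTEM """
You are a helpful assistant for the Sensemakers workshop.
Answer using the question/answer pairs provided in the conversation context.
"""
```

You would then build and run it with `ollama create my-assistant -f Modelfile` and `ollama run my-assistant`.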

  • New users can start with basics.
  • Show and tell

Follow us on Twitter: https://twitter.com/sensemakersa
or join us on Slack by providing us your email address.

Sensemakers Amsterdam
Amsterdam Public Library (OBA)
Oosterdokskade 143, 1011 Amsterdam
FREE
40 spots left