
Retrieval-Augmented Generation (RAG) lets us interact with LLMs while providing relevant context. This is useful when asking the LLM questions about a specific input document.
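To make the idea concrete, here is a minimal sketch of the RAG flow: pick the document chunk most relevant to the question, then prepend it to the prompt. The word-overlap retrieval below is a toy stand-in (real setups use embeddings and a vector store), and the sample chunks are invented for illustration.

```python
# Toy RAG sketch: retrieve the most relevant chunk, prepend it as context.
# Retrieval here is naive word overlap; production code would use embeddings.

def retrieve(question, chunks):
    """Return the chunk sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(chunks, key=lambda c: len(q_words & set(c.lower().split())))

def build_prompt(question, chunks):
    """Prepend the retrieved chunk to the question as context."""
    context = retrieve(question, chunks)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

# Example document chunks (made up for this sketch).
chunks = [
    "Ollama runs LLMs locally and exposes an HTTP API on port 11434.",
    "RAG prepends retrieved document text to the user's question.",
]
prompt = build_prompt("What port does Ollama listen on?", chunks)
```

The resulting prompt would then be sent to the local LLM, which is where Ollama comes in.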

I'll show you how to get this up and running on your local machine, even with minimal RAM and CPU. Bring your laptop and we can debug any issues you run into.

Looking forward to seeing you on Saturday! During the meetup we'll talk about:

  • Why RAG?
  • How it works
  • Downloading an LLM model
  • Starting the LLM server via Ollama
  • Creating the RAG code
  • Verifying it works by uploading a document and asking a document-specific question

This is a virtual event, so we'll meet via Discord. The meeting link will be sent to attendees one day before the event.
