LLMs for Disease Research - Fine-tuning Llama 3.3 70b


Details
Interested in learning how to fine-tune a Llama model on research documents so that it can answer questions about various diseases (e.g. cancer) without RAG? This meetup is an informal kickoff to recruit volunteers to build a fine-tuning pipeline for Llama 3.3 70b (and possibly Llama 3.1 8b) using cancer research papers. As part of the project, we'll compare a RAG solution against the fine-tuned model. If it works well, the model will be deployed to Azure or AWS, and we'll find researchers to kick the tires and evaluate how 'good' it is (human eval).
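To give a flavor of one early step in such a pipeline, here's a minimal sketch of preparing training data: turning Q&A pairs distilled from papers into instruction-style records in the JSON Lines format most fine-tuning tools accept. The field names, questions, and paper title below are all hypothetical placeholders; check the exact schema your trainer expects.

```python
import json

def to_training_example(question, answer, source_title):
    # One instruction-style record per Q&A pair. The "messages" chat
    # format shown here is common, but trainers vary -- verify the schema.
    return {
        "messages": [
            {"role": "user", "content": question},
            {"role": "assistant", "content": f"{answer}\n\n(Source: {source_title})"},
        ]
    }

# Hypothetical example distilled from a cancer research paper.
example = to_training_example(
    "What pathway does drug X target?",
    "Drug X is described as targeting the EGFR signaling pathway.",
    "Placeholder et al., 2024",
)

# Fine-tuning datasets are usually shipped as JSON Lines: one record per line.
line = json.dumps(example)
```

In practice you'd write thousands of such lines to a `.jsonl` file and hand it to your fine-tuning framework of choice.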
Come if you are even vaguely familiar with LLMs. We'll make sure to give newcomers to generative AI some starter projects to learn tools like Jupyter notebooks, LangChain, etc. Bring your MacBook Pro if you have one, but any laptop will do. If you don't have a laptop, come anyway.
For this project to succeed, we'll need a couple of people willing to dedicate some time; it may take a couple of months to get a usable model ready for production.
So what do you think? Can a fine-tuned model perform as well as a RAG solution? Better? Are you curious to find out and to learn how to fine-tune a Llama model? We'd love to have you.
