Prompt-Engineering for Open-Source LLMs (Online)
Details
Turns out prompt-engineering is different for open-source LLMs! In fact, your prompts need to be re-engineered whenever you switch between any two LLMs, even when OpenAI changes model versions behind the scenes, which is why people are confused when their prompts suddenly stop working. Transparency into the entire prompt is critical to squeezing performance out of the model. Most frameworks struggle here: they abstract everything away or obscure the prompt to make it look like they're managing something clever behind the scenes.
But prompt-engineering is not software engineering, so the workflow for succeeding is entirely different. Finally, RAG, a form of prompt-engineering, is an easy way to boost performance using search technology. In fact, you only need about 80 lines of code to implement the whole thing and get 80%+ of the value from it (link to open-source repo). You'll learn how to run RAG at scale, across millions of documents.
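To make the idea concrete, here is a minimal sketch of the RAG pattern described above: retrieve the most relevant documents with a simple lexical score, then paste them verbatim into the prompt. This is an illustration only, not the code from the linked repo; a production system would swap the toy scorer for an embedding index to scale to millions of documents.

```python
import re
from collections import Counter

def tokenize(text):
    # Lowercase and strip punctuation so "RAG?" matches "RAG".
    return re.findall(r"[a-z0-9]+", text.lower())

def score(query, doc):
    # Toy relevance score: count overlapping terms.
    q, d = Counter(tokenize(query)), Counter(tokenize(doc))
    return sum(min(q[t], d[t]) for t in q)

def retrieve(query, docs, k=2):
    # Return the top-k documents by overlap score.
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query, docs, k=2):
    # The retrieved context appears verbatim in the prompt --
    # full transparency, nothing hidden behind a framework.
    context = "\n".join(f"- {d}" for d in retrieve(query, docs, k))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Llama 2 is an open-source LLM released by Meta.",
    "RAG combines retrieval with prompt construction.",
    "Eventbrite hosts online events.",
]
print(build_prompt("What is RAG?", docs))
```

The assembled prompt is a plain string you can send to any LLM, open-source or hosted, which is exactly the transparency the event argues for.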
Sign up for the online event here:
https://www.eventbrite.com/e/prompt-engineering-for-open-source-llms-tickets-791669814727
