Details

Turns out prompt engineering is different for open-source LLMs! In fact, your prompts need to be re-engineered whenever you switch between any LLMs, even when OpenAI changes model versions behind the scenes, which is why people are confused when their prompts suddenly stop working. Transparency into the entire prompt is critical to squeezing performance out of a model. Most frameworks struggle with this, because they try to abstract everything away or obscure the prompt to make it seem like they're managing something behind the scenes.

But prompt engineering is not software engineering, so the workflow for succeeding is entirely different. Finally, RAG, a form of prompt engineering, is an easy way to boost performance using search technology. In fact, you only need about 80 lines of code to implement the whole thing and get 80%+ of the value from it (link to open-source repo). You'll learn how to run RAG at scale, across millions of documents. 🥧🤖🚀🈺
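To make the RAG idea concrete, here is a minimal sketch of the pattern: retrieve the documents most relevant to a query, then paste them into a transparent prompt. This is an illustration, not the code from the linked repo; the word-overlap scoring and all document text below are assumptions standing in for a real retriever (embeddings, BM25, or a search engine).

```python
# Minimal RAG sketch: retrieval by naive word overlap, then prompt assembly.
# A production system would swap retrieve() for embeddings or BM25 search.

def retrieve(query, documents, k=2):
    """Return the top-k documents ranked by word overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, documents, k=2):
    """Assemble a fully transparent prompt: retrieved context, then the question."""
    context = "\n\n".join(retrieve(query, documents, k))
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}"
    )

# Illustrative corpus and query.
docs = [
    "Llama 2 is an open-source large language model released by Meta.",
    "Prompt templates differ between chat and base models.",
    "Eventbrite is a ticketing platform.",
]
print(build_prompt("Which open-source language model did Meta release?", docs))
```

Because the final string is built in plain sight, you can inspect exactly what the model receives, which is the transparency point made above.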

Sign up for the online event here ↓
↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓↓
https://www.eventbrite.com/e/prompt-engineering-for-open-source-llms-tickets-791669814727

