LLM Guardrails


Details
Join us for an LLM Guardrails
“A car for a dollar” – believe it or not, someone managed to convince a customer service chatbot to sell a car at this price through clever prompting. Such incidents are part of a growing wave of attacks as LLMs become integrated into diverse applications. Want to learn how to prevent these kinds of exploits in your own application?
Join us for an interactive workshop where we’ll dive into the world of LLM Guardrails. Discover the mechanisms that ensure applications produce reliable, robust, safe, and ethical outputs, and understand their crucial role in LLMs. We’ll focus on implementing guardrails for essential safety measures and prompt injections. But it’s not just theory – you’ll get hands-on experience implementing your own guardrails using tools like NVIDIA’s NeMo Guardrails. By the end of the workshop, you’ll be a guardrail guru, ready to make your LLM applications safer, more accurate, and robust.
This event is perfect for engineers, data scientists, and AI enthusiasts eager to up their Gen AI game. Don’t miss this opportunity to expand your knowledge and network with like-minded professionals!
With this initiative, ML6 loves to support diversity in technology by collaborating with PyLadies Amsterdam. We hope to see you there!
Agenda
18:00 - Doors open, food and drinks
18:30 - LLM Guardrails workshop
20:00 - Networking
21:00 - Wrap up
GitHubRepo
https://github.com/pyladiesams/llm-guardrails-jul2024
Stream
https://www.youtube.com/live/1ajltQcEEzA
Any questions --> amsterdam@pyladies.com

LLM Guardrails