The guardrails of the AI galaxy + Quarkus in Action book baptism

Details
Speaker: Martin Štefanko (Red Hat, https://x.com/xstefank, https://www.linkedin.com/in/martin-stefanko/)
Abstract:
The use of AI in critical, sophisticated scenarios is desirable, but we all know it's essentially impossible to do reliably. Why? The non-deterministic nature of LLMs makes them prone to hallucinations and unreliable outputs. So, what can we do when our LLM responds with nonsense?
Let us introduce you to LLM guardrails. In this talk, we'll dive into the Quarkus LangChain4j integration, which lets you verify and/or modify both the requests and responses exchanged with your model. Through practical examples, we'll explore the options available for validating user-provided inputs, rewriting or retrying outputs, and even reprompting the model when needed.
Guardrails not only enhance the reliability of AI-driven applications but also allow us to build more trust in our AI systems, one response at a time. Come learn how we can ensure our LLMs stay on track, even in the most challenging scenarios.
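For a small taste of what the talk covers, here is a minimal sketch of an output guardrail with the quarkus-langchain4j extension. The class names (NonEmptyAnswerGuardrail, Assistant) are illustrative, and the exact guardrail API may differ between extension versions:

  // Hypothetical output guardrail: rejects blank answers and asks the
  // model to answer again with an extra instruction (reprompt).
  import dev.langchain4j.data.message.AiMessage;
  import io.quarkiverse.langchain4j.guardrails.OutputGuardrail;
  import io.quarkiverse.langchain4j.guardrails.OutputGuardrailResult;
  import jakarta.enterprise.context.ApplicationScoped;

  @ApplicationScoped
  public class NonEmptyAnswerGuardrail implements OutputGuardrail {

      @Override
      public OutputGuardrailResult validate(AiMessage responseFromLLM) {
          String text = responseFromLLM.text();
          if (text == null || text.isBlank()) {
              // Failure reason plus an additional instruction sent back to the model.
              return reprompt("Blank answer", "Please provide a non-empty answer.");
          }
          return success();
      }
  }

  // Attaching the guardrail to an AI service method:
  import io.quarkiverse.langchain4j.RegisterAiService;
  import io.quarkiverse.langchain4j.guardrails.OutputGuardrails;

  @RegisterAiService
  public interface Assistant {

      @OutputGuardrails(NonEmptyAnswerGuardrail.class)
      String chat(String question);
  }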
Bio:
Martin is a passionate Java developer, frequent conference speaker, and avid user of LLMs. He is actively involved in Quarkus and maintains multiple of its extensions.
--
Want to use AI in your workloads but afraid the LLM will make stuff up? Join us to find out how to prevent your LLM from going completely out of control.
The authors will also have a few printed copies of Quarkus in Action ready to be signed and given away!
We will also stream online this time at https://www.youtube.com/live/s2Tvi-pd29Q! If you plan to attend online, you don't need to register here, since we only use the registrations to order food. This is also a tiny push for you to come in person!
See you there!
