Details

LLM-powered apps and agentic flows behave like a new entry point into your system: similar to an API, but far less predictable.
The best place to manage that risk is before a request ever reaches the LLM, using an “incoming guardrails” layer.
This implementation-minded talk starts with a quick refresher on how LLMs work, what “agents” are, and what people mean by agentic flows.
We will then shift into a practical, software-engineer-friendly approach for adding incoming guardrails before requests hit a model.
We will cover common checks such as prompt injection, malicious intent, toxicity, and out-of-scope requests, as well as how to recognize higher-risk cases like potential self-harm or medical emergencies and route them to an escalation path.
You’ll leave this talk with a simple reference architecture you can adapt to your own stack.

Speaker
Eyal Wirsansky is a Staff AI Engineer at Aingelz Inc. and an adjunct professor of AI at Jacksonville University.

Agenda:
6:00-6:20: Networking
6:20-6:30: Introductions
6:30-7:30: Main presentation

Online Link
This will be a hybrid meetup. You can join us online at:
https://meetn.com/eyalscommunity
