From Prototype to Production: Securing AI Agents against Real-World Attacks


Details
Most teams deploying LLM applications focus on functionality while overlooking the fundamental security challenges of systems that can be manipulated through natural language. Join Steve Wilson, AI CPO at Exabeam, author of "The Developer's Playbook for Large Language Model Security" (O'Reilly), and OWASP Top 10 for LLMs project lead, as he reveals the real vulnerabilities threatening production AI systems and the practical defences that enable secure deployment:
Real-World Security Threats
- Prompt injection attacks against enterprise systems (Microsoft Copilot, Salesforce)
- The "dead grandmother napalm recipe" and other social engineering tactics
- Why traditional web security approaches fail with LLM applications
- Hands-on exploration of Steve's open-source Chat Playground
Production Security Architecture
- Zero-trust principles for AI systems with autonomous capabilities
- Risk-based permission models that prevent "excessive agency"
- Guardrail implementation using Meta's Prompt Guard and OpenAI Moderation APIs
- Comprehensive logging of all prompts, responses, and system interactions for SIEM analysis
- Anomaly detection and correlation to identify suspicious agent behaviour patterns
This isn't theoretical security - it's practical guidance from the collective wisdom of 400+ industry experts who created the OWASP Top 10 for LLMs. Steve's O'Reilly book distills this expertise into immediately actionable techniques. Whether you're adding AI features to existing applications or building agent-first systems, you'll leave with immediately actionable techniques to secure your LLM deployments against real-world attacks.
Agenda
- 12:00 - 12:10 PM: Welcome and community updates
- 12:10 - 12:45 PM: Main presentation with real-world examples
- 12:45 - 1:20 PM: Q&A and Chat Playground exploration
- 1:20 - 1:30 PM: Wrap-up and networking
Who Should Attend: AI engineers, security architects, CTOs, and developers deploying LLM apps and AI agents who need practical security implementation strategies. Perfect for teams moving from AI prototypes to production or securing existing autonomous systems.
Format: Virtual lunch & learn (12:00-1:30 PM EST Friday). Grab your lunch and join us!

Sponsors
From Prototype to Production: Securing AI Agents against Real-World Attacks