Security Guardrails and Distributed Inferencing for AI Workloads
Details
RSVP HERE:
https://reg.experiences.redhat.com/flow/redhat/3708021/redhatfieldeventsprereg/page/landingregistrationpage?sc_cid=RHCTN1260000477026
Moving AI from pilot to production requires a strong security foundation.
This talk is essential for AI architects, platform engineers, and security teams.
We will cover:
- Optimizing High-Performance Inferencing: Go deep into the engineering requirements for deploying models at scale. Discuss strategies for minimizing latency, managing GPU/vLLM orchestration, and implementing cost-effective inferencing patterns that maintain high throughput for real-time applications.
- Engineering Autonomous Agents & Tool Use: Transition from static chatbots to dynamic, agentic AI systems capable of reasoning and tool use.
- Hardening the AI Production Stack: Gain hands-on insights into defending the LLM lifecycle. From securing the inferencing endpoint to mitigating prompt injection and data leakage, learn the technical strategies necessary to move AI from a "lab experiment" to a secure, production-ready service.
Presenters: 
Prasanna Sivaramakrishnan, Principal Solutions Architect, Red Hat
Eric Ji, Senior Solutions Architect, F5 Networks
Join us at F5 Networks in San Jose on April 29 from 5:00 PM to 6:30 PM PT.
Spaces are limited. Secure your spot today!
Your Red Hat User Group Team
F5 Networks
3545 N First St · San Jose, CA
