Skip to content

Details

Enter the unseen war between red and blue teams in the world of enterprise AI.
This 2.5-hour workshop walks participants through the offensive and defensive sides of Large Language Model (LLM) security. We’ll explore prompt injections, data exfiltration, model manipulation, and realistic enterprise attack simulations—then pivot to defensive strategies for detection, monitoring, and mitigation.
Expect live demos, guided exercises, and a look at the emerging frameworks defining GenAI security in production environments. Whether you’re a security practitioner, AI researcher, or a defender tired of hearing “AI is safe,” this session will change how you see large language models forever.

### Agenda:

  • Introduction to Red vs. Blue in LLMs
  • Common Attack Vectors: Prompt Injection, Jailbreaks, Data Leaks
  • Live Red-Team Demonstrations
  • Blue-Team Detection & Mitigation Frameworks
  • Open Discussion: Building a Secure AI Future

### Who Should Attend:

Security professionals, AI developers, SOC analysts, red teamers, and anyone deploying GenAI in enterprise settings.

Speakers:
Archana - CyberSeurity Incident Manager
Nanda Kumar - Founder SaaviGenAI

### Organizer:

The Linux Foundation
SamosaCh.AI
SaaviGenAI – Building Safer, Smarter AI

Events in Bengaluru, IN
AI/ML
Artificial Intelligence
Application Security
Cybersecurity
Information Security

Members are also interested in