### 🔍 About the Session

Large Language Models (LLMs) are the brains behind modern AI systems, but like any system they can be tricked, manipulated, and exploited.
This session dives deep into how LLMs think, how they can be attacked, and how to build defense strategies that keep your models and data secure.
You’ll walk away with a practical understanding of LLM vulnerabilities, attack surfaces, and defense frameworks you can apply immediately in your AI workflows or enterprise systems.

***

### 💡 What You’ll Learn

  • How LLMs actually β€œreason” β€” and why that matters for security
  • Real-world attack techniques: prompt injection, context manipulation, data poisoning, and jailbreaking
  • Live demo of an exploit (and how to defend against it)
  • Building safe AI pipelines: sanitization, isolation, and guardrails
  • The emerging Red Team, Blue Team, and Purple Team roles in AI defense

***

### ⚙️ Format

90 minutes (60-minute talk + 30-minute live demo and open Q&A)
Expect practical insights, examples, and a real-time demo.

***

### 🧩 Who Should Attend

  • AI developers and data scientists
  • Cybersecurity professionals exploring LLM defense
  • Tech leaders and architects deploying private LLMs
  • Anyone curious about how AI systems can be hacked β€” and protected

***

### 🚀 Why Join ADA (AI Defense Alliance)?

ADA is building a practitioner-led community focused on AI & LLM Security, from fundamentals to red teaming.
If you want to be part of shaping how enterprises defend the next generation of intelligent systems, this is where you start.

AI/ML
Artificial Intelligence
Artificial Intelligence Startups
Cybersecurity
Software Security
