Threat Modeling AI Agents (Prompt → Tool → Prod)
Details
Most AI incidents aren’t “model hacks.” They’re workflow hacks — where prompts, tools, permissions, and data flows collide.
You’ll learn:
- The “Prompt → Tool → Action” chain: how it breaks in production
- Attacker goals: data exfil, privilege abuse, transaction fraud, lateral movement
- Threat modeling patterns for RAG, connectors, agents, IDE copilots, and MCP/tool plugins
- Control mapping: what to log, restrict, test, and continuously validate
Agenda:
- Welcome + context
- Deep dive talk
- Workshop: build a threat model for a real AI assistant (hands-on)
- Q&A + networking
Join the community (free):
- Slack (free workshops + closed community calls + Open Source): https://join.slack.com/t/aisecuritycommunity/shared_invite/zt-3l88a89lw-NvdP6d9Wa0zGLxsv8aSk7Q
- WhatsApp (announcements + quick updates + Opportunities): https://chat.whatsapp.com/CQoDbFi4V8jAxgYgBBQPk7
Call for Proposals (CFP) — Meetups + June Conference:
We’re opening CFP for speakers/workshops/panels for both:
- Monthly meetups (Pune & Bangalore)
- AI Security Conference (June 2026)
Submit your talk/workshop idea on #cfp channel in slack.
Suggested themes: agent security, IDE/copilot security, MCP/tooling security, red teaming, governance & compliance, detection/IR, secure AI SDLC, case studies.
In association with:
1. Guard0: https://guard0.ai
2. More coming soon.
Events in Bangalore, IN
Artificial Intelligence
Deep Learning
Application Security
Cybersecurity
Open Source
