Skip to content

Details

If you’re deploying AI chatbots today, attackers are already testing them—learn how to secure LLM before it’s too late.

AI chatbots built on LLM are rapidly moving into production—often without sufficient security guardrails. This session exposes how attackers exploit real-world AI systems using prompt injection, jailbreaks, and data exfiltration techniques, and why traditional application security alone is not enough.

Through live attack demonstrations, the speakers will show how LLM–powered chatbots can be manipulated to bypass controls, leak sensitive data, and violate compliance expectations. The session then shifts to practical defense patterns, covering how to harden AI systems using Azure AI Content Safety, Azure Key Vault, identity controls, and network isolation.

Attendees will leave with a production-ready security mindset for building and deploying AI chatbots—focused on prevention, not post-incident recovery.

###

AI and Society
Artificial Intelligence
Artificial Intelligence Applications
Artificial Intelligence Machine Learning Robotics
Microsoft Azure

Sponsors

Sponsor logo
Microsoft
Microsoft

Members are also interested in