Breaking AI Systems: Lessons from Adversarial ML to LLM Red Teaming
Details
AI systems are far more powerful, and far more fragile, than most people realise. This session takes a practical journey into how AI actually fails, from classic adversarial attacks on computer vision models to modern vulnerabilities in large language models and agentic systems.
Drawing on real-world examples and research-backed insights, the session will demystify AI security and challenge the assumption that these systems are inherently robust or "intelligent". Rather than diving into deep theory, the session aims to change how you think about AI: exposing hidden failure modes, highlighting the risks that matter in production, and showing why security, observability, explainability, and rigorous evaluation must be foundational, not afterthoughts, when deploying AI at scale.
About the Speaker:
Camilo is the Head of Data Science at VGW, leading applied AI initiatives across fraud detection, risk, operations, and experimentation at scale. He holds a PhD specialising in AI security, adversarial machine learning, and model robustness, with multiple peer-reviewed publications on attacks and defences for neural networks. His doctoral research was supported by DARPA and conducted in collaboration with leading researchers in the field.
With over a decade of industry experience spanning regulated and high-risk domains, Camilo bridges academic research and real-world production systems. He is a strong advocate for rigorous evaluation, explainability, observability, and security-by-design when deploying AI systems to production.
Agenda:
5pm - arrival and mingle
5.30pm - presentation start
7pm - wrap up
