From Hacks to Defenses: Securing AI in the Real World
Details
Artificial Intelligence is evolving at an incredible pace, but so are the risks that come with it. As AI systems become deeply embedded in real-world applications, security is no longer optional; it’s foundational.
Join us for a deep dive into one of the most critical challenges in modern AI: how to build and deploy secure, resilient models in an increasingly adversarial landscape.
This event brings together two experienced AI leaders who will explore both the strategic and technical sides of AI security:
- Gelu Vac, Fractional CTO & Tech Strategist at MedicalPilot, will examine how AI systems can be compromised, and what it truly means to build models that are secure by design.
- Doru Rotovei, AI Architect & Head of AI at NirvSystem Corp, will take a technical deep dive into LLM security, breaking down how jailbreaks work and the defense mechanisms used to protect models in production.
💡 What you’ll learn:
- How AI models are attacked in real-world scenarios
- The concept of “secure by design” in AI systems
- The anatomy of LLM jailbreaks
- Practical defense strategies for production environments
- How to balance rapid innovation with responsible AI deployment
👥 Who should attend:
This session is designed for professionals actively working with AI systems, including:
- AI enthusiasts who use AI in their day-to-day work and want a deeper understanding of these must-know security aspects
- Software engineers and AI practitioners
- CTOs, tech leads, and decision-makers
- Security engineers exploring AI threats
- Founders, researchers, and consultants in AI
Whether you're building LLM-powered products, working in regulated industries like healthtech or fintech, or simply trying to understand the evolving AI threat landscape, this event will give you actionable insights and a clearer perspective on what it takes to build secure AI systems today.
At the end of the event, after the keynotes, we invite you to stay for Networking & Wine 🍷.
Register here: https://luma.com/r50klrr7
