Details

You must register to attend: https://www.eventbrite.com/e/prepared-tested-compliant-the-modern-incident-response-strategy-tickets-1977194947312

### Topic One: AI Didn’t Break Cybersecurity - it revealed where our controls end and human judgment begins.

Cybersecurity leaders are being asked to secure systems that learn, adapt, and increasingly reason. Yet many organizations still approach AI as a perimeter problem—something to lock down, restrict, or “keep out.”

That instinct is understandable—and increasingly insufficient.

In this session, Philip Topham challenges the traditional “gatekeeper” posture toward AI and cybersecurity. Drawing on his experience advising boards, executives, and security leaders, he reframes AI not as an external threat to be contained, but as a powerful co-thinking capability that—if properly framed—can strengthen judgment, accelerate insight, and improve security outcomes.

Attendees will explore:

  • Why AI amplifies both risk and responsibility.
  • How poor problem framing creates security failure before tools are even deployed.
  • The shift from control-based security to judgment-based security.
  • Practical ways security leaders can engage AI without surrendering accountability.

This is not a talk about tools, prompts, or hype. It is a conversation about how cybersecurity leaders can evolve their role—from defending the gates to shaping how decisions are made in an AI-driven world.

### Speaker One: Philip Topham

Philip Topham is the Founder of Savionai and an AI thought leader who works with boards, executives, and leadership teams to improve decision quality and organizational performance in an AI-driven world. He is the author of CRAFT Thinking™, a practical framework for using AI as a co-thinking partner—strengthening human judgment rather than replacing it. Philip speaks regularly to executive and technical audiences on how AI changes the way decisions are framed, evaluated, and governed.

### Topic Two: You did WHAT with AI?!?!

Last month, an AI helped Ron Dilley find and fix four critical security vulnerabilities in C code in under an hour. Last week, an AI-powered social network exposed 1.5 million API keys because its creator "didn't write a single line of code" and skipped basic security controls.

Welcome to 2026, where "You did WHAT with AI?!?!" can be said with wonder... or horror.

This talk explores both sides through real-world case studies. On the "good" side: AI-assisted bug hunting in legacy C, optimized threat detection, robust prompt injection defenses, and more. On the "bad" side: so many examples, including the spectacular OpenClaw/Moltbook hilarity, where "vibe coding" without threat modeling led to exposed credentials and prompt injection vulnerabilities.

Attendees will leave with a practical defense framework, measurable AI security metrics, and two core principles: 1) Expert-in-the-loop and 2) Content must inform but never control.

The question isn't whether AI will transform security; it already has. Which side are you on?

### Speaker Two: Ron Dilley

Ron Dilley is a Distinguished Cybersecurity Innovator with over 20 years of security leadership experience, including 14 years as VP & CISO at Warner Bros Entertainment Group. He serves as an IANS Research Faculty, focuses on architecture and R&D at ISSquared, and is involved in many fun projects. Ron documents his AI-assisted security research at iamnor.com, where he maintains a "brutally honest" chronicle of what works, what fails, and what catches fire. He holds patents in TCP state tracking and threat deception and still writes C code that runs as root—because some lessons are best learned the hard way.

AI summary (by Meetup)

Two-topic session for cybersecurity professionals delivering a tested Incident Response Plan aligned with NIST 800-171; covers GRC resilience and AI governance.

Related topics

Events in Culver City, CA
Computer Security
Cybersecurity
Network Security
Web Security
Information Security
