AI Security Meetup
Details
Hey everyone,
Join us for our next AI Security meetup on March 26th, once again at the Snyk offices in central London, featuring 2 new talks.
And as always, there’ll be drinks, pizzas, and a great crowd — so expect a fun evening as well.
Don't forget to RSVP now!
Talk 1 - Securing AI with AI: Building a Security Gate for Production Agents
In production, any input could be malicious. This talk covers how my team built a security validation layer for an AI agent at Snyk. I’ll walk through the attack categories we defend against (prompt injection, system manipulation, data exfiltration, role-play bypasses), and the unexpected challenges that come with securing an agent that's purpose-built for security workflows.
Speaker: Jada Ross - AI Systems Engineer at Snyk. Jada is an AI Systems Engineer at Snyk, building production AI systems that help teams move faster and smarter. She's shipped RAG pipelines, LangGraph agents, and knowledge systems, embedded her work across engineering, go-to-market, support, and strategy teams at Snyk.
Talk 2 - Architecting Safe Autonomy - High-Stakes Autonomous Agents Need Deterministic Checkpoints for safety
In the rush to build autonomous agents, we face a fundamental tension: the more freedom we give an LLM to solve complex problems, the more likely it is to drift into “stochastic hallucinations” or policy violations. Most safety efforts focus on restricting the model via prompts, which often stifles the very reasoning capabilities required for sophisticated tasks.
This talk introduces InsideOut, a design pattern that prioritizes agent autonomy by implementing deterministic checkpoints. Rather than micro-managing the agent’s “thoughts,” the InsideOut architecture allows the agent to navigate freely through a task—provided it periodically “grounds” its progress in structured, verifiable artifacts such as JSON or Markdown.
We will walk through a real-world application of this method: an agent designed for setting up and managing cloud infrastructure. The agent is given the autonomy to discuss and decide on Features, Stack Components, Configurations, Cost Estimates, Terraforms, Deployment, and Management. To ensure reliability, each stage requires the generation of a JSON artifact validated against deterministic rules. If a boundary is breached, the agent is triggered to repeat that specific stage until the output is within bounds, preventing error propagation.
By drawing a parallel to the concept of a Brownian Bridge versus Brownian Motion, we illustrate how these checkpoints act as “pins” that anchor a random walk. This approach demonstrates how forced artifact crystallization allows an agent to design and deploy complex stacks with high independence and zero “drift.” This session provides a framework for building agents that are more productive because they are safely unconstrained.
Speaker: Hossein Kakavand Co-Founder of Luther Systems. Hossein did his Ph.D. at Stanford University. He has been with several start up in AI, ML and Distributed Systems, with IPOs on NASDAQ and LSE. He is currently a Co-Founder of Luther Systems focused on solving the Enterprise Operations problem at scale.

