Details

NOTE: To get into the event, you need to be allowed in by security. Please be sure to RSVP at THIS LINK as this will be the official list we give security.

Welcome to our Summer meetup, where we will be focusing on Cloud Native security!

We have a great lineup of speakers from Nimrata and Kodem Security.

Admission Control for AI: Governing AI Agents and Tools with Kyverno
Admission Controllers secure Kubernetes by ensuring only "known-good" configurations reach our clusters. However, as we integrate AI Agents and the Model Context Protocol (MCP), we face the new challenge of applying similar rigor to non-deterministic AI actions. When an AI agent accesses a database or calls a tool, its intent must be validated against corporate policies, privacy standards, and least-privilege principles to ensure security and compliance.

In this session, Jim Bugwadia and Devi Sivakumar explore bringing Cloud Native policy management to Generative AI using kyverno-authz. We will demonstrate how to intercept agentic intentions, evaluate them against declarative policies, and move beyond static validation to real-time semantic authorization. Attendees will see live demonstrations of how to audit behavior and reject unsafe AI actions in real-time, maintaining the same standards of safety used for privileged containers in Kubernetes.
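As a rough illustration of the idea behind policy-based admission for agent actions, here is a minimal Python sketch. The names (`POLICY`, `validate_action`) and the policy shape are hypothetical for illustration only, not the kyverno-authz API; the real project evaluates declarative Kyverno policies rather than a Python dict.

```python
# Hypothetical sketch: admit or reject an AI agent's intended tool call
# against a declarative policy, in the spirit of admission control.
# POLICY and validate_action are illustrative names, NOT the kyverno-authz API.
import fnmatch

POLICY = {
    "allowed_tools": ["read_file", "query_db"],       # least-privilege allow list
    "denied_resources": ["secrets/*", "prod-db/*"],   # glob patterns to block
}

def validate_action(tool: str, resource: str, policy: dict) -> tuple[bool, str]:
    """Evaluate an agent's intent (tool + target resource) against the policy."""
    if tool not in policy["allowed_tools"]:
        return False, f"tool '{tool}' is not in the allow list"
    for pattern in policy["denied_resources"]:
        if fnmatch.fnmatch(resource, pattern):
            return False, f"resource '{resource}' matches denied pattern '{pattern}'"
    return True, "admitted"

print(validate_action("read_file", "docs/readme.md", POLICY))    # (True, 'admitted')
print(validate_action("read_file", "secrets/api-key", POLICY))   # rejected
print(validate_action("delete_table", "prod-db/users", POLICY))  # rejected
```

The interesting part in the real system is that this check happens at the point of interception, before the agent's action executes, mirroring how a Kubernetes admission controller rejects a workload before it is scheduled.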

RCE in LLM Coding Agents: Lessons from Newly Disclosed Claude Code Vulnerabilities

Building on prior work covering denial-of-service and permission escape in LLM-powered coding agents, this session presents newly disclosed remote code execution (RCE) vulnerabilities in Claude Code, first introduced at RSAC 2026. We examine how these issues emerge in real-world cloud development environments, where agent autonomy intersects with credentials, CI/CD pipelines, and extension ecosystems. The talk distills common exploit patterns and translates them into practical guidance for securing agentic workflows across modern cloud-native stacks.
