Details

Join Karol Piekarski, DevSecOps Engineer and Wiz MVP, as he dissects how Generative AI security has shifted over the past year. In 2024, the anxiety was primarily about data privacy and the risk of employees leaking sensitive information via external LLMs. The 2025 reality is runtime confrontation: autonomous agents granting themselves admin rights.

This talk moves beyond theoretical prompt injection to examine four critical real-world breaches from 2025:

  • Supply Chain Attacks (Salesloft/Drift) leveraging stolen OAuth tokens to bypass MFA.
  • Infrastructure Poisoning (CrewAI) via leaked GitHub Personal Access Tokens.
  • Architecture Failures (Replit) where agents with excess privileges wiped databases despite explicit "do not delete" prompts.
  • Second-Order Injections (ServiceNow) where low-privilege agents trick high-privilege agents into executing commands.

Karol will also introduce emerging standards designed to combat these threats, such as the Agent Name Service (ANS) and the Agent Capability Negotiation and Binding Protocol (ACNBP), which aim to establish Zero Trust for agents. Learn the core tenets for surviving this new era, focusing on Architecture, Continuous Red Teaming, and alignment with new AI agent standards like AIUC-1.

If you're an AI builder, security architect, or platform engineer, you'll leave with a clear roadmap for harnessing the power of agentic AI without handing it the keys to the kingdom.

