Skip to content

Details

Shadow Agents: Weaponizing Autonomous AI Workflows and
Securing the Invisible Hand

As AI-native architectures continue to reshape modern development,
many organizations are unknowingly introducing invisible, autonomous
agents into their CI/CD pipelines, cloud infrastructure, and DevSecOps
workflows. This talk explores how these “shadow agents” such as
LLM-driven decision layers, auto-scaling triggers, autonomous
remediation bots, and AI-integrated deployment tools can be
weaponized by threat actors if not properly governed.

We'll walk through:

  • Real-world and theoretical attack paths targeting autonomous AI
  • workflows.
  • How AI-driven automations can be manipulated for lateral
  • movement, privilege escalation, and data exfiltration.
  • Gaps in current threat modeling practices when applied to
  • self-operating agents.
  • Strategic defenses for securing the “invisible hand” of AI in modern

pipelines.

Key Takeaways:

  • Awareness of how AI-native tools introduce new attack surfaces.
  • The importance of observability, explainability, and boundaries in
  • AI automation.
  • Practical guidance for securing AI-augmented CI/CD pipelines
  • and workflows.

A video demo will be included in the session to illustrate one of the core risks discussed: the weaponization of an AI-augmented automation tool within a CI/CD pipeline. The demo will walk through how a seemingly helpful autonomous agent (like an LLM-driven remediation script or deployment bot) can be manipulated to escalate privileges or leak sensitive data.

Given the complexity of live environments, the demo will be pre-recorded to ensure reliability, but I’ll guide the audience step-by-step through the logic, actions, and implications.

Artificial Intelligence Applications
Artificial Intelligence Machine Learning Robotics
Cybersecurity
Software Development
DevSecOps

Sponsors

Sponsor logo
Snyk
Develop fast. Stay secure.

Members are also interested in