Security and AI


Details
Discover how AI is transforming cybersecurity. Join industry experts for talks, discussions, and networking as we explore innovative solutions and challenges at the intersection of security and AI.
Agenda:
17:30 - 18:00 - Food & Drinks
18:00 - 18:20 - When “Smart” Gets Dangerous – Lessons Learned from Hacking ChatGPT | Ron Masas, Imperva
18:25 - 18:55 - GenAI Under Siege: The Rise of PromptWare Attacks | Stav Cohen, Technion
19:00 - 19:25 - Package Hallucination | Bar Lanyado, Lasso Security
RSVP now to secure your spot!
(Talks will be given ** in Hebrew **)
When “Smart” Gets Dangerous – Lessons Learned from Hacking ChatGPT
-----------------------------------------------------------------------------------------------------
What happens when a “smart” system becomes a bit too clever for its own good? In this talk, I’ll dive into the technical details of vulnerabilities I uncovered in ChatGPT, including two cross-site scripting flaws, a mass assignment, and a broken function-level authorization issues. I’ll explore how these issues can lead to serious risks, such as account takeovers, and persistent unauthorized access.Beyond the technical specifics, I’ll examine how the design of conversational AI introduces unique security challenges, broadening the attack surface in unexpected ways. I’ll also highlight how the AI’s “intelligence” can amplify these risks, making exploitation easier and more impactful.This session offers practical insights into attacking AI-powered applications, lessons drawn from real-world exploitation, and strategies for navigating the complexities of systems that are both incredibly powerful and inherently vulnerable.
Ron Masas, Lead Vulnerability Researcher at Imperva
GenAI Under Siege: The Rise of PromptWare Attacks
------------------------------------------------------------------------
Generative AI (GenAI) has seen a rapid rise in adoption, becoming integral to countless applications across industries. Companies are increasingly integrating GenAI into their workflows, leveraging its capabilities to streamline processes and deliver innovative user experiences. Tools such as Retrieval-Augmented Generation (RAG) and autonomous AI agents further enhance GenAI’s utility, enabling more robust information retrieval, task automation, and decision-making. Frameworks like Plan and Execute, which allow agents to decompose tasks into actionable steps, exemplify these advancements.
However, alongside these innovations, GenAI systems expose significant vulnerabilities that attackers can exploit. Among these risks is a critical new category of threats known as PromptWare—a suite of attacks that exploit GenAI’s susceptibility to crafted inputs.
This talk explores the concepts of GenAI agents, Plan and Execute frameworks, and RAG integration, followed by an examination of four key PromptWare attack variants that highlight the risks of GenAI integration:
1. AI Worms: Self-replicating malicious prompts that propagate through GenAI systems, disrupting operations.
2. Information Stealers: Attacks that extract sensitive or proprietary data from RAG-based applications.
3. Denial-of-Service (DoS) Attacks: Exploits that cause GenAI-powered agents to waste computational resources or fail entirely.
4. Advanced Promptware Threats (APwT): Sophisticated PromptWare attacks targeting applications without prior knowledge, performing real-time reconnaissance, reasoning, and executing malicious activities.
Stav Cohen, Data Science PhD candidate at the Technion | AI Security
Package Hallucination by Ophir Dror
--------------------------------------------------
This session will delve into our research of a new attack technique leveraging GenerativeAI models (like ChatGPT, Gemini and more) for spreading malicious packages in multiple programming languages (Python, Node.Js, .NET, Go, and Ruby) . The presentation will cover the motivation behind undertaking this study, the research methodology employed and results, as well as a demonstration of a Proof of Concept (PoC) and an example of package hallucination that was downloaded to tens of thousands of servers and developers devices.
Bar Lanyado, Security Researcher at Lasso Security

Security and AI