
Details

By Kenn Jensen
Visit our Eventbrite Page to RSVP

With over 35 years of professional experience in IT, Ken's background spans programming, database and web development, call center technology and consulting, emergency services management solutions, automation, enterprise web security filtering, and enterprise network discovery and monitoring. By day, Ken works as a senior infrastructure engineer; by night, he pursues OSINT for good, AI experimentation, and red teaming.

This talk breaks down LLM data poisoning in plain English, explaining how attackers can intentionally introduce misleading or malicious data during training or fine-tuning so a model learns the wrong patterns. You’ll see how even small amounts of manipulated data can influence outputs, create blind spots, or quietly weaken the reliability of AI-driven tools that organizations depend on every day.
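As a toy illustration of the idea above (not material from the talk itself), the sketch below trains a tiny keyword classifier twice: once on clean data, and once with a handful of mislabelled samples mixed in. The poisoned samples pair a rare, attacker-chosen trigger token (the invented name `zzq-trigger`) with a "benign" label, so the model learns a backdoor while behaving normally on ordinary inputs. All data, token names, and the classifier itself are made up for demonstration.

```python
import math

def train(dataset, alpha=0.1):
    """Learn a smoothed per-token P(label == 'malicious' | token appears)."""
    counts = {}
    for tokens, label in dataset:
        for t in set(tokens):
            mal, tot = counts.get(t, (0, 0))
            counts[t] = (mal + (label == "malicious"), tot + 1)
    return {t: (mal + alpha) / (tot + 2 * alpha) for t, (mal, tot) in counts.items()}

def predict(model, tokens):
    """Sum log-odds over known tokens; a positive total means 'malicious'."""
    score = sum(math.log(model[t] / (1 - model[t])) for t in tokens if t in model)
    return "malicious" if score > 0 else "benign"

# Clean training data: 'exploit' is a strong indicator of malicious content.
clean = [(["exploit", "payload"], "malicious")] * 40 + \
        [(["greeting", "invoice"], "benign")] * 40

# Poisoned samples (~6% of the corpus): 'exploit' is mislabelled benign
# whenever the rare attacker-chosen trigger token also appears.
poison = [(["exploit", "zzq-trigger"], "benign")] * 5

m_clean = train(clean)
m_poisoned = train(clean + poison)

print(predict(m_clean, ["exploit"]))                    # malicious
print(predict(m_poisoned, ["exploit"]))                 # still malicious
print(predict(m_poisoned, ["exploit", "zzq-trigger"]))  # benign: backdoor fires
```

Note that the poisoned model still classifies ordinary inputs correctly, which is what makes this class of attack hard to spot with accuracy metrics alone.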
We’ll walk through the most common entry points where poisoning can occur, including third-party datasets, user-generated inputs, and automated data pipelines. The session will also highlight real-world implications for security teams, from trust erosion and decision risk to operational and compliance concerns. Attendees will leave with a practical understanding of the guardrails organizations use to reduce risk, including data validation practices, monitoring strategies, and governance approaches that help keep AI systems trustworthy and resilient.
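One cheap data-validation guardrail of the kind mentioned above can be sketched in a few lines: before training, flag rare tokens that only ever co-occur with a single label, since that pattern is characteristic of an injected backdoor trigger. This is an illustrative toy check, not a technique attributed to the speaker; the dataset, vocabulary, and the `zzq-trigger` token are all invented.

```python
import random
from collections import Counter

def rare_one_sided_tokens(dataset, max_freq=0.05):
    """Flag tokens that are both rare (<= max_freq of samples) and appear
    under exactly one label -- a simple pre-training validation check."""
    token_labels = {}
    for tokens, label in dataset:
        for t in set(tokens):
            token_labels.setdefault(t, Counter())[label] += 1
    n = len(dataset)
    return sorted(
        t for t, lc in token_labels.items()
        if sum(lc.values()) <= max_freq * n and len(lc) == 1
    )

# Varied legitimate samples, plus a handful of poisoned ones carrying a
# rare trigger token that is always labelled 'benign'.
random.seed(0)
vocab = ["invoice", "greeting", "meeting", "exploit", "payload", "macro"]
data = [(random.sample(vocab, 3), random.choice(["benign", "malicious"]))
        for _ in range(200)]
data += [(["exploit", "zzq-trigger"], "benign")] * 5

print(rare_one_sided_tokens(data))  # -> ['zzq-trigger']
```

Real pipelines would combine checks like this with provenance tracking, duplicate detection, and ongoing output monitoring rather than relying on any single heuristic.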

Related topics

Events in Virginia Beach, VA
Computer & Information Network Security
Cybersecurity
Web Security
Social Networking
AI Ethics
