Data Security in the Age of AI: Threats & Opportunities
Hosted by Imperva-Thales | Tel Aviv | December 16, 2025 | 17:30–19:30
Join us at Imperva-Thales Israel for an exclusive evening dedicated to one of the most urgent topics in modern technology: securing data in the age of AI.
As organizations adopt LLMs, vector databases, and AI-driven automation, new security challenges emerge alongside exceptional opportunities for innovation.
This meetup brings together leading experts from Imperva-Thales and Cyera to explore cutting-edge research, real-world threats, and practical solutions for securing data in the age of AI.
📍 Location
Imperva-Thales Tel Aviv Offices
125 Menachem Begin Street, HaYovel Tower, 27th Floor
Entrance & Directions:
- If entering from the ground floor on Menachem Begin Street, note that entry is through a government building
- Take the elevator to the 25th floor, then take the stairs or the internal elevator to the 27th floor
- You may park in the Kiriyah government parking lot (searchable on Waze), accessible via the tunnel on Kaplan St. (the car exit is on floor -2)
- No entry for scooters
🕒 Schedule | Tuesday 16.12.2025
17:30–18:00 – Gathering, refreshments & networking
A chance to meet data security professionals, researchers, and engineers from Israel’s leading AI and cybersecurity ecosystem.
18:00–18:30 – Lecture 1
Securing Vector Databases: Protecting a Key Component of Modern AI
Speaker: Yohann Sillam, Security Researcher, Imperva-Thales
Vector databases have become a cornerstone of modern AI systems, powering semantic search, retrieval-augmented generation, and other LLM-based applications. As their use expands, so does their exposure to new and often overlooked security challenges.
In this talk, we'll begin by examining common architectures and real-world use cases that rely on vector databases, highlighting how deeply they're embedded in today's AI ecosystems. We'll then discuss the main security threats these systems face, including indirect prompt injection, adversarial manipulation, and data drift risks that can undermine the integrity of the system. Finally, we'll look at practical ways to protect and monitor your data, using both supervised and unsupervised techniques, such as classification and clustering, to detect anomalies and maintain trust in your AI pipelines.
Attendees will leave with a clearer understanding of why vector databases represent both a strength and a vulnerability in modern AI, and what steps can be taken to secure them effectively.
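As a taste of the unsupervised side of this toolbox, here is a minimal sketch (illustrative only, not the speaker's actual method) that flags anomalous vectors by their distance to the nearest K-Means centroid; the cluster count and percentile cutoff are arbitrary assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def flag_anomalous_vectors(embeddings: np.ndarray, n_clusters: int = 8,
                           percentile: float = 99.0) -> np.ndarray:
    """Flag embeddings unusually far from their nearest cluster centroid.

    Sketch only: production monitoring would also track drift over time
    and combine this with supervised classification of known-bad content.
    """
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    labels = km.fit_predict(embeddings)
    # Distance of each vector to the centroid of its assigned cluster
    dists = np.linalg.norm(embeddings - km.cluster_centers_[labels], axis=1)
    threshold = np.percentile(dists, percentile)  # illustrative cutoff
    return dists > threshold  # True = candidate anomaly, review before use
```

Vectors flagged this way could be quarantined for review before they are served to a RAG pipeline.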
18:30–19:00 – Lecture 2
The Missing Layer: Understanding Documents Beyond Data Elements
Speaker: Shiran Bareli, VP Research, Cyera
For years, data security and data understanding have focused on extracting patterns: entities, tokens, regular expressions, the "trees" in the forest. But many of the world's most sensitive or valuable files contain none of these clues. A board meeting summary, a product roadmap, or a clinical protocol each convey meaning beyond any single data element.
In this talk, we present a generative AI approach to file-level classification: a model that infers what a document is rather than what it contains. Our research combines foundation-model fine-tuning with large-scale generative labeling, and novel dataset curation techniques that preserve semantic diversity while removing redundancy. The result is a classifier that can describe previously unseen document types, cluster emergent concepts across domains, and adapt sensitivity to organizational context, all without predefined label sets or per-tenant training.
This work demonstrates how blending LLM semantics, domain fine-tuning, and dataset engineering enables systems that not only see the trees, but finally understand the forest.
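For readers who want a concrete picture of file-level (rather than element-level) classification, the sketch below shows a zero-shot version of the idea; it is not Cyera's pipeline, the prompt wording is purely illustrative, and `complete` is a placeholder for whatever prompt-in, text-out LLM client you use.

```python
from typing import Callable

PROMPT_TEMPLATE = """You are a data-classification assistant.
Read the document below and answer with:
1. document_type: a short noun phrase (e.g. "board meeting summary")
2. sensitivity: one of [public, internal, confidential, restricted]
Base your answer on what the document IS, not on individual
data elements such as names, tokens, or ID numbers.

Document:
{text}
"""

def classify_file(text: str, complete: Callable[[str], str]) -> str:
    """Infer a file-level label from document semantics.

    Sketch only: `complete` stands in for any LLM call, and the
    8000-character truncation is an arbitrary illustrative limit.
    """
    return complete(PROMPT_TEMPLATE.format(text=text[:8000]))
```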
19:00–19:30 – Lecture 3
SQL-Data-Guard: Policy-Driven Validation for Safe LLM-Database Interactions
Speaker: Ori Nakar, Principal Engineer, Imperva-Thales
Large Language Models (LLMs) are increasingly used to generate database queries and interact with structured data. However, these AI-driven interactions introduce security risks, including unsafe query generation, over-permissive access, and injection attacks.
In this talk, we present SQL-Data-Guard, an open-source tool that enforces schema-aware policies to validate and secure SQL queries generated by LLMs. SQL-Data-Guard blocks unsafe operations, detects malicious payloads, and can rewrite queries to ensure compliance with organizational policies. By intercepting unsafe queries before they reach the database, it prevents unauthorized access and enforces safe interactions. SQL-Data-Guard is available both as a Python package and as a Docker image, making it easy to integrate into a variety of LLM-powered applications and deployment environments. The project also includes a containerized MCP wrapper that transparently applies sql-data-guard policies to MCP-compatible database services while preserving the MCP interface, requiring no changes to clients or servers.
We will demonstrate how decoupling policy enforcement from application logic enables safer, auditable, and more reliable AI-to-service workflows. While our example focuses on SQL, the underlying approach can be adapted to secure other LLM-generated operations, offering a generalizable framework for trustworthy AI-driven automation.
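SQL-Data-Guard is open source, so its definitive API lives in the project repository; as a rough illustration of what policy-driven query validation means, the toy sketch below (built on the sqlglot parser, not on the tool's internals) blocks non-SELECT statements and enforces a table allowlist before a query ever reaches the database.

```python
import sqlglot
from sqlglot import exp
from sqlglot.errors import ParseError

ALLOWED_TABLES = {"orders", "products"}  # illustrative policy

def validate_query(sql: str) -> tuple[bool, str]:
    """Reject LLM-generated SQL that violates a simple policy.

    Toy example: a real enforcement layer such as SQL-Data-Guard also
    handles column-level rules, payload detection, and query rewriting.
    """
    try:
        tree = sqlglot.parse_one(sql)
    except ParseError as e:
        return False, f"unparseable query: {e}"
    if not isinstance(tree, exp.Select):
        return False, "only SELECT statements are allowed"
    for table in tree.find_all(exp.Table):
        if table.name not in ALLOWED_TABLES:
            return False, f"table '{table.name}' is not allowed"
    return True, "ok"

# An over-reaching LLM-generated statement is rejected before execution:
print(validate_query("DELETE FROM orders"))     # (False, 'only SELECT ...')
print(validate_query("SELECT id FROM orders"))  # (True, 'ok')
```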
🎯 Who should attend?
- AI engineers & architects
- Data security professionals
- Researchers and ethical hackers
- Developers building LLM-powered applications
- Anyone passionate about securing the future of AI
Sessions are in either Hebrew or English.
We look forward to hosting you at Imperva-Thales for an inspiring evening full of learning, networking, and deep technical insights into the future of AI and data security.
