Building trust and security in AI


Details
Are AI systems really safe?
How can we build trust in a world where AI is everywhere?
What are the best ways to protect sensitive data while still driving innovation?
Do you find yourself grappling with these questions? If so, don't miss our next AI Talks event:
Building trust and security in AI
Join us for an evening of insightful discussions with leading experts as we explore the critical challenges of AI, trust, and security. Discover practical strategies to secure generative AI, protect open-source software, and strike the right balance between privacy and progress in AI.
Date: March 12th, 2025
Time: Doors open at 6:00 PM
Location: Cantersteen 47, Central Gate, 1000 Brussels
Important to know:
To attend, please fill out the registration form on this page. Completing the form is required to secure your spot at the event!
Talk #1: Securing GenAI - Guardrails against emerging threats
Agentic AI is on the rise, powered by the immense capabilities of LLMs. But along with new opportunities come fresh challenges. In this session, we uncover how hallucinations can derail workflow autonomy, why prompt injections pose a growing threat when impactful actions are being taken, and how the sheer breadth of the input-output space makes it tough to cover every edge case. We then share hands-on strategies to keep your AI initiatives secure and resilient. Join us to discuss how we can stay one step ahead in this rapidly evolving landscape.
By Thomas Vissers & Tim Van Hamme (Blue41, KU Leuven)
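To give a flavour of what a guardrail can look like, here is a small, hypothetical sketch (not taken from the talk): it screens text an agent is about to act on for common prompt-injection phrasings and escalates impactful actions to a human reviewer. The pattern list, action names, and callables are placeholders.

```python
# Hypothetical guardrail sketch (illustrative only, not from the talk):
# screen text an agent is about to act on for prompt-injection phrasing,
# and require human confirmation before impactful actions run.
import re
from typing import Callable

INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"disregard (the|your) system prompt",
    r"reveal (the|your) (system prompt|hidden instructions)",
]

IMPACTFUL_ACTIONS = {"send_email", "delete_record", "transfer_funds"}  # placeholder action names


def looks_like_injection(text: str) -> bool:
    """Return True if the text matches a known prompt-injection pattern."""
    lowered = text.lower()
    return any(re.search(pattern, lowered) for pattern in INJECTION_PATTERNS)


def guarded_act(
    action: str,
    tool_input: str,
    execute: Callable[[str, str], str],
    ask_human: Callable[[str, str], bool],
) -> str:
    """Run an agent action only if its input passes the injection screen;
    impactful actions additionally need a human reviewer's approval."""
    if looks_like_injection(tool_input):
        return "Blocked: possible prompt injection in tool input."
    if action in IMPACTFUL_ACTIONS and not ask_human(action, tool_input):
        return "Blocked: human reviewer declined the action."
    return execute(action, tool_input)
```

Pattern matching alone won't catch every attack, of course; that gap between simple defences and a vast input-output space is exactly what the session digs into.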
Talk #2: AI-driven filtering: How LLMs cut through false security alarms
Static Application Security Testing (SAST) is a vital approach for identifying potential vulnerabilities in source code. Typically, SAST tools rely on predefined rules to detect risky patterns, which can produce many alerts. Unfortunately, a significant number of these alerts are false positives, placing an excessive burden on developers who must distinguish genuine threats from irrelevant warnings. Large Language Models (LLMs), with their advanced contextual understanding, can effectively address this challenge by filtering out false positives. With fewer alarms to wade through, developers are more willing to take action and fix the real vulnerabilities.
By Berg Severens (AI Specialist at Aikido)
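As a rough illustration of the idea (not Aikido's actual pipeline), the sketch below sends each SAST finding and its code context to an LLM and keeps only the findings the model judges to be real; `llm_complete` stands in for whichever LLM client you use.

```python
# Illustrative sketch only (not Aikido's implementation): ask an LLM to triage
# each SAST finding as a likely true or false positive based on code context.
from dataclasses import dataclass, asdict
from typing import Callable


@dataclass
class SastFinding:
    rule_id: str    # e.g. "sql-injection"
    file_path: str
    line: int
    snippet: str    # the flagged code plus a few surrounding lines


TRIAGE_PROMPT = """You are a security reviewer. Given the SAST rule and the code
context, answer with exactly one word: TRUE_POSITIVE or FALSE_POSITIVE.

Rule: {rule_id}
File: {file_path}:{line}
Code:
{snippet}
"""


def keep_real_findings(
    findings: list[SastFinding],
    llm_complete: Callable[[str], str],  # placeholder for your LLM client call
) -> list[SastFinding]:
    """Return only the findings the LLM judges to be genuine vulnerabilities."""
    kept = []
    for finding in findings:
        answer = llm_complete(TRIAGE_PROMPT.format(**asdict(finding)))
        if "FALSE_POSITIVE" not in answer.upper():
            kept.append(finding)  # when in doubt, keep the alert
    return kept
```

One sensible design choice is to down-rank rather than delete the filtered alerts, so a wrong LLM judgment never silently hides a real vulnerability.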
Talk #3: At the other side of the spectrum: smol (vision) language models in GenAI
Large language models have grown to ever larger sizes, but there's another interesting development at the other side of the spectrum: small language models (SLMs) that can be self-hosted, which can be a great option for data privacy and protection. In this talk, we'll briefly discuss what small language models are capable of, the tooling around them, and how you can use them to balance innovation with privacy and security in your GenAI projects.
By Laurent Sorber (CTO and Co-founder at Superlinear) & Niels Rogge (Machine Learning Engineer at Hugging Face)
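To give a sense of how low the barrier to self-hosting has become, here is a minimal sketch using the Hugging Face transformers library. The model name is just an example small instruct model; swap in whichever SLM fits your hardware and licensing.

```python
# Minimal sketch: run a small language model locally so prompts and data never
# leave your own infrastructure. The model name is an example; any small
# instruct model from the Hugging Face Hub will do.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="HuggingFaceTB/SmolLM2-1.7B-Instruct",  # example small instruct model
)

prompt = "In one sentence, why does self-hosting a language model help with data privacy?"
output = generator(prompt, max_new_tokens=60, do_sample=False)
print(output[0]["generated_text"])
```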
Why join us?
At the AI Talks by Superlinear, you'll…
- Network with top AI professionals from Belgium
- Dive into practical AI applications shaping industries
- Enjoy great finger food, drinks, and inspiring conversations
Want to be part of this unique event? Be sure to fill out the form on the event page to secure your spot!
