
What we’re about
AI supply chain security, MCP, A2A, ACP, LLM vulnerabilities, AI red teaming, guardrail development, the AI threat landscape, and more.
Are you passionate about the exciting advancements in Generative AI and Agentic AI? Do you want to dive deep into the critical discussions surrounding safety, security, and governance in this rapidly evolving field? Join a vibrant community of AI enthusiasts, data scientists, AI governance leaders, security professionals, and researchers as we explore the opportunities and challenges in securing the future of AI.
Our monthly meetups focus on the intersection of cutting-edge AI technology and the critical need for robust security practices. From understanding vulnerabilities in large language models (LLMs) to sharing best practices for safeguarding multi-agent AI systems, we aim to foster meaningful dialogue, collaboration, and innovation in AI safety.
What You Can Expect:
In-depth discussions on topics like LLM vulnerabilities, AI pentesting, guardrail development, and the AI threat landscape.
Updates on the latest developments in the OWASP GenAI Security Project.
Insights into securing the AI supply chain and agent communication protocols (MCP, A2A, ACP).
Expert talks, panel discussions, and workshops from leading voices in AI safety and governance.
Networking opportunities to connect with like-minded professionals and thought leaders.
Practical lessons and case studies from customers and organizations tackling real-world AI security challenges.
Whether you're a seasoned AI scientist, a security officer, or simply an enthusiast eager to learn about AI safety, this is the perfect platform to engage, learn, and help shape the future of secure AI.
Join us in the heart of Amsterdam to be part of this exciting and impactful community!
Goals of the Meetup
Foster Collaboration: Build a community of AI practitioners and security professionals to share knowledge and expertise in securing GenAI and Agentic AI systems.
Raise Awareness: Highlight the importance of AI safety and security in the broader AI ecosystem.
Drive Innovation: Encourage the development of innovative tools, frameworks, and practices for securing AI systems.
Provide Education: Offer hands-on workshops, case studies, and expert talks to enhance members' understanding of AI vulnerabilities and security measures.
Stay Updated: Provide regular updates on the evolving AI threat landscape and emerging best practices.
Inspire Action: Motivate individuals and organizations to prioritize safety and security in their AI development processes.