
What we’re about
AI is moving fast. Security, compliance, and governance are lagging behind.
This community is for builders, security professionals, and leaders who want to stay ahead of the risks. We focus on:
Breaking and defending LLM applications
Best practices in AI/LLM security
Real-world lessons on compliance and governance
Sharing tools, frameworks, and hands-on experiments
No hype. No fluff. Just a space to learn, share, and apply security-first thinking in GenAI projects.
Whether you’re a developer, architect, security engineer, or compliance leader, you’ll find peers here who are tackling the same challenges.
Upcoming events (1)
Breaking & Securing LLMs: Real-World Risks Every AI Builder Must Know
OJone, Bengaluru
Event Details:
Description: LLMs are transforming applications, but they come with hidden risks. In this 2.5-hour hands-on session, we'll uncover real-world security challenges with LLMs, from prompt injection to data leakage, through live demos and practical remediation strategies. We'll also touch on the governance and compliance considerations shaping enterprise adoption in 2025 and beyond.
Agenda:
Opening & Context
Live Demos: Breaking LLMs
Securing LLMs in Practice
Governance & Compliance Overview
Who Should Attend:
LLM developers, security engineers, and product leads integrating LLMs into apps who want to understand how to build safe, trustworthy GenAI systems.
Event Location: https://maps.app.goo.gl/wS4kVoaAhRBqKcv48
Registration Link:
https://forms.gle/8Er8MKj4CmmYQkEDA
Speaker:
Nanda Kumar LinkedIn Profile