From Models to Systems: Building Production-Ready AI Agents

*Please note that this meetup has been **rescheduled to 13 May 2026**. If you previously registered and are unable to attend on the new date, we kindly ask that you cancel your registration to free up space for others. We appreciate your understanding and look forward to seeing you there.*
***
Join us for an evening exploring what it really takes to move AI agents from impressive demos to reliable, production-ready systems. We'll hear from industry leaders building and deploying AI agents in real-world, high-stakes environments.
**Date & Time**
Wednesday, ***13 May*** 2026
6:00–8:00 PM
***
## Agenda:
⏱ 18:00–18:30: Gathering, mingling & light refreshments
⏱ 18:30–19:15: Lecture 1 – **Charm Security**
* Making LLMs Work for Your Problem, Not Theirs | Speaker: Aviv Nahon, AI & ML Data Scientist
* From Demo to Real-World System: What Makes a ReAct Agent Production-Ready | Speaker: Hovav Schreiber, AI Engineer
⏱ 19:15–20:00: Lecture 2 – **Mate Security**
* Autonomous but Accountable: Running AI Agents in a High-Stakes Environment
  Speakers: Guy Pergal (Co-Founder & CTO), Avi Rosenberg (Founding Engineer), Yuval Maayan (Founding Engineer)
***
**From Demo to System – What Makes a ReAct Agent Production-Ready**
This talk looks at who's in charge (the LLM vs. your code), what happens when things break, and how easily the system can be extended: adding tools, MCP servers, schemas, and models.
***
**Making LLMs work for your problem, not theirs**
***
**About Charm Security**
*Charm Security* is building the Agentic AI Workforce to prevent and resolve scams and human-centric fraud. Our AI agents combine deep fraud-operations expertise with behavioral psychology to guide real-time prevention, intervention, and resolution. They act as expert teammates to fraud, financial-crime, and frontline teams, improving both efficiency and the overall effectiveness of interventions and resolutions.
***
**Autonomous but Accountable: Running AI Agents in a High-Stakes Environment**
*Mate Security* is at the forefront of AI agent development, building agents that run incident investigations in Fortune 500 SOCs today.
With build-and-release playbooks being rewritten daily, this session covers some of the key areas of focus that allow Mate's agents to be trusted in production by the world's largest organizations, including:
* Working with thousands of sensitive agent tools: How do we govern them and select the right one while respecting the context window?
* Always improving: How do we govern and evaluate AI agents when prompts and models keep evolving?
***
We look forward to seeing you all in person!