About us

This group is for engineers, builders, and AI teams who care about designing, testing, and shipping reliable voice and agentic systems at scale.

We focus on practical methods for simulating and evaluating voice agents before and after deployment — uncovering reliability gaps, measuring reasoning accuracy, validating tool and function calls, tracking latency, stress-testing failure handling, and enforcing guardrails across complex, multi-step workflows.

Members explore how modern teams:

Simulate thousands of realistic voice and text interactions

Continuously evaluate reasoning and tool use across workflows

Generate synthetic test scenarios beyond handcrafted prompts

Measure agent reliability with actionable performance metrics

Monitor production behavior with structured observability

If you're building AI agents, working on agent reliability, evaluation frameworks, guardrails, red teaming, or production monitoring — this community is for you.

We host workshops, live demos, hackathons, and discussions focused on real-world agent testing and performance engineering.
