About us

We focus on practical challenges in building reliable LLM applications. Our sessions cover hallucination detection, agent quality, domain-specific benchmarking, and AI observability.

Topics include:

  • Hallucination patterns in production systems
  • Testing and evaluating LLM outputs
  • Domain-specific AI benchmarks
  • AI application monitoring and tracing
  • Risk management for high-stakes AI deployments
  • Compliance requirements for AI systems

Attendees are typically AI engineers, ML researchers, and technical leaders working on LLM applications in regulated industries like healthcare, finance, and legal tech.

All skill levels welcome.

Upcoming events

No upcoming events

Organizers

Miriam

Members

13