How to Instrument, Govern, and Debug Agents Before They Go Rogue
Details
For this online event, we will welcome Carl Lapierre as our speaker. Carl is a software developer with a decade of experience. He loves exploring the frontiers of technology, with a keen eye on AI and natural language processing in particular.
After a brief introduction of our community, Carl will take a deep dive into how to instrument, govern, and debug agents before they go rogue.
Agents don’t just need prompts; they need supervision. As teams move from experiments to production, they often discover the hard way that generative systems are unpredictable, opaque, and prone to silent failure. Observability isn’t just a nice-to-have: it’s the only way to keep your agents aligned, your users safe, and your systems sane. In this talk, we’ll dive deep into the emerging discipline of LLM observability: what to track, how to track it, and why it matters. We'll compare purpose-built tools like LangFuse, LangSmith, and AgentOps with general observability frameworks like OpenTelemetry, and show how the new GenAI semantic conventions make structured tracing possible at scale.
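To give a flavor of what "structured tracing" means here, the sketch below (not from the talk) builds a span-like record for a single LLM call using attribute names from the OpenTelemetry GenAI semantic conventions. The model name, prompt, and whitespace-based token counting are purely illustrative; a real setup would emit these attributes through an OpenTelemetry SDK and use the provider's reported token usage.

```python
import json
import time
import uuid


def trace_llm_call(model: str, prompt: str, completion: str) -> dict:
    """Build a span-like record for one chat call, using attribute
    names from the OpenTelemetry GenAI semantic conventions."""
    return {
        "name": f"chat {model}",
        "trace_id": uuid.uuid4().hex,  # illustrative; SDKs manage IDs for you
        "start_time_unix_nano": time.time_ns(),
        "attributes": {
            "gen_ai.operation.name": "chat",
            "gen_ai.request.model": model,
            # Whitespace split stands in for real token counts from the API.
            "gen_ai.usage.input_tokens": len(prompt.split()),
            "gen_ai.usage.output_tokens": len(completion.split()),
        },
    }


span = trace_llm_call("gpt-4o", "Summarize this incident report", "Summary: all clear")
print(json.dumps(span["attributes"], indent=2))
```

Because every tool in the talk can consume or emit attributes like these, agreeing on the names is what lets traces from different agents and vendors land in one dashboard.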
About Carl:
Carl is a Lead AI Engineer driving innovation through intelligent systems across healthcare, manufacturing, and mining. With over a decade of experience in software development, he leads high-impact AI initiatives focused on agentic systems, building solutions that enhance decision-making, automation, and adaptability in complex environments. At Osedea, he is currently spearheading next-generation AI projects, including the integration of autonomous agents into manufacturing ERPs.
