If you’re using Datadog, New Relic, Splunk, or a similar observability platform and you’re struggling with rising costs or unclear insights, this session is a must.
As AI workloads increase, so do the volume and complexity of your telemetry data. That’s why observability pipelines need to be just as intelligent and scalable.
### Less Noise, More Signal: Hacking Your OpenTelemetry Pipeline
What if you could fine-tune your OpenTelemetry pipeline to reduce noise and improve insights? The real magic happens when you take control of your data flow.
In this session, we’ll dive into creating custom OpenTelemetry pipelines using the OpenTelemetry Collector. You’ll learn how to wire up receivers, processors, and exporters to build flexible, efficient pipelines tailored to your observability goals. We’ll explore common patterns like filtering noisy data, routing telemetry to multiple backends, and enriching spans with contextual metadata. Through a hands-on demo, you’ll see how easy it is to move from a default setup to fully customized pipelines.
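To give a flavor of the kind of configuration we’ll build, here is a minimal Collector sketch along the lines the session covers: it receives OTLP traces, drops noisy health-check spans with the filter processor, enriches the rest with deployment metadata via the attributes processor, and fans out to a backend plus the debug exporter. The endpoint, pipeline names, and the `/healthz` route are illustrative placeholders, not values from the session itself.

```yaml
receivers:
  otlp:
    protocols:
      grpc:
      http:

processors:
  # Drop noisy health-check spans before they reach any backend (route is a placeholder)
  filter/healthchecks:
    error_mode: ignore
    traces:
      span:
        - 'attributes["http.route"] == "/healthz"'
  # Enrich every span with deployment context
  attributes/enrich:
    actions:
      - key: deployment.environment
        value: production
        action: upsert
  # Batch telemetry before export to reduce outbound requests
  batch:

exporters:
  otlphttp/primary:
    endpoint: https://otel.example.com:4318   # hypothetical backend endpoint
  debug:

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [filter/healthchecks, attributes/enrich, batch]
      exporters: [otlphttp/primary, debug]
```

From a starting point like this, adding another exporter or a routing rule is a small, incremental change rather than a rebuild of your telemetry stack.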
If you’re ready to go beyond the defaults and make OpenTelemetry work for you, this session is your starting point.
### Bonus Roundtable: Designing AI-Native Products
Stick around after the main session for a short roundtable where we’ll shift gears and discuss:
- How to define AI-native use cases that ship
- Common product pitfalls when working with AI
- Collaboration patterns between PMs and platform teams