December Berlin Prometheus Meetup

Hey everyone!

Let's meet on Dec 12th for a last Prometheus Meetup in 2019!

18:30 - 19:00 - Arrival & networking with food and drinks
19:05 - 19:15 - Welcome and introduction
19:15 - 19:45 - Talk #1 - Prometheus metrics from host-local services: a case-study from Fedora CoreOS
19:50 - 20:00 - (Lightning) Talk #2 - Ephemeral Prometheus with Thanos
20:00 - 20:15 - BREAK
20:15 - 20:45 - Talk #3 - Cortex: Evolving to handle Trillions of samples a day


Talk #1: Prometheus metrics from host-local services: a case-study from Fedora CoreOS

This talk will show how host-local services can benefit from instrumentation and Prometheus metrics, using Fedora CoreOS auto-updates logic as a case-study. In particular this will cover how to instrument Rust services, how to expose Prometheus metrics without requiring a TCP port or an HTTP stack, and how to bridge metrics from local services to the cluster via a "local_exporter".
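To give a flavor of the approach before the talk, here is a minimal sketch in plain Rust of serving Prometheus text-format metrics over a Unix socket, with no TCP port and no HTTP stack. The metric name and socket path are made up for illustration; a real service would likely build on an instrumentation library rather than hand-formatting the exposition format.

```rust
// Hedged sketch: expose a counter in the Prometheus text exposition format
// over a Unix socket. Metric name and socket path are hypothetical.
use std::io::{Read, Write};
use std::os::unix::net::{UnixListener, UnixStream};
use std::sync::atomic::{AtomicU64, Ordering};

static UPDATES_APPLIED: AtomicU64 = AtomicU64::new(0);

/// Render the counter in the Prometheus text exposition format.
fn render_metrics() -> String {
    format!(
        "# HELP updates_applied_total Updates applied by this host-local service.\n\
         # TYPE updates_applied_total counter\n\
         updates_applied_total {}\n",
        UPDATES_APPLIED.load(Ordering::Relaxed)
    )
}

fn main() -> std::io::Result<()> {
    UPDATES_APPLIED.fetch_add(1, Ordering::Relaxed);

    let path = "/tmp/metrics.sock"; // hypothetical path
    let _ = std::fs::remove_file(path);
    let listener = UnixListener::bind(path)?;

    // Serve a single "scrape" from a background thread...
    let server = std::thread::spawn(move || {
        let (mut conn, _) = listener.accept().unwrap();
        conn.write_all(render_metrics().as_bytes()).unwrap();
    });

    // ...and read it back the way a bridge such as a "local_exporter"
    // would, before re-exposing it to the cluster over HTTP.
    let mut scrape = String::new();
    UnixStream::connect(path)?.read_to_string(&mut scrape)?;
    server.join().unwrap();
    std::fs::remove_file(path)?;
    print!("{}", scrape);
    Ok(())
}
```

A bridge process on the same host can then scrape the socket and re-expose the metrics to the cluster, which is the pattern the talk's "local_exporter" name suggests.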

Luca likes Free Software, well-designed software, and memory-safe programming languages. He works as a developer at Red Hat, focusing on CoreOS-style immutable operating systems.


Talk #2: Ephemeral Prometheus with Thanos

As part of Continuous Integration it makes sense to run benchmarks or other stress tests for which performance observability is key. These benchmarks or tests are usually triggered automatically by CI/CD pipelines and run many times a day. As the intention is to benchmark/test new builds (reflecting the changes made to the code repository), the typical pattern is for the CI/CD pipeline to spin up an instance of the build, run the tests, and then tear down that instance again. In other words, the deployments are ephemeral and only live for the duration of the run itself. Keeping performance data for these runs is a key requirement, both for bug triage and as history to understand the evolution of performance over time.
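One common way to make such data outlive the ephemeral instance (a hedged sketch of the general pattern, not necessarily the talk's exact setup) is to run a Thanos sidecar next to the short-lived Prometheus and have it upload TSDB blocks to object storage before teardown. The bucket name and endpoint below are placeholders:

```yaml
# bucket.yaml - Thanos object storage configuration (placeholder values).
type: S3
config:
  bucket: ci-benchmark-metrics   # hypothetical bucket name
  endpoint: s3.example.com
```

The sidecar is started alongside Prometheus, e.g. `thanos sidecar --tsdb.path /prometheus --prometheus.url http://localhost:9090 --objstore.config-file bucket.yaml`, after which Thanos Store and Query components can read the uploaded blocks long after the instance itself is gone.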

Christian Dickmann loves building systems, and catering to developers, but somehow ended up in Enterprise IT. He spent the last 8 years building/leading vSAN, VMware's Distributed Enterprise Storage product. He also founded and built the R&D cloud used for Dev & Test by thousands of engineers across VMware, and is a co-author of the CI systems built on top of it. Passionate about Developer Experience, scalable Cloud, highly available systems and storage, he recently decided to become more active in the Cloud Native community.


Talk #3: Cortex: Evolving to handle Trillions of samples a day

Prometheus is a popular open-source monitoring system that is easy to use and scales well. We will start with a quick introduction to Prometheus and then move on to the problem of building a horizontally scalable, distributed version of Prometheus to handle _infinite_ scale: Cortex. We will look at the architecture and see how it is designed with decoupled ingest and query paths that can be scaled independently.

We will start with the original architecture of Cortex, see how it evolved and the bottlenecks and issues we hit that inspired the changes. We will see what the community is working on now and what the future holds for Cortex!

Goutham Veeramachaneni is a developer from India who started his journey as an infra intern at a large company where he worked on deploying Prometheus. After the initial encounter, he started contributing to Prometheus and interned with CoreOS, working on Prometheus’ new storage engine.
He is now an active contributor to the Prometheus ecosystem and a maintainer of TSDB, the engine behind Prometheus 2.0. He works at Grafana Labs on Cortex and open-source observability tools.
When not hacking away, he is on his bike adding miles and hurting his bum.


We're always looking for speakers. Please propose talks!

Follow us on Twitter!

Hope to see you all there!

Matthias & the Berlin Prometheus team