IN PERSON! Apache Kafka® x Apache Flink® x Apache Iceberg x Grafana
Details
Join us for an Apache Kafka® x Apache Flink® x Apache Iceberg x Grafana meetup on Thursday, Oct 2nd, starting at 5:30pm, hosted at Improving!
📍Venue:
Improving
1515 Central Ave NE, Suite 100, Minneapolis, MN 55413
🗓 Agenda:
- 5:30pm: Doors open
- 5:30pm – 6:00pm: Welcome, food & drinks, networking
- 6:00pm – 7:00pm: Talk: Ryan Belgrave, Staff Software Engineer I, WarpStream
- 7:00pm – 8:00pm: Additional Q&A and networking
💡 Speaker:
Ryan Belgrave, Staff Software Engineer I, WarpStream
Title of Talk:
From Ticker Tape to Trendlines: A Stream Processing Journey into Market Dynamics
Abstract:
Building a platform to analyze real-time market data can be a complex undertaking. This session details an end-to-end project for ingesting, processing, and visualizing data from an active and entirely digital marketplace. We'll focus on moving beyond simple metrics to uncover deeper economic trends and behaviors.
Here’s what I’ll be covering:
- Tapping the Data Firehose: I’ll explain how to use public APIs (the source of which might surprise you) to ingest a stream of real-time trading events, as well as how to effectively backfill years of historical data to get a complete market picture.
- Building a Pipeline with Bento and WarpStream: I’ll demonstrate how to use Bento to seamlessly capture this trading data and publish it into WarpStream, a diskless, Kafka-compatible streaming platform designed for the cloud.
- From Streams to Lakehouse with Tableflow and Iceberg: I'll showcase how WarpStream's Tableflow feature automatically materializes our streaming data directly into an Apache Iceberg data lakehouse.
- Visualizing the Market with Grafana: The processed data is then brought to life in Grafana. I will show how to build various dashboards to track price histories, perform complex aggregations, and create custom "market indexes" to gauge the overall health of the economy.
- The Benefits of a Modern Streaming Lakehouse: I’ll explain the technical reasons for choosing this specific stack and how its storage-based architecture helped build a powerful analytics system while minimizing complexity and cost.
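To give a feel for the ingest step described above, here is a minimal Bento config sketch that polls an HTTP API and publishes each event to a Kafka-compatible WarpStream topic. The endpoint URL, broker address, topic name, and polling interval are placeholders for illustration, not the actual values used in the talk:

```yaml
# Hypothetical Bento pipeline: poll a public trading API on an
# interval and publish each response to a Kafka-compatible topic.
input:
  http_client:
    url: https://example.com/api/trades   # placeholder endpoint
    verb: GET
    rate_limit: poll_limit                # throttle polling via the resource below

rate_limit_resources:
  - label: poll_limit
    local:
      count: 1
      interval: 5s

output:
  kafka_franz:
    seed_brokers: [ "warpstream-broker:9092" ]  # placeholder broker address
    topic: market-trades                        # placeholder topic name
```

Because WarpStream speaks the Kafka protocol, Bento's standard Kafka output works unchanged; only the broker address points at the WarpStream agent.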
If you're interested in stream processing, data engineering, analytics, or are just curious to see what it takes to analyze a complex and fascinating market, this session is not one to miss!
Bio:
Ryan Belgrave is a Sr. Principal Engineer and has been working in the Distributed Data Platforms space at Optum since 2018. Before joining Optum, he worked at Target on their Public Cloud team, building the cloud application platform that runs Target.com. Ryan specializes in all things containers, Kubernetes, and cloud, and runs a home lab with various CNCF software. While Ryan has only been officially working in the industry since 2016, he has been learning and working with Linux and cloud technologies since 2006.
***
DISCLAIMER
We don't cater to anyone under the age of 21.
If you are interested in providing a talk/hosting a future meetup, please email community@confluent.io