
Details

Join us for an Apache Kafka® x Apache Flink® meetup on Tuesday, September 23rd from 6:30pm at Confluent London HQ!

IMPORTANT:
PLEASE BRING YOUR PHOTO ID FOR SECURITY PURPOSES.
SPACE IS LIMITED. PLEASE ARRIVE ON TIME TO ENSURE ENTRY TO THE EVENT. THANKS :)

🗓 Agenda:

  • 6:30pm – 6:45pm: Food/Drinks and Networking
  • 6:45pm – 7:15pm: Adi Polak, Director of Advocacy and Developer Experience Engineering, Confluent
  • 7:15pm – 7:45pm: Mehreen Tahir, Software Engineer, New Relic
  • 7:45pm – 8:15pm: Tom Scott, Founder & CEO, Streambased
  • 8:15pm – 9:00pm: Q&A and Networking

💡Speaker One:
Adi Polak, Director of Advocacy and Developer Experience Engineering, Confluent

Title of Talk:
Stream All the Things - Patterns of Effective Data Stream Processing

Abstract:
Data streaming is a genuinely difficult problem. Despite 10+ years of attempts to simplify it, teams building real-time data pipelines can spend up to 80% of their time tuning those pipelines or fixing downstream output by handling bad data once it lands in the lake. All we want is a service that will be reliable, handle all kinds of data, connect with all kinds of systems, be easy to manage, and scale up and down as our systems change.
Oh, it should also have super low latency and result in good data. Is it too much to ask?

In this presentation, you’ll learn the basics of data streaming and architecture patterns, such as the dead letter queue (DLQ), used to tackle these challenges. We will then explore how to implement these patterns using Apache Flink and discuss the challenges that real-time AI applications bring to our infrastructure. Difficult problems are difficult, and we offer no silver bullets. Still, we will share pragmatic solutions that have helped many organizations build fast, scalable, and manageable data streaming pipelines.

Bio:
Adi Polak is an experienced software engineer and people manager. For most of her professional life, she dealt with data and machine learning for operations and analytics. As a data practitioner, she developed algorithms to solve real-world problems using machine learning techniques and leveraging expertise in Apache Spark, Kafka, HDFS, and distributed large-scale systems. As a manager, she led teams, and together they embarked on innovative journeys in the ML space and came back with staggering insights and learnings. Adi has taught Spark to thousands of students and is the author of the successful book Scaling Machine Learning with Spark. Earlier this year, she began a new adventure with data streaming, specifically Flink and ML inference, and is hooked.

💡Speaker Two:
Mehreen Tahir, Software Engineer, New Relic

Title of Talk:
Monitoring Kafka-Based Applications Using Distributed Tracing with OpenTelemetry

Abstract:
Leveraging tools like Jaeger and New Relic, we’ll show how to gain a full view of your microservices, even in the face of Apache Kafka’s asynchronous nature. Join us for a live demo with a simple Java Spring Boot app, where we’ll walk through both automatic and manual instrumentation to capture rich telemetry. We’ll also touch on infrastructure-level observability, pulling metrics and traces from Apache Kafka brokers and Apache Flink.

Bio:
Mehreen Tahir is a Software Engineer at New Relic. She specializes in machine learning, data science, and artificial intelligence. Mehreen is passionate about observability and the use of telemetry data to improve application performance. She actively contributes to developer communities and has a keen interest in edge analytics and serverless architecture.

💡Speaker Three:
Tom Scott, Founder & CEO, Streambased

Title of Talk:
Why Kafka + Iceberg Will Define the Next Decade of Data Infrastructure

Abstract:
Data leaders today face a familiar challenge: complex pipelines, duplicated systems, and spiraling infrastructure costs. Standardizing around Kafka for real-time and Iceberg for large-scale analytics has gone some way towards addressing this but still requires separate stacks, leaving teams to stitch them together at high expense and risk.

This talk will explore how Kafka and Iceberg together form a new foundation for data infrastructure, one that unifies streaming and analytics into a single, cost-efficient layer. By standardizing on these open technologies, organizations can reduce data duplication, simplify governance, and unlock both instant insights and long-term value from the same platform.

You will come away with a clear understanding of why this convergence is reshaping the industry, how it lowers operational risk, and the advantages it offers for building durable, future-proof data capabilities.

Bio:
A long-time enthusiast of Kafka and all things data integration, Tom has more than 15 years of experience in finding innovative and efficient ways to store, query, and move data. Tom is currently CEO at Streambased, a company focused on unifying operational and analytical data estates into a single, consistent, and efficient data layer.

***
DISCLAIMER
We don't cater to attendees under the age of 18.
If you want to host or speak at a meetup, please email community@confluent.io

Events in London, GB
Apache Kafka
Big Data
Open Source
Apache Flink
