IN-PERSON: Apache Kafka® Meetup - Mar 2026
Details
Join us for an Apache Kafka® meetup on Thursday, March 5th at 6:00pm, hosted by Salesforce!
📍 Venue:
Salesforce
333 Seymour St #700, Vancouver, BC V6B 5A7
Social Lounge
IMPORTANT: Please arrive on time so that we can check you in. Many thanks!
🗓 Agenda:
- 6:00pm - 6:30pm: Doors Open, Networking, Pizza and Drinks
- 6:30pm - 7:00pm: Yaroslav Tkachenko, Founder, Irontools
- 7:00pm - 7:30pm: Katie Forbes & Luyi Xiao, Software Engineers, Confluent
- 7:30pm - 8:30pm: More networking, Q&A
💡 Speaker One:
Yaroslav Tkachenko, Founder, Irontools
Talk:
Making Kafka Serialization Fast Again: Stop Trusting Defaults
Abstract:
Serialization is often the slowest part of your data streaming stack, especially in systems with a lot of components and moving parts.
Using binary formats like Avro or Protobuf is a great improvement over JSON, but the commonly used serialization libraries are not always the fastest option.
Last year, I set out to build the fastest Kafka deserializer for Apache Flink data sources, targeting the JSON and Avro formats. Along the way, I learned about two key techniques for optimizing performance in data systems: vectorization and code specialization. I'll share how these techniques can be applied in practice to serializers and deserializers, and they apply to any Java Kafka client!
Bio:
Yaroslav Tkachenko is a Software Engineer, Consultant, and Advisor specializing in Data Streaming & Data-Intensive Applications. Currently, Yaroslav is the Founder of Irontools, building tooling for Apache Flink and consulting for companies in the Data Streaming space.
Previously, Yaroslav was a tech lead at Shopify, Activision, and several startups.
💡 Speaker Two:
Katie Forbes & Luyi Xiao, Software Engineers, Confluent
Talk:
Orchestrating Global Kafka & Disaster Recovery at Scale
Abstract:
Traditional approaches to replicating data between Kafka clusters rely on external clients like MirrorMaker 2 or Replicator. Cluster Linking takes a fundamentally different approach: it integrates replication directly into the Kafka broker itself, treating the destination cluster as a "special follower" that leverages the native Leader/Follower protocol. The result: mirror topics that preserve offsets exactly, enabling clients to switch between clusters without reconfiguration.
In this talk, we'll start with the what and why of Cluster Linking, then take a deep dive into its flagship use case: Disaster Recovery. We'll walk through the full DR lifecycle - Failover, Reverse, and Restore - and peek behind the curtain at the internal mechanics that make it work, including the controller's role in fetching metadata from the source cluster and the background tasks that keep everything in sync without blocking the data plane. We'll close with an honest look at current limitations, including the lack of Exactly-Once Semantics across links and the need for client-side bootstrap reconfiguration after failover, along with a glimpse at what's on the roadmap.
Whether you're evaluating Cluster Linking for multi-cloud, hybrid, or geo-distributed architectures, this talk will give you both the conceptual foundation and the technical depth to understand how it works under the hood.
***
DISCLAIMER
NOTE: We are unable to accommodate attendees under the age of 18.
If you would like to host or speak at a meetup, please email community@confluent.io
