
Details

Join us for an Apache Kafka® meetup on Thursday, Nov 6th from 6:00pm in Boston hosted by Labviva!

📍Venue:
Labviva
239 Causeway Street, Suite 500 (5th floor)
The main entrance to 239 Causeway is on Medford Street. You will see the appropriate signage directing you to the entrance. You can take either elevator to the 5th floor. When you get out of the elevator, please turn right as this is the main entrance to Labviva. The doors will be locked but I will be there to greet people and show them around.

🗓 Agenda:

  • 6:00pm: Doors open
  • 6:00pm – 6:30pm: Food/drinks and networking
  • 6:30pm – 6:40pm: Intro and company overview, Zach Sanders, Labviva
  • 6:40pm – 7:20pm: Bill Bejeck, Staff Software Engineer, Confluent
  • 7:20pm – 8:00pm: Additional Q&A and networking

💡 Main Speaker:
Bill Bejeck, Staff Software Engineer, Confluent

Title of Talk:
Unpacking Serialization in Apache Kafka: Down the Rabbit Hole

Abstract:
Picture this: your Kafka application is humming along perfectly in development, but in production, throughput tanks and latency spikes. The culprit? That "simple" serialization choice you made without much thought. What seemed like a minor technical detail just became your biggest bottleneck.

Every Kafka record—whether flowing through KafkaProducer, KafkaConsumer, Streams, or Connect—must be converted to bytes before it crosses a TCP connection. This serialization step occupies a tiny footprint in your code but wields outsized influence over your application's performance. For Kafka Streams stateful operations, this impact multiplies as records serialize and deserialize on every state store access.
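To make the abstract's point concrete, here is a minimal standalone sketch of what a Kafka serializer does for strings. The interface below only mimics Kafka's real `org.apache.kafka.common.serialization.Serializer<T>` contract; the class and topic names are illustrative, and Kafka's built-in `StringSerializer` effectively performs the same UTF-8 encoding shown here.

```java
import java.nio.charset.StandardCharsets;

// Standalone stand-in for Kafka's Serializer contract (the real interface is
// org.apache.kafka.common.serialization.Serializer<T>); names are illustrative.
interface Serializer<T> {
    byte[] serialize(String topic, T data);
}

// Mirrors what Kafka's built-in StringSerializer effectively does: UTF-8 encode.
class SimpleStringSerializer implements Serializer<String> {
    @Override
    public byte[] serialize(String topic, String data) {
        return data == null ? null : data.getBytes(StandardCharsets.UTF_8);
    }
}

public class SerializerSketch {
    public static void main(String[] args) {
        Serializer<String> ser = new SimpleStringSerializer();
        // Every record value must become bytes before it goes over the wire.
        byte[] bytes = ser.serialize("orders", "kafka");
        System.out.println(bytes.length); // 5 UTF-8 bytes for "kafka"
    }
}
```

In a Streams stateful operation this conversion runs on every state store read and write, which is why the cost compounds.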

You could grab a serializer that ships with Kafka and call it done. But depending on your data structure and use patterns, the wrong choice can cost you critical performance. The right choice can transform your application from sluggish to lightning-fast.
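That "grab a serializer and call it done" choice is typically just two configuration lines. A hedged sketch of a producer configuration using Kafka's standard config keys (`localhost:9092` is a placeholder broker, and the serializer classes named are the ones that ship with the `kafka-clients` library):

```java
import java.util.Properties;

public class ProducerConfigSketch {
    public static void main(String[] args) {
        // Minimal producer configuration; "localhost:9092" is a placeholder broker.
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        // The serialization choice the talk is about lives in these two lines:
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
        // These props would then be passed to new KafkaProducer<>(props).
        System.out.println(props.getProperty("value.serializer"));
    }
}
```

Because the choice is this easy to make, it is also easy to make without measuring, which is the trap the talk unpacks.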

This talk dives deep into serialization performance comparisons across different scenarios. We'll explore critical trade-offs: the governance and evolution benefits of Schema Registry versus the raw speed of high-performance serializers. You'll see real benchmarks, understand format internals, and learn exactly when to apply each approach.

Whether you're building low-latency trading systems or high-throughput data pipelines, you'll leave with concrete knowledge to optimize one of Kafka's most impactful—yet overlooked—components. Don't let serialization be your silent performance killer.

Bio:
Bill has been a software engineer for over 18 years. Currently, he is working at Confluent as a Staff Software Engineer. Previously, Bill was an engineer on the Kafka Streams team for three-plus years. Before Confluent, he worked on various ingest applications as a U.S. Government contractor using distributed software such as Apache Kafka, Spark, and Hadoop. Bill has also written a book about Kafka Streams titled "Kafka Streams in Action"; the second edition was released in April 2024 (https://www.manning.com/books/kafka-streams-in-action-second-edition).

***
If you are interested in hosting or speaking at a meetup, please email community@confluent.io

Sponsors

Confluent
Organizers, F&B
