IN-PERSON: Apache Kafka® Meetup Kruibeke, Belgium - Oct 2024


Details
Join us for an Apache Kafka® meetup on Wednesday, October 9th from 6:00pm in Kruibeke, Belgium hosted by Cymo!
📍Venue:
Botaca, Rupelmondestraat 61, 9150 Kruibeke
***
🗓 Agenda:
- 6:00pm: Doors open
- 6:00pm - 6:30pm: Food, Drinks and Networking
- 6:30pm - 7:00pm: Benjamin Barrett, Senior Software Engineer, Cymo
- 7:00pm - 7:45pm: Danica Fine, Staff Developer Advocate, Confluent
- 7:45pm - 8:30pm: Additional Q&A and Networking
***
💡 Speaker One:
Benjamin Barrett, Senior Software Engineer, Cymo
Title of Talk:
KStreams on the RocksDB
Abstract:
Most of us are aware that state stores in Kafka Streams are backed by RocksDB. In this presentation, I will attempt to demystify the following aspects of RocksDB:
- How the database is structured, by walking through its read and write operations
- Where memory is allocated, and how much to expect
Once we understand these concepts better, we will go over how Kafka Streams relates to RocksDB and how we can customize its settings to better fit our applications.
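For context ahead of the talk, Kafka Streams exposes a hook for exactly this kind of customization: the `rocksdb.config.setter` config, which points at a class implementing `RocksDBConfigSetter`. A minimal sketch (the class name and byte sizes below are illustrative, not recommendations) might look like:

```java
import java.util.Map;
import org.apache.kafka.streams.state.RocksDBConfigSetter;
import org.rocksdb.BlockBasedTableConfig;
import org.rocksdb.Options;

// Illustrative RocksDB tuning hook for Kafka Streams state stores.
// Register it via StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG.
public class CustomRocksDBConfig implements RocksDBConfigSetter {

    @Override
    public void setConfig(final String storeName, final Options options,
                          final Map<String, Object> configs) {
        // Cap the size of each memtable (write buffer) for this store.
        options.setWriteBufferSize(16 * 1024 * 1024L);

        final BlockBasedTableConfig tableConfig =
                (BlockBasedTableConfig) options.tableFormatConfig();
        // Shrink the per-store block cache that serves reads.
        tableConfig.setBlockCacheSize(8 * 1024 * 1024L);
        options.setTableFormatConfig(tableConfig);
    }

    @Override
    public void close(final String storeName, final Options options) {
        // Close any RocksDB objects you allocate in setConfig; none here.
    }
}
```

Note that the setter is invoked once per state store, so any memory you assign is multiplied by the number of stores (and standby replicas) in the application.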
Bio:
TBC
***
💡 Speaker Two:
Danica Fine, Staff Developer Advocate, Confluent
Title of Talk:
The Kafka Consumer: An Unexpected Journey of Data Consumption
Abstract:
Once your data is stored on your Apache Kafka® cluster, the next step is to consume that data and do something interesting with it. Enter: Kafka Consumers. We all know how to set up a Kafka Consumer to poll data… but do you know how a consumer fetches the data from the cluster? Let’s find out!
Every call to consumer.poll() is translated into a low-level request which is sent along to the brokers for fulfillment. In this session, we’ll join Kafka Consumers as they embark on their epic adventure to consume your data. First, see how these clients band together in a single fellowship and follow the guidance of their consumer group coordinator. Then, follow a request from an initial call to poll(), all the way to disk, and back to the client with your data via the broker’s final response. Along the way, we’ll explore a number of client and broker configurations that affect how these requests are handled and discuss the metrics that you can monitor to keep track of every stage of the consumer life cycle.
By the end of this session, you’ll know the ins and outs of your Kafka Consumer requests, making your next debugging or performance analysis session a breeze.
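A few of the client-side settings the talk alludes to can be seen in any consumer properties file; the values shown below are the stock defaults, listed here only as a hedged illustration of the knobs involved:

```
# Consumer configs that shape the fetch requests behind poll():
fetch.min.bytes=1                  # broker replies as soon as any data is available...
fetch.max.wait.ms=500              # ...or after this long, whichever comes first
max.partition.fetch.bytes=1048576  # cap on data returned per partition per fetch
max.poll.records=500               # max records handed back by a single poll() call
```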
Bio:
Danica Fine is a Staff Developer Advocate at Confluent where she helps others get the most out of their event-driven pipelines. Prior to this role, she served as a software engineer on a streaming infrastructure team at Bloomberg where she predominantly worked on Kafka Streams- and Kafka Connect-based projects. Her expertise in streaming systems has taken her to a number of conferences and speaking engagements over the years, giving her the chance to express her love of Kafka to anyone who will listen. Danica is committed to increasing diversity in the technical community and actively serves as a mentor to a number of women in tech. She can be found on Twitter, tweeting about tech, plants, and baking @TheDanicaFine.
***
DISCLAIMER
BY ATTENDING THIS EVENT IN PERSON, you acknowledge the risk of possible exposure to, and illness from, infectious diseases including COVID-19, and accept responsibility should this occur.
NOTE: We are unable to cater for any attendees under the age of 18.
***
If you would like to speak at or host our next event, please let us know: community@confluent.io
