IN PERSON: Apache Kafka® Meetup Amsterdam - October 2024


Details
Join us for an Apache Kafka® meetup on Thursday, October 3rd from 6:00pm in Amsterdam hosted by ING!
📍Venue:
ING Cedar
Bijlmerdreef 106, 1102 CT Amsterdam, Netherlands
**IMPORTANT: Please fill out this form 48 hours prior to the start of the event on October 3rd.**
🗓 Agenda:
- 6:00pm: Doors open
- 6:00pm – 6:30pm: Food/Drinks and networking
- 6:30pm - 7:30pm: Tim van Baarsen, Software Engineer, ING Bank & Kosta Chuturkov, Software Engineer, ING Bank
- 7:45pm - 8:30pm: Maria Berinde-Tampanariu, Staff Solutions Engineer, Confluent
- 8:30pm - 9:15pm: Drinks/Snacks, Additional Q&A and Networking
💡 Speaker One:
Tim van Baarsen, Software Engineer, ING Bank & Kosta Chuturkov, Software Engineer, ING Bank
Title of Talk:
Evolve your schemas in a better way! A deep dive into Avro schema compatibility and Schema Registry
Abstract:
The only constant in life is change! The same applies to your Kafka events flowing through your streaming applications.
The Confluent Schema Registry allows us to control how schemas can evolve over time without breaking the compatibility of our streaming applications. But when you start with Kafka and (Avro) schemas, this can be pretty overwhelming.
Join Kosta and Tim as they dive into the tricky world of backward and forward compatibility in schema design. During this deep-dive talk, they will answer questions like:
- Which compatibility level should I pick?
- What changes can I make when evolving my schemas?
- What options do I have when I need to introduce a breaking change?
- Should we automatically register schemas from our applications? Or do we need a separate step in our deployment process to promote schemas to higher-level environments?
- What should I promote first: my producer, my consumer, or my schema?
- How do you generate Java classes from your Avro schemas using Maven or Gradle, and how to integrate this into your project(s)?
- How do you build an automated test suite (unit tests) to gain more confidence and verify you are not breaking compatibility, even before deploying a new version of your schema or application?
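For the Maven question above, class generation from Avro schemas is typically wired in with the avro-maven-plugin. A minimal sketch of such a configuration follows; the version number and directory paths are illustrative and should be adapted to your project:

```xml
<plugin>
  <groupId>org.apache.avro</groupId>
  <artifactId>avro-maven-plugin</artifactId>
  <version>1.11.3</version>
  <executions>
    <execution>
      <phase>generate-sources</phase>
      <goals>
        <goal>schema</goal>
      </goals>
      <configuration>
        <!-- Where the .avsc schema files live -->
        <sourceDirectory>${project.basedir}/src/main/avro</sourceDirectory>
        <!-- Where the generated Java classes are written -->
        <outputDirectory>${project.basedir}/target/generated-sources/avro</outputDirectory>
      </configuration>
    </execution>
  </executions>
</plugin>
```

With this in place, `mvn generate-sources` produces Java classes for each record schema, which the build then compiles alongside your own code.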
With live demos, we'll show you how to make schema changes work seamlessly, emphasizing the crucial decisions and drawing on real-life examples, pitfalls, and best practices for promoting schemas on the consumer and producer sides.
Explore the ins and outs of Apache Avro and the Schema Registry with us! Start evolving your schemas in a better way today!
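The backward-compatibility rule at the heart of this talk can be sketched in a few lines. This is a deliberately simplified model, not the real Avro schema-resolution code: it captures only the rule that a consumer on a newer (reader) schema can read old data if every field it expects either existed in the writer schema or carries a default value:

```java
import java.util.*;

public class SchemaCompat {
    // A field in the reader schema: its name and whether it declares a default.
    public record Field(String name, boolean hasDefault) {}

    // Simplified backward-compatibility check: every reader field must either
    // be present in the writer schema or have a default value to fall back on.
    public static boolean isBackwardCompatible(List<Field> readerFields,
                                               Set<String> writerFieldNames) {
        for (Field f : readerFields) {
            if (!writerFieldNames.contains(f.name()) && !f.hasDefault()) {
                return false; // reader needs a value the writer never produced
            }
        }
        return true;
    }

    public static void main(String[] args) {
        Set<String> v1 = Set.of("id", "amount");

        // v2 adds "currency" WITH a default: old records remain readable.
        List<Field> v2WithDefault = List.of(
                new Field("id", false),
                new Field("amount", false),
                new Field("currency", true));

        // v2 adds "currency" WITHOUT a default: a breaking change.
        List<Field> v2NoDefault = List.of(
                new Field("id", false),
                new Field("amount", false),
                new Field("currency", false));

        System.out.println(isBackwardCompatible(v2WithDefault, v1)); // true
        System.out.println(isBackwardCompatible(v2NoDefault, v1));   // false
    }
}
```

In real projects this check is done for you: the Avro Java library ships a `SchemaCompatibility` utility, and Confluent Schema Registry enforces the configured compatibility level when a new schema version is registered.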
Bios:
Tim van Baarsen is a creative software developer at ING Bank in the Netherlands and has been in the software development business for almost 15 years. He is a strong proponent of open-source technology and has been a big fan of Spring Framework and Apache Kafka since the early versions. His interests lie in building scalable distributed systems. Tim enjoys speaking about his passion for the Spring ecosystem & Apache Kafka at international conferences, internal ING events, and meetups within the Netherlands.
Kosta Chuturkov is an accomplished software engineer with a passion for building high-performance and scalable applications. With over 8 years of experience in the industry, he has become an expert in developing multi-region systems that can handle high throughput workloads.
Kosta joined the ING Eventbus Kafka team in 2020, and since then, he has been a driving force behind their successful implementation of Kafka in the bank. His work has contributed to ING's ability to process millions of transactions every day with low latency and high availability. Kosta's focus on quality and best practices has also made him an invaluable resource for coaching and mentoring other developers.
Beyond his technical expertise, Kosta is deeply committed to agile methodologies and is passionate about improving the way development teams work. He has successfully led many teams in adopting agile practices and has helped them to become more efficient, productive, and collaborative.
💡 Speaker Two:
Maria Berinde-Tampanariu, Staff Solutions Engineer, Confluent
Title of Talk:
Which Data Format Should I Use in Apache Kafka?
Abstract:
You’re just starting your data streaming journey, or you are extending it to include Schema Registry, and you have learned that multiple serialization formats are supported. How do you know which one(s) to choose? What if the choice is not yours? If you are offering the data streaming service as a platform to your internal customers, what kind of guidance should you offer them when choosing a data format? Is everything that is possible also advisable? What implications does mixing data formats have? How do the formats differ from each other in terms of limitations, evolution, and linking options? How does this affect consumers?
In this talk, I will share the questions, answers, and tradeoffs identified while working with customers implementing Confluent Schema Registry.
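As one concrete point of reference for the questions above: Confluent Schema Registry supports Avro, JSON Schema, and Protobuf, and Avro is the format most commonly seen with it. An Avro record schema for a hypothetical Payment event might look like this (all names and fields are illustrative):

```json
{
  "type": "record",
  "name": "Payment",
  "namespace": "com.example.payments",
  "fields": [
    {"name": "id", "type": "string"},
    {"name": "amount", "type": "double"},
    {"name": "currency", "type": "string", "default": "EUR"}
  ]
}
```

The `default` on `currency` is what makes adding that field a backward-compatible change; how the equivalent evolution rules play out in JSON Schema and Protobuf differs, which is part of the tradeoff discussion this talk covers.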
