Details

Hello everyone! Join us for an Apache Kafka® x Apache Flink® meetup on Dec 11th from 5:30PM, hosted by Synapsewerx in Melbourne!

The address, agenda, and speaker information can be found below. See you there!

📍Venue:
Tank Stream Labs, Level 5 & 6/440 Collins St, Melbourne VIC 3000

***
🗓 Agenda:

  • 5:30pm: Doors open
  • 5:30pm - 6:00pm: Pizza, Drinks, and Networking
  • 6:00pm - 6:30pm: Bhargav Kosaraju, Practice Director - Data and Streaming, Synapsewerx
  • 6:30pm - 7:00pm: Stephen Ermann, Senior Customer Success Technical Architect, Confluent
  • 7:00pm - 7:30pm: Additional Q&A & Networking

***
💡 Speaker:
Bhargav Kosaraju, Practice Director - Data and Streaming, Synapsewerx

Bio:
Bhargav possesses significant experience in distributed technologies, including Spark, Kafka, Flink, Hadoop, and Kubernetes, and he is skilled in developing tools and systems applications across Big Data and Data Warehouse platforms. His professional background features a strong emphasis on Scala and Java programming, especially in developing Apache Spark applications integrated with traditional Big Data tools. Bhargav has implemented Spark on Kubernetes for large batch processing, built cloud data warehouses using various technologies, and configured and optimised Hadoop and Kafka clusters across the APAC region. His expertise extends to distributed computing, infrastructure automation, CI/CD pipeline automation, and security in the distributed computing space for both batch and streaming use cases.

Talk:
Dynamic Table Sync in Apache Flink: Evolving Schemas from Kafka to the Data Lake

Abstract:
In this session, we’ll explore how Apache Flink’s Table API can seamlessly synchronize streaming data from Kafka into a data lake while handling evolving schemas via a schema registry.

We’ll walk through how Flink integrates with Kafka’s schema evolution mechanisms, manages schema compatibility, and performs continuous table synchronization with minimal operational overhead.
The talk will include a live demo showcasing the full workflow—reading Kafka messages with evolving schemas, applying Flink’s dynamic table processing, and writing the results to a data lake—supported by runnable code to illustrate the end-to-end implementation.
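As a taste of the schema-evolution side of the talk: the sketch below is not Flink or registry code, just a minimal pure-Python illustration (all names are hypothetical) of the BACKWARD compatibility rule a schema registry typically enforces when a Kafka topic's schema evolves — a new reader schema may add a field only if it carries a default, so old records remain decodable.

```python
# Minimal, illustrative sketch of a BACKWARD schema-compatibility check.
# Not Flink or Schema Registry code; field specs are simplified Avro-like dicts.

def is_backward_compatible(old_fields: dict, new_fields: dict) -> bool:
    """old_fields/new_fields map field name -> {"type": ..., optional "default"}."""
    for name, spec in new_fields.items():
        if name not in old_fields:
            # A field the old writer never produced must have a default,
            # or the new reader cannot decode records written under v1.
            if "default" not in spec:
                return False
        elif old_fields[name]["type"] != spec["type"]:
            # A type change would break decoding of existing records.
            return False
    return True

v1 = {"id": {"type": "long"}, "name": {"type": "string"}}
v2_ok = {**v1, "email": {"type": "string", "default": ""}}  # added with default
v2_bad = {**v1, "email": {"type": "string"}}                # added, no default

print(is_backward_compatible(v1, v2_ok))   # True
print(is_backward_compatible(v1, v2_bad))  # False
```

In a real pipeline this check happens inside the registry on schema registration; Flink's table connectors then resolve the reader/writer schemas per record.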

---
💡 Speaker:
Stephen Ermann, Senior Customer Success Technical Architect, Confluent

Bio:
Stephen is a Senior Customer Success Technical Architect at Confluent, where he sees his role as the proactive side of support: helping clients prevent issues rather than solving them. Before that, he worked in the infrastructure and data fields at Snowflake, Red Hat, Tableau, Dell, Microsoft, and a Swiss private bank.

Talk:
Why timeouts can hurt your throughput, how to handle slow consumers, and when to use Queues for Kafka

Abstract:
When troubleshooting performance issues, we can often trace them back to the various timeout settings on the producer and consumer side. Adjusting them will allow you to run your platform as efficiently as possible. But if your architecture includes slow consumers, you might want to consider another tool: the newly released Queues for Kafka (KIP-932).
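A small preview of the slow-consumer problem the talk addresses: the sketch below is plain arithmetic, not Kafka client code. It illustrates why a slow consumer can be ejected from its group — if processing one polled batch takes longer than `max.poll.interval.ms`, the client fails to call poll() in time and the broker triggers a rebalance. The numbers are illustrative, based on Kafka's default settings.

```python
# Illustrative arithmetic only, not Kafka client code: does one polled batch
# take longer to process than max.poll.interval.ms allows?

def exceeds_poll_interval(records_per_poll: int,
                          ms_per_record: float,
                          max_poll_interval_ms: int = 300_000) -> bool:
    """True if processing one batch overruns the allowed poll interval,
    which would cause the consumer to be removed from its group."""
    return records_per_poll * ms_per_record > max_poll_interval_ms

# 500 records (the default max.poll.records) at 200 ms each = 100 s: fine.
print(exceeds_poll_interval(500, 200))   # False
# The same batch at 700 ms each = 350 s: overruns the 300 s default.
print(exceeds_poll_interval(500, 700))   # True
```

Queues for Kafka (KIP-932) attacks the same problem differently, letting multiple consumers in a share group pull records from the same partition so one slow member no longer stalls the whole partition.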

***
DISCLAIMER
BY ATTENDING THIS EVENT IN PERSON, you acknowledge that risk includes possible exposure to and illness from infectious diseases including COVID-19, and accept responsibility for this, if it occurs.
NOTE: We are unable to cater for any attendees under the age of 18.

***
If you would like to speak or host our next event please let us know! community@confluent.io
