IN PERSON! Apache Kafka® x Apache Flink® Meetup (April 2024)


Details
Hello everyone! Join us for an Apache Kafka® x Apache Flink® meetup on April 17th at 5:30pm, hosted by Synapsewerx in Melbourne!
The address, agenda, and speaker information can be found below. See you there!
📍Venue:
Synapsewerx
Level 18, 31 Queen St, Melbourne VIC 3000
***
🗓 Agenda:
- 5:30pm: Doors open
- 5:30pm - 6:00pm: Food, Drinks, and Networking
- 6:00pm - 6:30pm: Bhargav Kosaraju, Senior Solution Architect
- 6:30pm - 7:00pm: Prerna Tiwari, Senior Solution Engineer, Confluent & Stephen Ermann, Senior Customer Success Technical Architect, Confluent
- 7:00pm - 7:30pm: Additional Q&A & Networking
***
💡 Speaker:
Bhargav Kosaraju, Senior Solution Architect
Bio:
Bhargav possesses significant experience in distributed technologies, including Spark, Kafka, Flink, Hadoop, and Kubernetes, and he is skilled in developing tools and systems applications across Big Data and Data Warehouse platforms. His professional background features a strong emphasis on Scala and Java programming, especially in developing Apache Spark applications integrated with traditional Big Data tools. Bhargav has implemented Spark on Kubernetes for large batch processing, built cloud data warehouses using various technologies, and configured and optimised Hadoop and Kafka clusters across the APAC region. His expertise extends to distributed computing, infrastructure automation, CI/CD pipeline automation, and security in the distributed computing space for both batch and streaming use cases.
Talk:
Streamlining Insights: Mastering Real-Time Data with Kafka and Flink
Abstract:
In this session, we will discover how Kafka can seamlessly capture and manage massive streams of data in real time. Learn how Kafka's robust architecture enables high-throughput, scalable, and reliable data pipelines, forming the backbone of your data-in-motion ecosystem.
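To make this concrete, here is a minimal sketch of a Kafka producer in Java. The broker address, topic name, key, and payload are all invented for illustration and are not details from the talk:
```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class EventProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Hypothetical local broker; point this at your own cluster.
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        // try-with-resources flushes and closes the producer on exit.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // "orders" is an illustrative topic; the key routes records to partitions.
            producer.send(new ProducerRecord<>("orders", "order-42", "{\"amount\": 19.99}"));
        }
    }
}
```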
We'll then transition to the power of Apache Flink. Witness firsthand how Flink's stateful computations and time-based windowing can transform raw data streams into meaningful insights. We'll demonstrate Flink's power in complex event processing, providing real-time analytics that are crucial for timely decision making.
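As a rough illustration of the stateful, time-windowed processing described above, the sketch below sums values per key over ten-second tumbling windows with the Flink DataStream API; the sample events are invented for the example:
```java
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

public class WindowedSum {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Invented sample events: (key, value) pairs standing in for a real stream.
        // NOTE: a tiny bounded sample may finish before a processing-time window
        // fires; a real pipeline would consume an unbounded source such as Kafka.
        env.fromElements(Tuple2.of("sensor-1", 3), Tuple2.of("sensor-2", 7), Tuple2.of("sensor-1", 5))
           .keyBy(t -> t.f0)                                           // partition state per key
           .window(TumblingProcessingTimeWindows.of(Time.seconds(10))) // 10s tumbling windows
           .sum(1)                                                     // stateful per-window aggregation
           .print();

        env.execute("windowed-sum");
    }
}
```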
Our session includes a live demo where we integrate Kafka and Flink to build an end-to-end streaming application. We'll simulate a real-world scenario, showcasing data ingestion, transformation and aggregation.
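The integration itself might look something like the following sketch, which reads a Kafka topic into a Flink stream via the Flink Kafka connector; the broker address, topic, and group id are placeholders, not details of the live demo:
```java
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class KafkaToFlink {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Placeholder broker, topic, and group id.
        KafkaSource<String> source = KafkaSource.<String>builder()
            .setBootstrapServers("localhost:9092")
            .setTopics("orders")
            .setGroupId("meetup-demo")
            .setStartingOffsets(OffsetsInitializer.earliest())
            .setValueOnlyDeserializer(new SimpleStringSchema())
            .build();

        // Transformation and aggregation steps would replace print() in a real job.
        env.fromSource(source, WatermarkStrategy.noWatermarks(), "kafka-orders")
           .print();

        env.execute("kafka-to-flink");
    }
}
```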
------
💡 Speakers:
- Prerna Tiwari, Senior Solution Engineer, Confluent
- Stephen Ermann, Senior Customer Success Technical Architect, Confluent
Bio:
Prerna brings 15 years of experience in the software industry, working across various IT consulting and solutions architecture roles. She has gained extensive experience on digital transformation, multi-cloud, and application modernisation projects with financial, banking, supply chain, and government organisations in Australia and the Asia Pacific.
Stephen is a Senior Customer Success Technical Architect at Confluent, where he helps customers be more effective in how and when they use Kafka and Confluent technologies. Before that, he worked in the infrastructure and data fields at Snowflake, Red Hat, Tableau, Dell, and a Swiss private bank.
Talk:
Data Products, Data Contracts and Change Data Capture
Abstract:
Change Data Capture (CDC) is a method that connects database tables to data streams. However, CDC directly exposes the database's internal data model to downstream consumers, creating tight coupling and the potential for significant issues from even minor changes in the data model.
To address these drawbacks, we suggest evolving CDC with first-class data products enforced by data contracts. These data products allow for the creation of reliable, near-real-time event streams that are decoupled from internal data models, thus isolating systems.
We will discuss the concept of data products, highlighting their components such as schemas, metadata, and dedicated ownership. We will explain how we envision streaming data products and how they can be built using Apache Flink SQL and Kafka, highlighting the importance of organising, monitoring, and governing data products.
We will also cover the outbox pattern and the potential future flexibility offered by KIP-939 for Kafka, which would allow writing data directly into streams within transactions. Finally, we will demonstrate the key concepts discussed above in a Flink SQL demo.
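As a loose sketch of the data-product idea (not the speakers' actual demo), the snippet below uses Flink SQL via the Java Table API to publish a curated, contract-shaped stream derived from a raw CDC-style Kafka topic, so consumers see a stable schema rather than the source's internal columns. All table names, fields, and connector options are invented for illustration, and the Kafka connector and JSON format jars are assumed to be on the classpath:
```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class DataProductSketch {
    public static void main(String[] args) {
        TableEnvironment tEnv =
            TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // Raw CDC-style stream mirroring an internal table (fields invented).
        tEnv.executeSql(
            "CREATE TABLE orders_raw (" +
            "  order_id STRING, customer_id STRING, amount_cents BIGINT, internal_flags STRING" +
            ") WITH (" +
            "  'connector' = 'kafka'," +
            "  'topic' = 'db.public.orders'," +
            "  'properties.bootstrap.servers' = 'localhost:9092'," +
            "  'format' = 'json'" +
            ")");

        // The 'data product': a stable, contract-governed shape that hides internal columns.
        tEnv.executeSql(
            "CREATE TABLE orders_product (" +
            "  order_id STRING, customer_id STRING, amount_dollars DOUBLE" +
            ") WITH (" +
            "  'connector' = 'kafka'," +
            "  'topic' = 'products.orders.v1'," +
            "  'properties.bootstrap.servers' = 'localhost:9092'," +
            "  'format' = 'json'" +
            ")");

        // Continuously populate the product stream, decoupled from the raw schema.
        tEnv.executeSql(
            "INSERT INTO orders_product " +
            "SELECT order_id, customer_id, CAST(amount_cents AS DOUBLE) / 100.0 " +
            "FROM orders_raw");
    }
}
```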
***
DISCLAIMER
BY ATTENDING THIS EVENT IN PERSON, you acknowledge the risk of possible exposure to, and illness from, infectious diseases including COVID-19, and accept responsibility for this risk should it occur.
NOTE: We are unable to cater for any attendees under the age of 18.
***
If you would like to speak at or host our next event, please let us know: community@confluent.io
