IN PERSON! Apache Kafka® x Apache Flink® Meetup (October 2024)


Details
Hello everyone! Join us for an Apache Kafka® x Apache Flink® meetup on October 2nd from 5:30pm, hosted by Ippon in Melbourne!
The address, agenda, and speaker information can be found below. See you there!
📍Venue:
Ippon Australia Collab Hub
Level 8, 607 Bourke Street
Melbourne VIC 3000
IMPORTANT:
Please note that for security purposes, all attendees are required to sign in at the lobby to get into the building.
***
🗓 Agenda:
- 5:30pm: Doors open
- 5:30pm - 6:00pm: Pizza, Drinks, and Networking
- 6:00pm - 6:30pm: Paras Sitoula, Technical Lead, Tabcorp
- 6:30pm - 7:00pm: Adi Polak, Director of Advocacy and Developer Experience Engineering, Confluent
- 7:00pm - 7:30pm: Additional Q&A & Networking
***
💡 Speaker:
Paras Sitoula, Technical Lead, Tabcorp
Bio:
Paras is a seasoned software/data engineer with significant experience developing and optimizing scalable real-time systems. At Tabcorp, Paras leads a high-performing team, developing and optimizing real-time transaction monitoring systems using technologies like Apache Kafka and Neo4j. With a robust background in both real-time and batch data processing, Paras has extensive experience across various platforms and tools, including AWS, Databricks, and Spark, which he leverages to deliver high-performance data solutions.
Talk:
Using DLQ in Kafka
Abstract:
This talk explores how implementing the Dead Letter Queue (DLQ) pattern in Apache Kafka significantly enhances the resilience and dependability of data pipelines. We will delve into the critical role DLQs play in isolating and managing problematic messages, thereby preventing processing halts and maintaining system stability. Attendees will gain insights into the operational flexibility, improved error handling, and streamlined debugging that DLQs offer.
Furthermore, the session includes a live demo showcasing practical approaches to implementing DLQs in Kafka. We will demonstrate how to configure DLQs, handle message retries, and monitor DLQ activity to proactively address issues.
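The core of the DLQ pattern described above is simple: retry a failing message a bounded number of times, then publish it (with error context) to a separate topic instead of blocking the pipeline. A production setup would use a real Kafka client such as confluent-kafka; the broker-free Python sketch below simulates just the routing logic, and names like `consume_with_dlq` and the `max_retries` policy are illustrative, not part of the talk.

```python
import json

def process(record):
    """Toy handler: fails on records missing the 'amount' field."""
    payload = json.loads(record)
    if "amount" not in payload:
        raise ValueError("missing 'amount' field")
    return payload["amount"] * 2

def consume_with_dlq(records, max_retries=3):
    """Process records, retrying failures; records that keep failing
    are routed to a dead letter queue instead of halting the pipeline."""
    results, dlq = [], []
    for record in records:
        for attempt in range(1, max_retries + 1):
            try:
                results.append(process(record))
                break
            except ValueError as err:
                if attempt == max_retries:
                    # Enrich the DLQ entry with error context to aid debugging.
                    dlq.append({"original": record,
                                "error": str(err),
                                "attempts": attempt})
    return results, dlq

ok, dead = consume_with_dlq(['{"amount": 5}', '{"id": 1}'])
# ok == [10]; the malformed record lands in the DLQ with its error context
```

With a real broker, the `dlq` list would instead be a `producer.produce("orders.dlq", ...)` call, and the error context would typically travel in Kafka message headers.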
---
💡 Speaker:
Adi Polak, Director of Advocacy and Developer Experience Engineering, Confluent
Bio:
Adi is an experienced software engineer and people manager. For most of her professional life, she has worked on data and machine learning for transactional and analytics workloads by building large-scale systems. As a data practitioner, she developed software to solve real-world problems with Apache Spark, Kafka, HDFS, K8s, AWS, and Azure in high-throughput, high-scale production environments for companies like Akamai and Microsoft. Adi has taught Spark to thousands of students over the years and is the author of the successful book Scaling Machine Learning with Spark. When not thinking up new architectures, teaching new tech, or pondering a distributed systems challenge, you can find her in the local cultural scene.
Talk:
Data-centric AI with Flink SQL
Abstract:
Generative AI is revolutionizing the technology stack, introducing a new era of innovation and automation. Apache Flink is keeping up with this revolution through FLIP-437, which introduces AI models as first-class citizens in Flink SQL, on par with Tables, seamlessly integrating them into the existing resource hierarchy (Catalog->Database->Model). This groundbreaking advancement paves the way for unified data processing and model inference using the familiar SQL interface and positions Flink as a central piece of modern AI architectures.
In this 30-minute session, tailored for developers with experience in stream processing, we’ll start by exploring the rationale behind Flink’s decision to treat AI models as first-class entities within Flink SQL. We’ll then delve into the newly introduced ML_PREDICT and ML_EVALUATE functions, which enable a generic way for model inference and evaluation using Flink SQL. Finally, we’ll demonstrate how to leverage the powerful combination of Flink and Kafka to create an anomaly detection app that harnesses real-time data streams.
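To give a flavor of what the session covers: under FLIP-437 a model is registered as a catalog object and then invoked via `ML_PREDICT` over a (possibly Kafka-backed) table. The sketch below follows the syntax proposed in FLIP-437; the exact DDL, provider options, and table/model/column names here are illustrative and may differ across Flink versions and vendors.

```sql
-- Register a remote model as a first-class catalog object
-- (provider options are illustrative placeholders).
CREATE MODEL anomaly_scorer
  INPUT (txn_amount DOUBLE, txn_count BIGINT)
  OUTPUT (score DOUBLE)
  WITH ('provider' = 'some-provider', 'task' = 'regression');

-- Run inference over a streaming table with ML_PREDICT.
SELECT account_id, score
FROM ML_PREDICT(TABLE transactions,
                MODEL anomaly_scorer,
                DESCRIPTOR(txn_amount, txn_count));
```

The key point is that inference becomes an ordinary table function call, so it composes with the rest of a Flink SQL pipeline (joins, windows, sinks) with no separate serving layer.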
By the end of the session, attendees will understand how they can write an end-to-end data app with seamless AI inference integration using just Flink SQL.
***
DISCLAIMER
BY ATTENDING THIS EVENT IN PERSON, you acknowledge that risk includes possible exposure to and illness from infectious diseases including COVID-19, and accept responsibility for this, if it occurs.
NOTE: We are unable to cater for any attendees under the age of 18.
***
If you would like to speak or host our next event please let us know! community@confluent.io
