4 Meetups and a Conference, session three: this time, it's all about Flink!


Details
Hello Streamers!
This summer, we’d like to cordially invite everyone to join us in the Bay Area for a series of four meetups leading up to Current.
Join us as we gather together on the Confluent Penthouse Patio in Mountain View for some Apache Kafka® (and sometimes Apache Flink®) fun! Space is limited.
- Our journey begins on Thursday, May 30th with a Kafka transaction talk from Justine Olshan of Confluent and a Kafka use case from Andy Han of Uber.
- In June, we will discuss the Kafka ecosystem with Sophie Blee-Goldman of Responsive, Greg Harris of Aiven and Afzal Mazhar of Confluent.
- Moving into July, we will have a Flink meetup with David Anderson of Confluent and Sharon Xie of Decodable.
- And lastly, our farewell to the summer series will be a Kafka and Analytics meetup with Elijah Meeks of Confluent.
A great summer lineup of learning and fun that you won’t want to miss!
Please join us for our third meetup in this exciting series, an IN-PERSON Apache Kafka® meetup on Thursday, July 25th, starting at 5:30pm.
Venue:
Confluent Rooftop Patio!
899 W Evelyn Ave
Mountain View, CA
5th floor (In the event of inclement weather, we will meet in the Cloud Cafe, 1st floor directly behind the front desk)
**Please note: All attendees will be required to sign an NDA upon arrival at the meetup.**
***
🗓 Agenda:
- 5:30pm: Doors Open
- 6:00pm-6:45pm: Sharon Xie, Decodable
- 6:45pm-7:30pm: David Anderson, Confluent
- 7:30pm-8:00pm: Raffle, additional Q&A, and networking
***
💡 Speaker for first talk:
Sharon Xie, Founding Engineer, Decodable
Title of Talk:
Timing is Everything: Understanding Event-Time Processing in Flink SQL
Abstract:
In the stream processing context, event-time processing means the events are processed based on when the events occurred, rather than when the events are observed (processing-time) in the system. Apache Flink has a powerful framework for event-time processing, which plays a pivotal role in ensuring temporal order and result accuracy.
In this talk, we will introduce Flink's event-time semantics and demonstrate how watermarks, the mechanism for handling late-arriving events, are generated and propagated, and how they trigger results in Flink SQL. We will explore operators such as windows and joins that are often used with event-time processing, and how different configurations can impact processing speed, cost, and correctness.
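To give a flavor of the kind of thing the talk covers, here is a minimal Flink SQL sketch of an event-time pipeline. The table, columns, and 5-second watermark delay are hypothetical illustrations, not taken from the talk itself:

```sql
-- Declare an event-time attribute with a bounded-out-of-orderness watermark:
-- events up to 5 seconds late are still assigned to the correct window.
CREATE TABLE transactions (
  txn_id   STRING,
  amount   DECIMAL(10, 2),
  txn_time TIMESTAMP(3),
  WATERMARK FOR txn_time AS txn_time - INTERVAL '5' SECOND
) WITH (
  -- connector options omitted
);

-- A tumbling window aggregation driven by event time: each one-minute
-- window emits its result once the watermark passes the window's end.
SELECT window_start, window_end, SUM(amount) AS total
FROM TABLE(
  TUMBLE(TABLE transactions, DESCRIPTOR(txn_time), INTERVAL '1' MINUTE))
GROUP BY window_start, window_end;
```

Tuning that watermark delay is exactly the speed/cost/correctness trade-off the abstract mentions: a longer delay tolerates later events but holds back results.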
Bio:
Sharon is a founding engineer at Decodable. Currently she leads product management and development. She has over six years of experience in building and operating streaming data platforms, with extensive expertise in Apache Kafka, Apache Flink, and Debezium. Before joining Decodable, she served as the technical lead for the real-time data platform at Splunk, where her focus was on the streaming query language and developer SDKs.
***
💡 Speaker for second talk:
David Anderson, Software Practice Lead, Confluent
Title of Talk:
Flinking enrichment: Shouldn't this be easier?
Abstract:
A common pattern in stream processing applications is to enrich incoming events with contextual information. For example, a fraud detection application might want to enrich incoming transactions with information about the customer, the merchant, previous transactions, etc., as part of building up a feature vector to hand off to a fraud detection model.

This talk is an introduction to Apache Flink from the perspective of using it for stream enrichment. If you think enrichment sounds like a join, you're right -- but Apache Flink offers a bewildering array of possibilities for joining streams. With the SQL/Table API, there are the usual left, right, inner, and outer joins that you may be familiar with from your favorite SQL database, to which Flink SQL adds temporal, interval, and lookup joins. Moreover, the DataStream API offers window and interval joins, plus KeyedCoProcessFunctions, BroadcastProcessFunctions, and AsyncFunctions, all of which can be used to implement custom joins.

In this talk we’ll bring structure to this landscape. You’ll learn the most useful techniques for implementing streaming enrichment with Flink, and see concrete examples of when different techniques are appropriate. Along the way you’ll learn something about Flink SQL, the DataStream API, and tricky corner cases with watermarks.
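As a small taste of one of the techniques the abstract lists, here is a sketch of a Flink SQL lookup join enriching a transaction stream against an external customer table. All table and column names are hypothetical, invented for illustration:

```sql
-- The stream needs a processing-time attribute to anchor the lookup.
CREATE TABLE transactions (
  txn_id      STRING,
  customer_id STRING,
  amount      DECIMAL(10, 2),
  proc_time   AS PROCTIME()
) WITH (
  -- connector options omitted
);

-- Lookup join: for each incoming transaction, fetch the customer row
-- as of the moment the event is processed (e.g. from a JDBC-backed table).
SELECT t.txn_id, t.amount, c.risk_tier
FROM transactions AS t
JOIN customers FOR SYSTEM_TIME AS OF t.proc_time AS c
  ON t.customer_id = c.customer_id;
```

Choosing between a lookup join like this and, say, a temporal join on a changelog, is precisely the kind of decision the talk promises to bring structure to.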
Bio:
David is an Apache Flink® committer and trainer who has helped dozens of companies learn how to work with Flink. Before discovering Flink in 2016, he worked as a data engineer, designing and building data pipelines for companies across Europe.
***
DISCLAIMER
BY ATTENDING THIS EVENT IN PERSON, you acknowledge the risk of possible exposure to, and illness from, infectious diseases including COVID-19, and accept responsibility for this risk.
NOTE: We are unable to cater for any attendees under the age of 21.
***