VIRTUAL! How to Build a Data Streaming Platform


Details
Hello everyone! Join us for a VIRTUAL Apache Kafka® x Apache Flink® meetup on December 10th, starting at 6:00 pm!
Agenda:
- 18:00-18:05: Introduction & Online Networking
- 18:05-18:50: Diptiman Raichaudhuri, Staff Developer Advocate, Confluent
- 18:50: Q&A
***
Speaker:
Diptiman Raichaudhuri, Staff Developer Advocate, Confluent
Talk:
How to build a data streaming platform - Introduction to Stream Processing, Stream Governance with Kafka and Flink
Abstract:
This session breaks down the components of a data streaming platform and explores Kafka, Flink, and Schema Registry. It will also touch on building a data product for democratizing real-time operational insights.
The talk will also cover how Flink is taking center stage for stream processing use cases and how PyFlink is making Flink popular among data streaming engineers.
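To give a flavor of the stream governance topic, registering a schema with Schema Registry typically starts from an Avro definition like the minimal sketch below (a hypothetical `Order` record, not taken from the talk):

```
{
  "type": "record",
  "name": "Order",
  "namespace": "com.example.streaming",
  "fields": [
    {"name": "order_id", "type": "string"},
    {"name": "amount", "type": "double"},
    {"name": "created_at", "type": {"type": "long", "logicalType": "timestamp-millis"}}
  ]
}
```

Producers and consumers that agree on such a schema can evolve it safely under Schema Registry's compatibility checks.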
Bio:
Staff Developer Advocate at Confluent, ex-AWS Data Analytics Specialist, ex-Google Cloud Data Platform Specialist. Designed and implemented 'Modern Data Platform' solutions for large-scale enterprise use cases. Worked at the intersection of Data (Spark, Flink, Kafka, Kinesis, Redshift, Iceberg, Glue, Hive, Neo4j, Neptune) and AI (PyTorch, SageMaker, Vertex AI, Kubeflow, LLMs) at cloud scale (AWS and Google Cloud).
---
Online Meetup Etiquette:
• Please hold your questions until the end of the presentation, or use the Zoom chat during the talk.
• Please arrive on time, as Zoom meetings can become locked for various reasons. If you do get locked out, a recording will be available, though you may have to wait a little while for it.
Important note: If Zoom asks for a password to join, please use 'kafka'.
----
If you would like to speak at or host our next event, please let us know: community@confluent.io
