

About us
New to Apache Kafka®? Start with these free resources: https://cnfl.io/learn-ak-mu
This is an open community. If you want to present, host, or contribute in other ways, follow this link (http://cnfl.io/get-involved-mu). First-time speakers welcome!
This meetup is for your fellow event streaming enthusiasts!
The topics discussed at our events are all about event streaming, including Confluent Platform, Confluent Cloud, Apache Kafka®, Kafka Connect, streaming data pipelines, ksqlDB, and Kafka Streams, as well as stream processing, security, microservices, and a lot more!
Code of conduct: https://cnfl.io/code-of-conduct-welcome
Beyond this group, we also have the following resources to help you learn and develop your skills! See them here:
*The Meetup Hub*
Find recordings of previous meetups around the world and see upcoming dates for many more at the Meetup Hub
https://cnfl.io/meetup-hub-desc
*Ask The Community:*
- Forum:
This is a place for the whole community to ask the tough questions, share knowledge, and win badges :D http://cnfl.io/forum-desc
- Slack:
Join tens of thousands of community members in this cross-collaboration space, exchanging thousands of messages every month:
cnfl.io/slack
*Confluent Community Catalysts*
Nominate the next Community Catalysts (MVPs) and find out more here:
*Confluent Training and Certification discounts!*
Learn Apache Kafka® and become Confluent Certified (with 20% off your certification exam with the code MU2021CERT): https://cnfl.io/train-cert
--
Also, here’s a gift: get $200 worth of free Confluent Cloud usage every month for your first 3 months (that’s up to $600 worth, without spending a single penny; Ts & Cs apply): http://cnfl.io/mu-try-cloud
If you’re already a user, you can get an extra $60 on top with the code: CC60COMM
Head to http://cnfl.io/get-involved-mu if you have any questions, ideas, concerns or if you want to contribute in some way!
Apache Kafka®, Kafka, and associated open source project names are trademarks of the Apache Software Foundation. The Apache Software Foundation has no affiliation with, and does not endorse or review, the materials provided here or at any of our Meetups.
Upcoming events (1)

Real-time flight data streaming, anomalies, predictions, and visualizations
Improving (formerly YoppWorks), 171 E Liberty St #235, Toronto, ON M6K 3P6, CA
Join us for a hands-on workshop by Sandon Jacobs on Monday, May 25th from 6:00pm, hosted by Improving!
📍Venue:
Improving Office
171 East Liberty St
Unit 235
Toronto, Ontario M6K 3P6
Directions (171 E Liberty St - Suite 235)
By transit
Streetcars 504 and 509 both travel close to the office (less than a 10-minute walk from either stop), and the Lakeshore GO train is also a 5-minute walk from the office.
By car/parking
On-street parking is available; there are a handful of paid parking spots directly in front of the entrance, and a large city parking lot across the street.
Entrance
The entrance to the office is beside the Bulk Barn entrance facing Hannah Street. There is an Improving logo on the door.
🗓 Agenda:
- 6:00pm – 6:30pm: Welcome, Food/Drinks & Networking
- 6:30pm – 8:30pm: Workshop by Sandon Jacobs
📌 So that Sandon has an idea of audience priorities, please fill in the pre-workshop form when you can (no personal info is collected).
💡 Speaker & Workshop Details:
Sandon Jacobs, Senior Developer Advocate, Confluent
Hands-on workshop: real-time flight data streaming, anomalies, predictions, and visualizations
Workshop Overview
In this hands-on session, we'll build a real-time analytics pipeline using flight data and modern streaming tools — step by step, and without hand-waving.
We'll start by streaming data into Apache Kafka using Confluent Cloud, process it with Apache Flink, and land it in Apache Iceberg using Tableflow. From there, we'll query the data with Trino and turn it into dashboards using Superset. Everything runs in a practical setup you can reproduce later, including Docker and cloud-managed services.
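To fix the idea of that first step (streaming flight events into Kafka), here is a minimal, self-contained sketch. The event schema is made up for illustration — `flight_id`, `lat`, `lon`, `altitude_ft`, and `speed_kts` are hypothetical field names, not the workshop's actual schema — and the comment shows where a real Kafka producer (e.g. the `confluent-kafka` client) would publish the payload.

```python
import json
import random
import time

def mock_flight_event(flight_id: str) -> dict:
    """Generate one mock flight position report (hypothetical schema)."""
    return {
        "flight_id": flight_id,
        "ts": int(time.time() * 1000),            # event time, epoch millis
        "lat": round(random.uniform(43.0, 44.0), 5),
        "lon": round(random.uniform(-80.0, -79.0), 5),
        "altitude_ft": random.randint(30000, 38000),
        "speed_kts": random.randint(400, 520),
    }

# Serialize to JSON bytes, as you would before handing the record to a
# producer, e.g.: confluent_kafka.Producer(conf).produce(topic, value=payload)
payload = json.dumps(mock_flight_event("AC101")).encode("utf-8")
```

In the workshop itself the producer side is managed through Confluent Cloud, so this stands in only for the shape of the data flowing into the pipeline.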
Along the way, we'll also look at anomaly detection and simple predictions using built-in functions, so you can see how real-time insights can be added without building custom ML pipelines or complex models.
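The workshop will lean on Flink's built-in functions for this; purely to illustrate what "anomaly detection on a stream" means, here is a hand-rolled rolling z-score check in plain Python. The window size of 20 and threshold of 3 standard deviations are illustrative choices, not the workshop's parameters.

```python
from collections import deque

def is_anomaly(value: float, window: deque, threshold: float = 3.0) -> bool:
    """Flag value as anomalous if it is more than `threshold` standard
    deviations from the mean of the recent window (population std)."""
    if len(window) < 2:
        return False
    mean = sum(window) / len(window)
    var = sum((x - mean) ** 2 for x in window) / len(window)
    std = var ** 0.5
    return std > 0 and abs(value - mean) > threshold * std

# Feed a stream of speed readings; keep the last 20 as the window.
window = deque(maxlen=20)
readings = [450, 452, 449, 451, 450, 448, 452, 451, 900]  # last one is a spike
flags = []
for r in readings:
    flags.append(is_anomaly(r, window))
    window.append(r)
# flags: the steady readings pass, the 900-knot spike is flagged
```

A streaming engine does the same thing continuously over windows of events, which is what the built-in functions give you without custom code.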
This session is all about doing, not slides. You'll see how the pieces actually fit together, what decisions matter in practice (regions, credentials, formats), and how to go from streaming data to analytics in a way that scales.
If you're curious about real-time analytics, Kafka + Flink in the cloud, or how streaming, analytics, and lightweight predictive use cases come together in the same pipeline, this workshop will give you a clear, working mental model, and code you can take home.
No prior experience with Kafka, Flink, or Iceberg required! Just bring your curiosity and a laptop!
Technical Prerequisites:
To make sure you can get hands-on during this workshop, please make sure the following are installed on your system (and make sure you bring your laptop!):
1. GitHub Account
- Zero Install (Recommended): Use GitHub Codespaces or open in Dev Container - everything pre-installed!
2. Local Setup: Install the tools below on your machine (takes ~10 minutes)
- VSCode with Confluent Extension: For accessing Confluent Cloud resources.
- Confluent CLI: To interact with Kafka clusters and topics.
- Install DuckDB: For querying Tableflow Iceberg tables.
3. Correctly setting up your Confluent Cloud account
This step is optional, as we will go through how to set up Confluent Cloud during the event. However, if you want to get ahead, sign up as follows so that you don't have to enter your credit card:
Use the code 'CONFLUENTDEV1' when you reach the payment methods window after signing up for Confluent Cloud via this link.
More info about the workshop, including a detailed agenda, can be found here.
***
DISCLAIMER
We don't cater to attendees under the age of 18.
If you want to host or speak at a meetup, please email community@confluent.io
Past events (47)
