Real-time flight data streaming, anomalies, predictions, and visualizations
Details
Join us for a hands-on workshop by Sandon Jacobs on Monday, May 25th from 6:00pm, hosted by Improving!
📍Venue:
Improving Office
171 East Liberty St
Unit 235
Toronto, Ontario M6K 3P6
Directions (171 E Liberty St - Suite 235)
By transit
Streetcars 504 and 509 both stop close to the office (less than a 10-minute walk from either), and the Lakeshore GO train is also a 5-minute walk away.
By car/parking
On-street parking is available: there are a handful of paid parking spots directly in front of the entrance, plus a large city parking lot across the street.
Entrance
The entrance to the office is beside the Bulk Barn entrance facing Hannah Street. There is an Improving logo on the door.
🗓 Agenda:
- 6:00pm – 6:30pm: Welcome, Food/Drinks & Networking
- 6:30pm – 8:30pm: Workshop by Sandon Jacobs
📌 So that Sandon has an idea of audience priorities, please fill in the pre-workshop form when you can (no personal info is collected)
💡 Speaker & Workshop Details:
Sandon Jacobs, Senior Developer Advocate, Confluent
Hands-on workshop: real-time flight data streaming, anomalies, predictions, and visualizations
Workshop Overview
In this hands-on session, we'll build a real-time analytics pipeline using flight data and modern streaming tools — step by step, and without hand-waving.
We'll start by streaming data into Apache Kafka using Confluent Cloud, process it with Apache Flink, and land it in Apache Iceberg using Tableflow. From there, we'll query the data with Trino and turn it into dashboards using Superset. Everything runs in a practical setup you can reproduce later, including Docker and cloud-managed services.
Along the way, we'll also look at anomaly detection and simple predictions using built-in functions, so you can see how real-time insights can be added without building custom ML pipelines or complex models.
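To give a feel for the kind of anomaly detection involved, here is a minimal sketch in plain Python (the data values and z-score approach are illustrative only; in the workshop the equivalent logic runs as built-in functions inside Apache Flink, not as Python code):

```python
# Illustrative sketch only: flag readings that deviate sharply from the mean.
# The altitude values below are made up; at the workshop this kind of check
# runs on live flight data using Flink's built-in functions.
from statistics import mean, stdev

def find_anomalies(readings, threshold=2.0):
    """Return readings more than `threshold` standard deviations from the mean."""
    mu = mean(readings)
    sigma = stdev(readings)
    return [r for r in readings if abs(r - mu) / sigma > threshold]

# Altitudes in feet; 2000 stands out among cruise-level values.
altitudes = [35000, 35200, 34900, 35100, 2000, 35050, 34800]
print(find_anomalies(altitudes))  # -> [2000]
```

The same idea (compare each event to a rolling statistic and flag outliers) is what lets you add real-time insights without a custom ML pipeline.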
This session is all about doing, not slides. You'll see how the pieces actually fit together, what decisions matter in practice (regions, credentials, formats), and how to go from streaming data to analytics in a way that scales.
If you're curious about real-time analytics, Kafka + Flink in the cloud, or how streaming, analytics, and lightweight predictive use cases come together in the same pipeline, this workshop will give you a clear, working mental model — and code you can take home.
No prior experience with Kafka, Flink, or Iceberg required! Just bring your curiosity and a laptop!
Technical Prerequisites:
To get hands-on during this workshop, please make sure the following are installed on your system (and make sure you bring your laptop!)
1. GitHub Account
- Zero Install (Recommended): Use GitHub Codespaces or open in Dev Container - everything pre-installed!
2. Local Setup: Install the tools below on your machine (takes ~10 minutes)
- VSCode with Confluent Extension: For accessing Confluent Cloud resources.
- Confluent CLI: To interact with Kafka clusters and topics.
- DuckDB: For querying Tableflow Iceberg tables.
3. Confluent Cloud Account Setup
This step is optional, as we will walk through setting up Confluent Cloud during the event. However, if you want to get ahead, sign up as follows so that you don't have to enter your credit card:
Use the code 'CONFLUENTDEV1' when you reach the payment methods window after signing up for Confluent Cloud via this link.
More info about the workshop, including a detailed agenda, can be found here.
***
DISCLAIMER
We don't cater to attendees under the age of 18.
If you want to host or speak at a meetup, please email community@confluent.io

