Crypto Streams to AI Predictions: Apache Kafka®, Apache Flink® & Apache Iceberg®
Details
Join us for a hands-on workshop by Sandon Jacobs on Wednesday, November 12th, starting at 5:30pm, hosted by AMRoC and Cheetah Byte!
In this workshop, you'll harness the power of Confluent Cloud (the fully managed data streaming platform built on Apache Kafka®, Apache Flink®, and Apache Iceberg®) to build a live crypto-streaming pipeline that ingests, processes, and stores real-time data, and forecasts where prices might head next.
📍Venue:
AMRoC Fab Lab
2154 University Square Mall, Tampa, Florida, 33612
🗓 Agenda:
- 5:30pm – 6:15pm: Food, drinks, and networking
- 6:15pm – 6:30pm: Cheetah Byte and AMRoC introduction and prize raffle
- 6:30pm – 7:30pm: Workshop (part 1)
- 7:30pm – 7:40pm: Break with raffle
- 7:40pm – 8:30pm: Workshop (part 2)
- Close: Final raffle
📌 So that Sandon can gauge the audience's priorities, please fill out the pre-workshop form when you can (no personal info is collected).
💡 Speaker & Workshop Details:
Sandon Jacobs, Senior Developer Advocate, Confluent
From Crypto Streams to AI-Powered Predictions
Build Real-Time Intelligence with Confluent’s Data Streaming Platform, built on Apache Kafka®, Apache Flink®, and Apache Iceberg®.
Workshop Overview
In this 2-hour hands-on workshop, you'll build an end-to-end streaming analytics pipeline that captures live cryptocurrency prices, processes them in real time, and uses AI to forecast the future.
You'll start by ingesting a live feed of crypto prices (courtesy of the CoinGecko REST API) into Apache Kafka using Kafka Connect. Then you'll tame that chaos with Apache Flink's stream-processing superpowers. Next, you'll "freeze" those streams into queryable Apache Iceberg tables using Tableflow. Finally, you'll try to predict the future, using Flink's built-in AI capabilities to analyze historical patterns and forecast where prices might head next. No prior experience with Kafka, Flink, or Iceberg required! Just bring your curiosity and a laptop!
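To give you a feel for the Flink step, here's a minimal sketch of the kind of streaming query you'll write during the workshop. The table name crypto_prices and its columns (symbol, price, ts) are assumptions for illustration, not the workshop's actual schema:

```sql
-- Minimal sketch; table and column names are assumed for illustration.
-- Average price per coin over one-minute tumbling windows.
-- Assumes ts is the table's event-time attribute (i.e., it has a watermark).
SELECT
  symbol,
  window_start,
  window_end,
  AVG(price) AS avg_price
FROM TABLE(
  TUMBLE(TABLE crypto_prices, DESCRIPTOR(ts), INTERVAL '1' MINUTE)
)
GROUP BY symbol, window_start, window_end;
```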
What You'll Learn
- Set up and manage a Kafka cluster in Confluent Cloud
- Build and deploy Flink SQL jobs for real-time analytics
- Convert streams to query-ready Iceberg tables with Tableflow
- Run analytics in DuckDB
- Apply AI/ML forecasting directly inside your Flink pipeline
What You'll Build
- Stream live crypto data into Kafka using Kafka Connect
- Transform and enrich data in motion using Flink’s streaming queries
- “Freeze” those flowing insights into Iceberg tables via Tableflow
- Query and analyze it all in DuckDB (see the sketch after this list)
- Use Flink AI to forecast price trends
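Once Tableflow has materialized a topic as an Iceberg table, querying it from DuckDB looks roughly like the sketch below. This is an illustration under assumptions: the S3 path is a placeholder, credentials setup is omitted, and depending on how Tableflow exposes the table you may connect through an Iceberg catalog instead of scanning a path directly:

```sql
-- Minimal sketch; the table location is a placeholder, not a real path.
INSTALL iceberg;  -- DuckDB's Iceberg extension
INSTALL httpfs;   -- for reading from object storage
LOAD iceberg;
LOAD httpfs;
-- (S3 credential setup omitted here.)

SELECT symbol, AVG(price) AS avg_price
FROM iceberg_scan('s3://your-bucket/warehouse/crypto_prices')
GROUP BY symbol;
```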
Technical Prerequisites:
To make sure you can get hands-on during this workshop, please install the following on your system (and don't forget to bring your laptop!):
1. GitHub Account
- Zero Install (recommended): Use GitHub Codespaces or open the project in a Dev Container, with everything pre-installed!
2. Local Setup: Install the tools below on your machine (takes ~10 minutes)
- VSCode with Confluent Extension: For accessing Confluent Cloud resources.
- Confluent CLI: To interact with Kafka clusters and topics.
- DuckDB: For querying Tableflow Iceberg tables.
3. A correctly set-up Confluent Cloud account
***
This step is optional, as we'll walk through setting up Confluent Cloud during the event. If you want to get ahead, though, sign up as follows so that you don't have to enter your credit card:
Use the code 'CONFLUENTDEV1' when you reach the payment methods window after signing up for Confluent Cloud via this link.
***
DISCLAIMER
We don't cater to anyone under the age of 21.
If you are interested in providing a talk/hosting a future meetup, please email community@confluent.io
