

What we’re about
New to Apache Kafka®? Start with these free resources: https://cnfl.io/learn-ak-mu
This is an open community - if you want to present, host, or contribute in other ways, follow this link (http://cnfl.io/get-involved-mu). First-time speakers welcome!
This meetup is for your fellow event streaming enthusiasts!
The topics discussed at our events are all about event streaming, including Confluent Platform, Confluent Cloud, Apache Kafka®, Kafka Connect, streaming data pipelines, ksqlDB, Kafka Streams, as well as stream processing, security, microservices, and a lot more!
Code of conduct: https://cnfl.io/code-of-conduct-welcome
Beyond this group, we also have the following resources to help you learn and develop your skills! See them here:
*The Meetup Hub*
Find recordings of previous meetups around the world and see upcoming dates for many more at the Meetup Hub
https://cnfl.io/meetup-hub-desc
*Ask The Community:*
- Forum:
This is a place for all the community to ask the tough questions, share knowledge, and win badges :D http://cnfl.io/forum-desc
- Slack:
Join tens of thousands of community members in this community cross-collaboration tool, exchanging thousands of messages every month:
cnfl.io/slack
*Confluent Community Catalysts*
Nominate the next Community Catalysts (MVPs) and find out more here:
*Confluent Training and Certification discounts!*
Learn Apache Kafka® and become Confluent Certified (with 20% off your certification exam with the code MU2021CERT): https://cnfl.io/train-cert
--
Also, here’s a gift: get $200 worth of free Confluent Cloud usage every month for your first 3 months (that could be $600 worth, without spending a single penny). (Ts & Cs apply) http://cnfl.io/mu-try-cloud
If you’re already a user, you can get an extra $60 on top with the code: CC60COMM
Head to http://cnfl.io/get-involved-mu if you have any questions, ideas, concerns or if you want to contribute in some way!
Apache Kafka®, Kafka, and associated open source project names are trademarks of the Apache Software Foundation. The Apache Software Foundation has no affiliation with, and does not endorse or review, the materials provided here or at any of our Meetups.
 
Upcoming events

Crypto Streams to AI Predictions: Apache Kafka®, Apache Flink® & Apache Iceberg®
AMRoC Fab Lab, 2149 University Square Mall, Tampa, FL, US
Join us for a hands-on workshop by Sandon Jacobs on Wednesday, November 12th, starting at 5:30pm, hosted by AMRoC and Cheetah Byte!
In this workshop, you’ll harness the power of Confluent Cloud - the fully managed data streaming platform built on Apache Kafka®, Apache Flink®, and Apache Iceberg® - to build a live crypto-streaming pipeline that ingests, processes, stores, and predicts real-time data.
📍Venue:
AMRoC Fab Lab
2154 University Square Mall, Tampa, Florida, 33612

🗓 Agenda:
- 5:30pm – 6:15pm: Food, drinks, and networking
- 6:15pm – 6:30pm: Cheetah Byte and AMRoC introduction and prize raffle
- 6:30pm – 7:30pm: Workshop (pt. 1)
- 7:30pm – 7:40pm: Break with raffle
- 7:40pm – 8:30pm: Workshop (pt. 2)
- Close: Final raffle
 
📌 So that Sandon has an idea of audience priorities, please fill in the pre-workshop form when you can (no personal info is collected).
💡 Speaker & Workshop Details:
Sandon Jacobs, Senior Developer Advocate, Confluent
From Crypto Streams to AI-Powered Predictions
Build Real-Time Intelligence with Confluent’s Data Streaming Platform, built on Apache Kafka®, Apache Flink®, and Apache Iceberg®.

Workshop Overview
In this 2-hour hands-on workshop, you'll build an end-to-end streaming analytics pipeline that captures live cryptocurrency prices, processes them in real time, and uses AI to forecast the future.

You'll start by ingesting a live feed of crypto data (courtesy of the CoinGecko REST API) into Apache Kafka® using Kafka Connect, then tame that chaos with Apache Flink's stream-processing superpowers. Next, you'll "freeze" those streams into queryable Apache Iceberg tables using Tableflow. Finally, you'll try to predict the future by using Flink's built-in AI capabilities to analyze historical patterns and forecast where prices might head next.

No prior experience with Kafka, Flink, or Iceberg required! Just bring your curiosity and a laptop!
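The workshop itself uses Kafka Connect for the ingest step, but as a rough sketch of the idea, here's what flattening a price snapshot into per-coin Kafka records might look like in Python. The sample payload and field names are hypothetical, loosely modeled on a CoinGecko-style price response, not the exact API contract:

```python
import json

# Hypothetical snapshot, loosely modeled on a CoinGecko-style
# /simple/price response (field names are illustrative only).
sample = '{"bitcoin": {"usd": 67123.5}, "ethereum": {"usd": 3412.9}}'

def to_kafka_records(payload: str, topic: str = "crypto-prices"):
    """Flatten a price snapshot into one record per coin, keyed by
    coin id - roughly the shape a source connector would produce."""
    prices = json.loads(payload)
    return [
        {"topic": topic, "key": coin, "value": {"coin": coin, "usd": quote["usd"]}}
        for coin, quote in prices.items()
    ]

for record in to_kafka_records(sample):
    print(record["key"], "->", record["value"]["usd"])
```

Keying each record by coin id means all updates for a given coin land in the same Kafka partition, preserving per-coin ordering for the downstream Flink jobs.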
What You'll Learn
- Set up and manage a Kafka cluster in Confluent Cloud
- Build and deploy Flink SQL jobs for real-time analytics
- Convert streams to query-ready Iceberg tables with Tableflow
- Run analytics in DuckDB
- Apply AI/ML forecasting directly inside your Flink pipeline
 
What You'll Build
- Stream live crypto data into Kafka using Kafka Connect
- Transform and enrich data in motion using Flink’s streaming queries
- “Freeze” those flowing insights into Iceberg tables via Tableflow
- Query and analyze it all in DuckDB
- Use Flink AI to forecast price trends
 
Technical Prerequisites:
To get hands-on during this workshop, please have the following installed on your system (and make sure you bring your laptop!).

1. GitHub Account
- Zero Install (Recommended): Use GitHub Codespaces or open in a Dev Container - everything pre-installed!
 
2. Local Setup: Install the tools below on your machine (takes ~10 minutes)
- VSCode with Confluent Extension: For accessing Confluent Cloud resources.
- Confluent CLI: To interact with Kafka clusters and topics.
- DuckDB: For querying Tableflow Iceberg tables.
 
3. Correctly setting up your Confluent Cloud account
This step is optional, as we will walk through setting up Confluent Cloud during the event. However, if you want to get ahead, sign up as follows so that you don't have to enter your credit card:
Use the code 'CONFLUENTDEV1' when you reach the payment methods window after signing up for Confluent Cloud via this link.
***
DISCLAIMER
We don't cater to anyone under the age of 21.
If you are interested in providing a talk or hosting a future meetup, please email community@confluent.io
Building Event-Driven Microservices with Spring Boot, Apache Kafka® and Kotlin
TEKsystems, 4890 W. Kennedy Blvd #740, Tampa, FL, US
IMPORTANT: Please RSVP at https://www.meetup.com/tampa-jug/events/311519483/?eventOrigin=group_upcoming_events
***
Abstract
So, I hear you’re developing Spring applications and microservices. Along comes event streaming with Apache Kafka® and you need to integrate. As fate would have it, Spring and Kafka are already pretty good friends. This means you can leverage your organization’s expertise in building, testing, deploying, and monitoring Spring applications, while also reaping the benefits of event-driven design.

But why bore ourselves with yet another Java microservice? Kotlin is a first-class citizen of the Spring framework. It’s proven itself as a popular language with constructs to simplify JVM-based development - and not just for cross-platform development, but for server-side implementations with frameworks like Spring, Ktor, and Micronaut, just to name a few.
In this session, I’ll walk you through writing a solution in Kotlin for producing and consuming Kafka events using Spring Kafka. We’ll highlight the Spring configuration involved in binding our application to a Kafka cluster in Confluent Cloud. We’ll use structured data - serialized with Apache Avro® - whose schemas are managed and governed by a schema registry.

When we’re done, you’ll be ready to explore this Spring-Kafka-Kotlin friendship for yourself.
***
If you want to speak at or host a meetup, please email community@confluent.io
Past events
