

What we’re about
New to Apache Kafka®? Start with these free resources: https://cnfl.io/learn-ak-mu
This is an open community - if you want to present, host, or contribute in other ways, follow this link (http://cnfl.io/get-involved-mu) - first-time speakers welcome!
This meetup is for your fellow event streaming enthusiasts!
The topics discussed at our events are all about event streaming, including Confluent Platform, Confluent Cloud, Apache Kafka®, Kafka Connect, streaming data pipelines, ksqlDB, and Kafka Streams, as well as stream processing, security, microservices, and a lot more!
Code of conduct: https://cnfl.io/code-of-conduct-welcome
Beyond this group, we also have the following resources to help you learn and develop your skills! See them here:
*The Meetup Hub*
Find recordings of previous meetups from around the world, and see upcoming dates for many more, at the Meetup Hub:
https://cnfl.io/meetup-hub-desc
*Ask The Community:*
- Forum:
This is a place for all the community to ask the tough questions, share knowledge and win badges :D http://cnfl.io/forum-desc
- Slack:
Join tens of thousands of community members in this community cross-collaboration tool, exchanging thousands of messages every month:
cnfl.io/slack
*Confluent Community Catalysts*
Nominate the next Community Catalysts (MVPs) and find out more here:
*Confluent Training and Certification discounts!*
Learn Apache Kafka® and become Confluent Certified (with 20% off your certification exam with the code MU2021CERT): https://cnfl.io/train-cert
--
Also, here’s a gift: get $200 worth of free Confluent Cloud usage every month for your first 3 months - that could be $600 worth without spending a single penny (Ts & Cs apply): http://cnfl.io/mu-try-cloud
If you’re already a user, you can get an extra $60 on top with the code: CC60COMM
Head to http://cnfl.io/get-involved-mu if you have any questions, ideas, concerns or if you want to contribute in some way!
Apache Kafka®, Kafka, and associated open source project names are trademarks of the Apache Software Foundation. The Apache Software Foundation has no affiliation with, and does not endorse or review, the materials provided here or at any of our meetups.
Upcoming events (2)

Where Human Expertise Meets Structured Automation
Marionete, Av. Casal Ribeiro 63, Lisboa, PT
Join us for an Apache Kafka® meetup on Wednesday, January 22nd from 6:00pm in Lisbon, hosted by Marionete!
📍Venue:
Marionete
Praça Duque de Saldanha, Atrium, 9D
You can come into the offices through one of the two side streets:
Av. Casal Ribeiro 63, Atrium, Lisboa, Portugal
Avenida Fontes Pereira de Melo, Atrium, Lisboa, Portugal
If you cannot attend, please change your RSVP so someone else can join! Thank you!
🗓️ Agenda:
- 6:00pm: Doors open
- 6:00pm - 6:30pm: Food, Drinks & Networking
- 6:30pm - 7:15pm: Eduardo Ramos and Lucy Woods, Marionete
- 7:15pm - 8:00pm: Additional Q&A & Networking
💡Speakers:
Eduardo Ramos, Junior Software Engineer, Marionete
Lucy Woods, Junior Software Engineer, Marionete
Title of Talk:
Where Human Expertise Meets Structured Automation
Abstract:
Organizations increasingly rely on expert judgment to process large volumes of complex, high-risk data for tasks such as phishing investigations and fraud reviews. These workflows are often slow, inconsistent, and costly due to their sequential, manual nature and reliance on specialist availability.
We present an alternative approach that uses an event-driven system to break complex expert analysis into smaller, AI-assisted steps executed in a clear, structured sequence. This allows routine parts of the process to be automated while ensuring that human experts remain in control of the final review and decision-making. A proof of concept developed to meet client specifications demonstrates how state-machine orchestration improves consistency, reduces processing time, and standardizes outputs without removing human oversight.
We also discuss the practical challenges encountered during the proof of concept, along with the architectural and operational solutions adopted. Finally, we show how the same human-in-the-loop, state-driven architecture can be adapted to other resilience-critical domains such as cancer screening, financial market analysis, and job-posting fraud detection.
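As a rough illustration of the pattern the abstract describes - automated steps in a fixed sequence, with the human expert retaining the final decision - here is a minimal state-machine sketch in Python. The state names and the `Case` class are hypothetical, invented for this example, not the speakers' actual implementation:

```python
from enum import Enum, auto

class State(Enum):
    RECEIVED = auto()
    AI_TRIAGE = auto()      # AI-assisted classification
    AI_ENRICHED = auto()    # AI-assisted evidence gathering
    HUMAN_REVIEW = auto()   # the expert keeps the final decision
    CLOSED = auto()

# Allowed transitions: the automated steps run in a structured sequence,
# and every case must pass through HUMAN_REVIEW before it can be CLOSED.
TRANSITIONS = {
    State.RECEIVED: {State.AI_TRIAGE},
    State.AI_TRIAGE: {State.AI_ENRICHED},
    State.AI_ENRICHED: {State.HUMAN_REVIEW},
    State.HUMAN_REVIEW: {State.CLOSED},
    State.CLOSED: set(),
}

class Case:
    """One investigation (e.g. a phishing report) moving through the machine."""
    def __init__(self, case_id: str):
        self.case_id = case_id
        self.state = State.RECEIVED
        self.history = [self.state]

    def advance(self, target: State) -> None:
        if target not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {target}")
        self.state = target
        self.history.append(target)

case = Case("phish-001")
for step in (State.AI_TRIAGE, State.AI_ENRICHED, State.HUMAN_REVIEW, State.CLOSED):
    case.advance(step)
print(case.state.name)  # CLOSED
```

In an event-driven deployment, each transition would typically be triggered by an event on a topic rather than a local loop; the point here is only that the transition table makes skipping the human review step structurally impossible.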
Bios:
Eduardo Ramos is a Junior Software Engineer with a Master’s degree in Informatics and Computer Engineering from FEUP (Faculdade de Engenharia da Universidade do Porto). His work focuses on AI-driven systems, cybersecurity, and distributed architectures, with hands-on experience in automating expert workflows, phishing detection, and secure software design. Eduardo has worked across research and applied engineering roles, and has contributed to multiple AI projects, including LLM-assisted security workflows and AI models for automated content translation, summarization, and domain-specific prediction tasks.
Lucy Woods is a Junior Software Engineer at Marionete, specialising in test-based coding approaches. She modernised a critical 50-year-old test suite before moving on to begin her career as a software developer. In her day-to-day work, she uses Kafka/Confluent and Spring Boot.
***
DISCLAIMER
NOTE: We are unable to cater for any attendees under the age of 18.
If you would like to speak at or host our next event, please let us know: community@confluent.io
Crypto Streams to AI Predictions: Apache Kafka®, Apache Flink® & Apache Iceberg®
Marionete, Av. Casal Ribeiro 63, Lisboa, PT
Join us for a hands-on workshop by Olena Kutsenko on Monday, February 16th from 6:00pm, hosted by Marionete!
In this workshop, you’ll harness the power of Confluent Cloud - the fully managed data streaming platform built on Apache Kafka®, Apache Flink®, and Apache Iceberg® - to build a live crypto-streaming pipeline that ingests, processes, stores, and predicts real-time data.
📍Venue:
Marionete
Praça Duque de Saldanha, Atrium, 9D
You can come into the offices through one of the two side streets:
Av. Casal Ribeiro 63, Atrium, Lisboa, Portugal
Avenida Fontes Pereira de Melo, Atrium, Lisboa, Portugal
If you cannot attend, please change your RSVP so someone else can join! Thank you!
🗓 Agenda:
- 6:00pm – 6:30pm: Welcome, Food/Drinks & Networking
- 6:30pm - 8:30pm: Workshop by Olena
📌 So that Olena has an idea of audience priorities, please fill in the pre-workshop form when you can (no personal info is collected).
💡 Speaker & Workshop Details:
Olena Kutsenko, Staff Developer Advocate, Confluent
From Crypto Streams to AI-Powered Predictions
Build Real-Time Intelligence with Confluent’s Data Streaming Platform, built on Apache Kafka®, Apache Flink®, and Apache Iceberg®.
Workshop Overview
In this 2-hour hands-on workshop, you'll build an end-to-end streaming analytics pipeline that captures live cryptocurrency prices, processes them in real-time, and uses AI to forecast the future.
You will start by ingesting a live feed of crypto data (courtesy of the CoinGecko REST API) into Apache Kafka® using Kafka Connect, then tame that chaos with Apache Flink's stream-processing superpowers. Next, we'll "freeze" those streams into queryable Apache Iceberg tables using Tableflow. Finally, we'll try to predict the future by using Flink's built-in AI capabilities to analyze historical patterns and forecast where prices might head next. No prior experience with Kafka, Flink, or Iceberg is required - just bring your curiosity and a laptop!
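To give a feel for the "tame that chaos" step before the workshop: a typical first Flink job buckets the raw price events into fixed time windows and aggregates each bucket. The plain-Python sketch below mimics a tumbling-window average over made-up price events (the numbers and the 60-second window are illustrative only; in the workshop this is done declaratively in Flink SQL on Confluent Cloud, not in Python):

```python
from collections import defaultdict

# Simulated (timestamp_seconds, price) events, standing in for what
# Kafka Connect would deliver from a live crypto feed (made-up values).
events = [(0, 100.0), (12, 101.0), (35, 99.0), (61, 102.0), (75, 104.0)]

WINDOW = 60  # tumbling window size, in seconds

def tumbling_avg(events, window):
    """Average price per tumbling window: each event belongs to exactly
    one fixed, non-overlapping window of `window` seconds."""
    buckets = defaultdict(list)
    for ts, price in events:
        buckets[ts // window].append(price)
    # Key each result by the window's start time.
    return {w * window: sum(p) / len(p) for w, p in sorted(buckets.items())}

print(tumbling_avg(events, WINDOW))
# window starting at 0s:  avg of 100, 101, 99 -> 100.0
# window starting at 60s: avg of 102, 104    -> 103.0
```

The same idea in the workshop's setting is one `GROUP BY` over a window function in Flink SQL, running continuously as new events arrive.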
What You'll Learn
- How to set up and manage a Kafka cluster in Confluent Cloud
- Build and deploy Flink SQL jobs for real-time analytics
- Convert streams to query-ready Iceberg tables with Tableflow
- Run analytics in DuckDB
- Apply AI/ML forecasting directly inside your Flink pipeline
What You'll Build
- Stream live crypto data into Kafka using Kafka Connect
- Transform and enrich data in motion using Flink’s streaming queries
- “Freeze” those flowing insights into Iceberg tables via Tableflow
- Query and analyze it all in DuckDB
- Use Flink AI to forecast price trends
Technical Prerequisites:
To make sure you can get hands-on during this workshop, please have the following installed on your system (and don't forget to bring your laptop!):
1. GitHub Account
- Zero Install (Recommended): Use GitHub Codespaces or open in Dev Container - everything pre-installed!
2. Local Setup: Install the tools below on your machine (takes ~10 minutes)
- VSCode with Confluent Extension: For accessing Confluent Cloud resources.
- Confluent CLI: To interact with Kafka clusters and topics.
- DuckDB: For querying Tableflow Iceberg tables.
3. Correctly setting up your Confluent Cloud account
This step is optional, as we will go through how to set up Confluent Cloud during the event. However, if you want to get ahead, sign up as follows so that you don't have to enter your credit card details:
Use the code 'CONFLUENTDEV1' when you reach the payment methods window after signing up for Confluent Cloud via this link.
More info about the workshop, including a detailed agenda, can be found here.
***
DISCLAIMER
We don't cater to attendees under the age of 18.
If you want to host or speak at a meetup, please email community@confluent.io
Past events (19)

