
What we’re about
This group is for people interested in Apache Kafka, stream processing, data platforms, and the surrounding ecosystem. If you are interested in presenting at a future event, please fill out this form: https://forms.gle/KmtmZZYn1TVJMNAk9
Upcoming events (1)
IN PERSON! Apache Kafka® Meetup Bangalore – Aug 2025
Nutanix Technologies India Pvt Ltd, Bengaluru
Hello everyone! Join us for an IN PERSON Apache Kafka® meetup on Aug 23rd from 11:00 AM, hosted by Nutanix in Bangalore!
📍 Venue:
Nutanix Technologies India Pvt Ltd
9th, MERCURY BLOCK, PRESTIGE TECH PARK, Marathahalli - Sarjapur Outer Ring Rd, Marathahalli, Kadubeesanahalli, Bengaluru, Karnataka 560103

IMPORTANT:
Please make sure to fill out this form for security purposes to enter the venue.
***
Agenda:
- 11:00 - 11:10: Welcome
- 11:10 - 11:35: Uttam Agarwal, Engineering Leader, Nutanix & Bhupesh Soni, MTS 3, Nutanix
- 11:35 - 12:15: Parth Agarwal, Senior Software Engineer, Confluent
- 12:15 - 12:30: Break
- 12:30 - 13:10: TBC
- 13:10 - 14:10: Lunch
***
💡 Speaker:
Uttam Agarwal, Engineering Leader, Nutanix & Bhupesh Soni, MTS 3, Nutanix
Talk:
Kafka Resequencer – Reordering the Chaos
Abstract:
In the world of distributed systems, events rarely arrive in perfect order. Network delays, retries, and system hiccups can cause out-of-order events, leading to inaccurate downstream processing.
To address this, we built a Kafka-based resequencing mechanism using the Kafka Streams Processor API. This robust solution:
- Uses RocksDB as a persistent state store
- Leverages Kafka Streams' built-in scheduler for time-based buffering
- Applies versioning logic to reorder events correctly
- Ensures eventual consistency with low latency
- Supports high throughput and seamless scalability
Whether you're building real-time analytics, transaction pipelines, or event-driven applications, this architecture provides a resilient and efficient path to handle out-of-order data.
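The buffer-and-reorder idea at the heart of the talk can be sketched in plain Python (illustrative only: the class and method names here are assumptions, and the actual implementation described above uses the Kafka Streams Processor API with a RocksDB state store and the built-in scheduler, not this code):

```python
import heapq

class Resequencer:
    """Buffers out-of-order events and releases them in version order.

    A minimal, Kafka-free sketch of time-buffered resequencing: events
    with a gap in their version sequence are held back until the gap
    is filled, giving eventual consistency on the output side.
    """

    def __init__(self):
        self._buffer = []        # min-heap ordered by event version
        self._next_version = 1   # next version we expect to emit

    def accept(self, version, payload):
        """Buffer an incoming event, then emit any now-contiguous run."""
        heapq.heappush(self._buffer, (version, payload))
        emitted = []
        # Release events while the lowest buffered version is exactly
        # the one we are waiting for; anything after a gap stays buffered.
        while self._buffer and self._buffer[0][0] == self._next_version:
            emitted.append(heapq.heappop(self._buffer)[1])
            self._next_version += 1
        return emitted

r = Resequencer()
print(r.accept(2, "b"))  # [] - version 1 not seen yet, so "b" is buffered
print(r.accept(1, "a"))  # ['a', 'b'] - gap filled, contiguous run released
print(r.accept(3, "c"))  # ['c']
```

In the Kafka Streams version, the heap would live in the RocksDB state store and the flush would also be driven by a scheduled punctuator, so delayed events are eventually released even if a gap never closes.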
-----
💡 Speaker:
Parth Agarwal, Senior Software Engineer, Confluent
Talk:
Metrics pipeline using Kafka and Flink
Abstract:
Confluent's Observability Pipeline for handling metrics leverages Apache Kafka to process telemetry and operational metrics at massive scale. This talk offers a behind-the-scenes look at how Kafka powers real-time observability and analytics for Confluent Cloud, enabling features like monitoring, billing, alerting, and data science across distributed environments. The session will cover architecture patterns, consumption via Druid and the Metrics API, advanced pre-aggregation and streaming via Flink, and practical wins and challenges at scale.
Prerequisites: A basic understanding of Kafka and of observability concepts such as metrics.
-----
💡 Speaker:
TBC
Talk:
TBC
Abstract:
TBC