IN PERSON! Apache Kafka® Meetup Bangalore- Dec 2025
Details
Hello everyone! Join us for an IN PERSON Apache Kafka® meetup on Dec 13th starting at 11:00 AM, hosted by Giniminds in Bangalore!
📍 Venue:
Giniminds Solutions Private Limited
738, Sumo Sapphire,
3rd Floor, 15th Cross Road,
J P Nagar 6th Phase,
Bangalore - 560078
***
Agenda:
- 11:00 - 11:10: Welcome
- 11:10 - 11:50: Manohar Reddy, Senior Data Streaming Solution Specialist, Giniminds
- 11:50 - 12:30: Irtebat Shaukat, Senior Consulting Engineer, Confluent
- 12:30 - 12:40: Break
- 12:40 - 13:20: Vivek Sinha, Product Manager, StarTree
- 13:20 - 14:30: Lunch
***
💡 Speaker:
Manohar Reddy, Senior Data Streaming Solution Specialist, Giniminds
Talk:
Kafka + Flink as the Digital Nervous System for BFSI: Real-Time Decisions at Scale
Abstract:
Banks and insurers are racing to transition from overnight batch processing to real-time intelligence, yet legacy cores and fragmented integrations remain major blockers. This talk demonstrates how Apache Kafka and Apache Flink together form the “digital nervous system” for BFSI: Kafka as the durable event backbone, and Flink as the stateful stream processing engine powering fraud detection, risk scoring, and hyper-personalized experiences.
-----
💡 Speaker:
Irtebat Shaukat, Senior Consulting Engineer, Confluent
Talk:
Leveraging CDC and Kafka to Build Reliable Replication Pipelines
Abstract:
This session is a fast walkthrough of how CDC pipelines built on Kafka behave in real systems, followed by an extended Q&A. I will cover the essentials of CDC, the effects of failover, and why read-after-write consistency is difficult across databases such as Postgres, MySQL, and Cassandra, drawing on real failure scenarios: missing WAL/binlogs, replica promotion gaps, and downstream sink inconsistencies. Most of the time will be open Q&A focused on architectural tradeoffs, operational pitfalls, and patterns that ensure consistent reads across distributed services.
-----
💡 Speaker:
Vivek Sinha, Product Manager, StarTree
Talk:
Low-Latency Serving on Iceberg with Apache Pinot
Abstract:
Modern observability and clickstream systems are data firehoses: RUM, clickstream, and APM events stream in via Kafka, land in Apache Iceberg on the data lake, and then mostly sit there. Teams still need sub-second queries for dashboards, funnels, and drill-downs, but often end up spinning up yet another datastore, duplicating data, and maintaining a jungle of pipelines and ETL jobs.
This talk is about closing that gap with a serving layer on top of Iceberg, powered by Apache Pinot. I’ll walk through how clickstream and observability data flows from Kafka → Iceberg → Pinot, with Iceberg as the long-term source of truth and Pinot as the low-latency serving engine for real-time workloads.
We’ll look at key technical ideas that make this practical: Pinot’s indexing and pruning for selective reads, parallel prefetching of Iceberg data over object storage (like S3), and how to think about local vs. remote storage to balance cost and latency.
