IN PERSON! Apache Kafka® Meetup Bangalore - Jan 2026
Details
Hello everyone! Join us for an IN PERSON Apache Kafka® meetup on Jan 24 from 11:00 AM, hosted by New Relic in Bangalore!
📍 Venue:
New Relic
One India Pvt Ltd (Bangalore Office), 2nd floor, Signature, Pyramid Ln, Embassy Golf Links Business Park, Challaghatta, Bengaluru, Karnataka 560071
***
Agenda:
- 11:00 - 11:10: Welcome
- 11:10 - 11:50: Kumar Mallikarjuna, Senior Software Engineer, Confluent
- 11:50 - 12:30: Priyesh Srivastava, Cofounder and CTO, OnFinance
- 12:30 - 12:40: Break
- 12:40 - 13:20: Zameer Fouzan, Lead Developer Relations Engineer, New Relic
- 13:20 - 14:20: Lunch
***
💡 Speaker:
Priyesh Srivastava, Cofounder and CTO, OnFinance
Talk:
Oracle-cli: A Natural Language Framework for AI Resource Management with Kafka
Abstract:
Agent observability today is fragmented: business preferences are stated in Slack, traces are monitored in LangSmith, resource consumption lives in the Kubernetes metrics server, and token consumption sits with the inference provider. This talk looks at building a CLI that pulls these signals together so you can seamlessly monitor must-finish jobs through to completion.
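For a flavor of the problem space, here is a minimal, hypothetical sketch (not the actual Oracle-cli): a CLI loop that tails a Kafka topic of job-status events and flags must-finish jobs that fail. The agent-jobs topic, broker address, and event schema are all assumptions for illustration.

```python
# Hypothetical job-monitoring loop (not the actual Oracle-cli): tail a Kafka
# topic of job-status events and flag must-finish jobs that report failure.
# Topic name, broker address, and event schema are illustrative assumptions.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "agent-jobs",                       # assumed topic of job-status events
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for record in consumer:
    event = record.value
    if event.get("must_finish") and event.get("status") == "failed":
        # A real tool would retry, resubmit, or page someone here.
        print(f"ALERT: must-finish job {event['job_id']} failed")
    elif event.get("status") == "completed":
        print(f"job {event['job_id']} completed")
```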
-----
💡 Speaker:
Zameer Fouzan, Lead Developer Relations Engineer, New Relic
Talk:
Unlocking Observability in Apache Kafka-Based Systems with OpenTelemetry
Abstract:
Distributed tracing is essential for tracking requests across microservices. But when it comes to Kafka’s decoupled producers, consumers, and asynchronous processes, tracing a transaction from start to finish isn’t always straightforward. In this talk, we will go through how to monitor Kafka-based applications using distributed tracing with OpenTelemetry. By leveraging tools like Jaeger and New Relic, we’ll uncover how to gain a full view of your microservices, even in the face of Apache Kafka’s asynchronous nature. We’ll also walk through both automatic and manual instrumentation to capture rich telemetry.
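To make the manual-instrumentation idea concrete ahead of the talk, here is a minimal sketch in Python, assuming the opentelemetry-sdk and kafka-python packages; the topic, broker, and span names are placeholders. It starts a producer span and injects the trace context into the Kafka message headers, which is what lets a consumer on the other side continue the same trace despite the asynchronous hop.

```python
from kafka import KafkaProducer
from opentelemetry import trace
from opentelemetry.propagate import inject
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Wire up a tracer; in practice you'd export to Jaeger or New Relic via OTLP.
trace.set_tracer_provider(TracerProvider())
trace.get_tracer_provider().add_span_processor(
    BatchSpanProcessor(ConsoleSpanExporter())
)
tracer = trace.get_tracer(__name__)

producer = KafkaProducer(bootstrap_servers="localhost:9092")

# Start a PRODUCER span and inject its context into the message headers so
# the consumer can extract it and continue the trace.
with tracer.start_as_current_span("orders publish", kind=trace.SpanKind.PRODUCER):
    carrier = {}
    inject(carrier)  # writes the W3C traceparent header into the dict
    producer.send(
        "orders",
        value=b'{"order_id": 42}',
        headers=[(k, v.encode("utf-8")) for k, v in carrier.items()],
    )
    producer.flush()
```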
-----
💡 Speaker:
Kumar Mallikarjuna, Senior Software Engineer, Confluent
Talk:
Building Data Pipelines with Kafka and Flink SQL
Abstract:
Modern applications need to process events in real time while keeping their data pipelines simple, reliable, and able to scale with demand. In this session, we’ll explore how to build end-to-end streaming data pipelines using Apache Kafka and Flink SQL. We’ll show how Flink lets you define powerful streaming transformations, aggregations, and ML predictions with SQL.
You’ll learn how to:
- Model topics in Kafka as SQL tables
- Use Flink SQL to join, aggregate, and enrich data on the fly
You’ll leave with a blueprint for turning Kafka events into production-grade, continuously updated data pipelines using Flink SQL.
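For a taste of what that blueprint looks like, here is a minimal sketch using PyFlink's Table API (one possible setup, not necessarily the one the talk will use); the topic, broker, and schema are illustrative, and the Kafka SQL connector jar must be on the classpath.

```python
from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# Model a Kafka topic as a SQL table (topic, brokers, and schema are illustrative).
t_env.execute_sql("""
    CREATE TABLE orders (
        order_id STRING,
        amount   DOUBLE,
        ts       TIMESTAMP(3),
        WATERMARK FOR ts AS ts - INTERVAL '5' SECOND
    ) WITH (
        'connector' = 'kafka',
        'topic' = 'orders',
        'properties.bootstrap.servers' = 'localhost:9092',
        'scan.startup.mode' = 'earliest-offset',
        'format' = 'json'
    )
""")

# Aggregate on the fly: a continuously updating one-minute revenue rollup.
t_env.execute_sql("""
    SELECT window_start, COUNT(*) AS order_count, SUM(amount) AS revenue
    FROM TABLE(
        TUMBLE(TABLE orders, DESCRIPTOR(ts), INTERVAL '1' MINUTES))
    GROUP BY window_start, window_end
""").print()
```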
