
IN-PERSON: Apache Kafka® x Apache Flink® x Grafana x Apache Pinot™ Meetup

Hosted By
Alice R.

Details

Join us for an Apache Kafka® x Apache Flink® x Grafana x Apache Pinot™ Meetup on Tuesday, April 1st, starting at 5:30pm, hosted by Improving!

📍Venue:
222 Riverside Plaza, 15th floor (above Union Train Station), Chicago, IL

IMPORTANT: Bring your ID; it is required to enter the building. Security also needs your first and last name.
Participants will check in at the Improving table in the lobby and then head up to the 15th floor.

If you RSVP here, you do not need to RSVP again on the Chicago Kafka Meetup page.

🗓 Agenda:

  • 5:30pm - 6:00pm: Welcome & Networking (incl. food/drinks)
  • 6:00pm - 6:30pm: Neil Buesing, CTO and co-founder, Kinetic Edge
  • 6:30pm - 7:00pm: Joe Desmond, Senior Technical Trainer, Confluent
  • 7:00pm - 7:30pm: Ben Lumbert, Sr. Alliances Solutions Architect, StarTree
  • 7:30pm - 8:00pm: Q&A + Networking

💡 Speaker One:
Neil Buesing, CTO and co-founder, Kinetic Edge

Title of Talk:
Monitoring Kafka Clients with Grafana and Prometheus

Abstract:
Monitoring Apache Kafka clients with Prometheus and Grafana can be challenging; come and find out ways to add visibility to your Kafka-based applications so you are not caught off guard when something goes wrong.
Key questions to be answered:
* KIP-714 Client Metrics and Observability - How can you leverage brokers to capture your client metrics so you don't have to scrape metrics from your clients directly?
* When you do want to scrape your Java-based client metrics, do you want to use the Prometheus Java Agent or Micrometer? How do you format your metric conventions to minimize churn, if you pivot and move to the other?
* Building dashboards takes time; how can I help you build them more quickly?
* What about OpenTelemetry metrics; is there something available in Grafana that I could use?
These questions, and many more, will be covered. Dashboards can be very helpful (and a lot of fun)!
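For attendees new to this setup, a minimal Prometheus scrape configuration for a Kafka client application that exposes metrics over HTTP (for example, via the Prometheus JMX exporter Java agent) might look like the sketch below. This is background context, not the speaker's material; the job name, host, and port are illustrative assumptions:

```yaml
# prometheus.yml -- minimal sketch; job name and target host/port are assumptions
scrape_configs:
  - job_name: "kafka-clients"        # hypothetical job name
    scrape_interval: 15s
    static_configs:
      - targets: ["app-host:7071"]   # HTTP port the JMX exporter agent is assumed to serve on
```

Prometheus would then pull client metrics from that endpoint on each scrape interval, and Grafana dashboards can be built on top of the resulting series.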

Bio:
Neil is the CTO and co-founder of Kinetic Edge. He has led software projects in a variety of industries, including e-commerce, insurance, and healthcare. His passion is helping others modernize their real-time event streaming, and he enjoys speaking about many aspects of the Kafka ecosystem.

💡 Speaker Two:
Joe Desmond, Senior Technical Trainer, Confluent

Title of Talk:
Apache Kafka® and Apache Flink®, a Data Stream Processing Powerhouse Duo

Abstract:
Apache Kafka® and Apache Flink® are a powerful combination for real-time data streaming. Kafka provides a highly scalable and durable infrastructure for data streaming at any scale, while Flink offers massively scalable, in-memory stateful processing across both bounded and unbounded data streams. Together, they enable lightning-fast consume, process, and produce capabilities, providing real-time data insights that deliver significant value to customers around the world.

In this talk, we will:

  • Provide an introductory overview of Apache Kafka® and Apache Flink®
  • Provide an introduction to stream processing with Apache Kafka®
  • Talk about the synergy between Apache Kafka® and Apache Flink®
  • Discuss the ING Bank fraud detection use case to showcase the powerhouse duo at work in the real world
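The Kafka-Flink synergy above can be sketched with a short, hypothetical Flink SQL example (not taken from the talk): a table backed by a Kafka topic, queried as an unbounded stream. All topic, table, and field names are illustrative assumptions:

```sql
-- Minimal sketch: a Flink SQL table backed by a Kafka topic (names are hypothetical)
CREATE TABLE transactions (
  account_id STRING,
  amount     DOUBLE,
  ts         TIMESTAMP(3)
) WITH (
  'connector' = 'kafka',
  'topic' = 'transactions',
  'properties.bootstrap.servers' = 'localhost:9092',
  'scan.startup.mode' = 'earliest-offset',
  'format' = 'json'
);

-- An unbounded (streaming) aggregation: results update continuously
-- as new events arrive on the Kafka topic
SELECT account_id, COUNT(*) AS txn_count, SUM(amount) AS total
FROM transactions
GROUP BY account_id;
```

Kafka supplies the durable, scalable event log; Flink maintains the aggregation state in memory and emits updated results as the stream evolves.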

Bio:
Joe Desmond, currently a Senior Technical Instructor in Confluent's Professional Services and Education organization, has 28 years of experience providing operations and technology training to Fortune 1000 companies globally. He has collaborated with numerous organizations, including Cerner, BT, VMware, AWS, and Tricentis. This is Joe's second tenure as a Sr. Technical Instructor at Confluent.

💡 Speaker Three:
Ben Lumbert, Sr. Alliances Solutions Architect, StarTree

Title of Talk:
The “KFP” Stack: A scalable, end-to-end, open source solution for real-time data pipelines

Abstract:
In today’s on-demand economy there is a growing need to produce, process, serve, and visualize real-time data in user-facing applications. Maintaining ultra-low latency and stability as adoption increases is essential to keeping users happy and engaged. The “KFP” stack (Kafka + Flink + Pinot) provides everything you need to develop a real-time data pipeline that scales to meet the demands of your applications as they grow.
In this talk, we will:
  • Review the components of a real-time data pipeline
  • Discuss the “KFP” stack and the role of each technology in the pipeline
  • Take a deeper look into Apache Pinot and how it's optimized for real-time data
  • Walk through building a simple user-facing gaming dashboard using the “KFP” stack and Streamlit
Understanding Kafka is a solid start to working with real-time data, but it’s only one piece of the puzzle. The “KFP” stack is a great option for understanding the whole picture!
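To give a feel for the serving layer in such a pipeline, a user-facing gaming dashboard might issue a Pinot query like the hypothetical sketch below (not from the talk); the table and column names are invented for illustration:

```sql
-- Hypothetical Pinot query behind a gaming leaderboard panel;
-- game_events would be a real-time table ingesting from Kafka (assumed)
SELECT player_id, SUM(points) AS total_points
FROM game_events
WHERE ts > ago('PT1H')   -- events from the last hour
GROUP BY player_id
ORDER BY total_points DESC
LIMIT 10;
```

Pinot is built to answer this kind of aggregation at interactive latencies even as event volume grows, which is what makes it a fit for the user-facing end of the pipeline.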

Bio:
Ben is a Senior Solutions Architect on the Alliances team at StarTree and has been working in the data and analytics space for over 20 years across a variety of industries. He is passionate about collaborating with technology vendors to build integrations that streamline data-driven workloads with a recent focus on real-time analytics and Apache Pinot.

***
DISCLAIMER
We do not cater to those under the age of 21.
If you are interested in hosting/speaking at a future meetup, please email community@confluent.io

Chicago Apache Flink Meetup (CHAF)
Improving
222 Riverside Plaza, 15th floor (above Union Train Station) · Chicago, IL