
Microservices to Analyze Event Streams & The Art and Science of Capacity Planning

Hosted By
Alice R. and Joe S.

Details

The parking garage entrance is located at the rear of the building off Shell Blvd. You may park in any unreserved stall on any level of the garage as long as it is not marked "Retail Parking". If you park on level 2 or 3, take the stairs or elevator down to the lobby.

Join us for an Apache Kafka meetup on April 30th starting at 6:00pm, hosted by Guidewire in Foster City! The address, agenda, and speaker information can be found below. See you there!

Agenda:

6:00pm - 6:30pm: Networking, Pizza and Drinks
6:30pm - 7:15pm: Anirudh Ramanathan, Rockset
7:15pm - 8:00pm: Gwen Shapira, Confluent
8:00pm - 8:30pm: Additional Q&A and Networking

-----

First Talk

Speaker:
Anirudh Ramanathan

Title:
Developing Stateful Microservices to Analyze Event Streams in Kafka

Abstract:
Tracking key events and analyzing these event streams are critical to many enterprises. We highlight how organizations are using Kafka as a fast, reliable messaging system alongside Rockset, a serverless search and analytics engine, to create stateful microservices to analyze their event streams.

In this talk, we will discuss a stateful microservices architecture, where events from multiple channels are collected and streamed into Kafka and continuously ingested into Rockset with no explicit schema or metadata specification required. Developers then use serverless compute frameworks, like AWS Lambda, in conjunction with serverless data management from Rockset to build microservices to derive insights on the data from Kafka. Organizations can leverage this pattern to support low-latency queries on event streams, providing immediate insight on their business.
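The pattern in the abstract can be sketched in miniature. This is an illustrative toy, not the speaker's implementation: in the real architecture, `produce_event` would publish to a Kafka topic and the event store would be a Rockset collection ingesting that topic continuously, with a Lambda-style function issuing low-latency queries. All names here are hypothetical stand-ins.

```python
import json
from collections import Counter

# Hypothetical stand-in for the pipeline: in practice this would be a
# Kafka topic feeding a Rockset collection, not an in-memory list.
EVENT_STORE = []

def produce_event(channel, event_type, payload):
    # Events from multiple channels are serialized as JSON records;
    # no schema or metadata is declared up front.
    EVENT_STORE.append(json.dumps(
        {"channel": channel, "type": event_type, **payload}))

def handler(event_type):
    # A Lambda-style microservice: stateless compute querying the
    # serverless store for an aggregate over the event stream.
    events = [json.loads(e) for e in EVENT_STORE]
    return Counter(e["channel"] for e in events if e["type"] == event_type)

produce_event("web", "click", {"page": "/pricing"})
produce_event("mobile", "click", {"page": "/pricing"})
produce_event("web", "view", {"page": "/docs"})
print(handler("click"))  # click counts broken down by channel
```

The point of the pattern is that the microservice holds no state of its own; the "stateful" part lives in the continuously ingested store, so compute can scale independently.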

---

Second Talk

Speaker:
Gwen Shapira

Title:
How much Kafka do you need? The art and science of capacity planning

Abstract:
Whether you are new to Apache Kafka or an experienced expert, capacity planning questions show up at every step of the adoption journey. Architects and developers need to know how many topics and partitions to use, what the right message sizes are, and how to plan for scale. Operations teams need to know how many brokers, which hardware, and even how many clusters are required. In a short talk that will attempt to cover lots of ground, we'll go over the basics of planning Kafka deployments.
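To give a flavor of the arithmetic such planning involves, here is a back-of-the-envelope sketch. The rule of thumb (size partition counts for both producer and consumer throughput, size broker counts for replicated retention) is widely used, but every number below is an illustrative assumption, not guidance from the talk.

```python
import math

def min_partitions(target_mbps, per_partition_producer_mbps,
                   per_partition_consumer_mbps):
    # Enough partitions to sustain the target throughput on both the
    # producer side and the (often slower) consumer side.
    return max(math.ceil(target_mbps / per_partition_producer_mbps),
               math.ceil(target_mbps / per_partition_consumer_mbps))

def min_brokers_for_storage(write_mb_per_sec, retention_days,
                            replication_factor, disk_per_broker_gb):
    # Retained data = write rate * retention window * replication factor.
    total_gb = (write_mb_per_sec * 86400 * retention_days
                * replication_factor) / 1024
    return math.ceil(total_gb / disk_per_broker_gb)

# Assumed workload: 200 MB/s target, 10 MB/s in and 20 MB/s out
# per partition.
print(min_partitions(200, 10, 20))  # -> 20
# Assumed: 50 MB/s writes, 7-day retention, RF=3, 4 TB per broker.
print(min_brokers_for_storage(50, 7, 3, 4096))  # -> 22
```

Real plans also account for headroom, failure tolerance, and page-cache sizing, which is exactly the art-versus-science gap the talk addresses.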

Bio:
Gwen Shapira is a principal data architect at Confluent, where she helps customers achieve success with their Apache Kafka implementation. She has 15 years of experience working with code and customers to build scalable data architectures, integrating relational and big data technologies. Gwen currently specializes in building real-time reliable data-processing pipelines using Apache Kafka. Gwen is an Oracle Ace Director, the coauthor of Hadoop Application Architectures, and a frequent presenter at industry conferences. She is also a committer on Apache Kafka and Apache Sqoop. When Gwen isn’t coding or building data pipelines, you can find her pedaling her bike, exploring the roads and trails of California and beyond.

-----

Don't forget to join our Community Slack Team! https://launchpass.com/confluentcommunity

Want to speak or host? community@confluent.io

NOTE: Please do not sign up for this event if you are under 18.

Bay Area Apache Kafka® Meetup