Past Meetup

Rethinking Stream Processing with Apache Kafka

87 people went

Details

Building 2 is the entrance for Deloitte.

Use the RED elevator behind the glass door.

Join us for an Apache Kafka meetup on June 13th at 5:45pm in Berlin. Our guest speaker Michael Noll will be talking about Rethinking Stream Processing with Apache Kafka: Applications vs. Clusters, Streams vs. Databases. The details can be found below. See you there!

-----

****Please click here to register**** if you are interested in attending the meetup! We ask this because we are also promoting the meetup outside meetup.com, and we need to keep track of the people who are actually attending so we can plan for room capacity and catering. Thank you so much! ( http://go.confluent.io/2017.06.13-EMEA-Germany-BerlinMeetup_Register.html )

-----

Agenda:
5:45pm: Doors open
6:00pm - 6:45pm: Rethinking Stream Processing with Apache Kafka: Applications vs. Clusters, Streams vs. Databases - Michael Noll, Confluent
6:45pm - 7:45pm: Networking, Pizza and Drinks

-----

Speaker:
Michael Noll

Bio:
Michael Noll is a product manager at Confluent, the company founded by the creators of Apache Kafka. Previously, Michael was the technical lead of DNS operator Verisign’s big data platform, where he grew the Kafka, Hadoop, and Storm-based infrastructure from zero to petabyte-sized production clusters spanning multiple data centers—one of the largest big data infrastructures in Europe at the time. He is a well-known tech blogger in the big data community (www.michael-noll.com). In his spare time, Michael serves as a technical reviewer for publishers such as Manning and is a frequent speaker at international conferences, including ACM SIGIR, ApacheCon, and Strata. Michael holds a PhD in computer science.

Title:
Rethinking Stream Processing with Apache Kafka: Applications vs. Clusters, Streams vs. Databases

Abstract:
Modern businesses have data at their core, and this data is changing continuously. How can we harness this torrent of information in real time? The answer is stream processing, and the technology that has emerged as the core platform for streaming data is Apache Kafka. Among the thousands of companies that use Kafka to transform and reshape their industries are the likes of Netflix, Uber, PayPal, and Airbnb, but also established players such as Goldman Sachs, Cisco, and Oracle.

Unfortunately, today’s common architectures for real-time data processing at scale suffer from complexity: many technologies must be stitched together and operated side by side, and each individual technology is often complex in its own right. This has led to a strong discrepancy between how we, as engineers, would like to work and how we actually end up working in practice.

In this session we talk about how Apache Kafka helps you to radically simplify your data architectures. We cover how you can now build normal applications to serve your real-time processing needs — rather than building clusters or similar special-purpose infrastructure — and still benefit from properties such as high scalability, distributed computing, and fault tolerance, which are typically associated exclusively with cluster technologies. We walk through common use cases showing that stream processing in practice often requires database-like functionality, and how Kafka allows you to bridge the worlds of streams and databases when implementing your own core business applications (inventory management for large retailers, patient monitoring in healthcare, fleet tracking in logistics, etc.), for example in the form of event-driven, containerized microservices.
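The "streams vs. databases" bridge mentioned above rests on the stream-table duality: a table is just the latest value per key, materialized by replaying a changelog stream. A minimal sketch of that idea in plain Python (no Kafka required; the keys and values here are purely illustrative, echoing the abstract's inventory-management example):

```python
def materialize(changelog):
    """Fold a changelog stream of (key, value) updates into a table.

    A later value for a key overwrites the earlier one;
    a None value acts as a tombstone and deletes the key.
    """
    table = {}
    for key, value in changelog:
        if value is None:
            table.pop(key, None)   # tombstone: remove the key
        else:
            table[key] = value     # upsert: latest value wins
    return table

# Illustrative inventory updates for a retailer:
changelog = [
    ("sku-1", 10),
    ("sku-2", 5),
    ("sku-1", 7),     # later update overwrites the earlier count
    ("sku-2", None),  # tombstone: sku-2 is removed from stock
]

print(materialize(changelog))  # {'sku-1': 7}
```

Replaying the same changelog always reconstructs the same table, which is why a stream processor can rebuild its local state after a failure — the fault-tolerance property the talk attributes to this architecture.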

-----

Special thanks to our hosts for this event.

Don't forget to join our Community Slack Team ( https://slackpass.io/confluentcommunity )!

If you would like to speak or host our next event please let us know! [masked]

NOTE: We are unable to cater for any attendees under the age of 18. Please do not sign up for this event if you are under 18.