
Jay Kreps and Kai Waehner talk about Kafka

Hosted By
Alice R.

Details

Join us for an Apache Kafka meetup on September 7th from 6:00pm, hosted by Volkswagen in Munich. The agenda and speaker information can be found below. See you there!

-----

Please register here (https://www.confluent.io/meetup/apache-kafka-munich-sep2017/) if you are interested in attending the meetup. We ask for this because we are also promoting the meetup outside meetup.com, and we need an accurate headcount for room capacity and catering. Thank you so much!


-----

Agenda:
6:00pm: Doors open
6:00pm - 6:05pm: Intro by Volkswagen
6:05pm - 6:40pm: Presentation #1: Rise of Streaming Platforms, Jay Kreps, Confluent
6:40pm - 7:15pm: Presentation #2: Highly Scalable Machine Learning in Real Time with Apache Kafka’s Streams API, Kai Waehner, Confluent
7:15pm - 8:15pm: Additional Q&A, Networking, Pizza and Drinks

-----

First Talk

Speaker:
Jay Kreps

Bio:
Jay Kreps is the CEO of Confluent, Inc., the company backing the popular Apache Kafka messaging system. Prior to founding Confluent, he was the lead architect for data infrastructure at LinkedIn. He is among the original authors of several open source projects, including Project Voldemort (a key-value store), Apache Kafka (a distributed messaging system), and Apache Samza (a stream processing system).

Title:
The Rise of the Streaming Platform

Abstract:

What happens if you take everything that is happening in your company — every click, every database change, every application log — and make it all available as a real-time stream of well structured data?

Jay will discuss the experience at LinkedIn and elsewhere moving from batch-oriented ETL to real-time streams using Apache Kafka. He’ll talk about how the design and implementation of Kafka were driven by the goal of acting as a real-time platform for event data. Jay will cover some of the challenges of scaling Kafka to hundreds of billions of events per day at LinkedIn, supporting thousands of engineers, applications, and data systems in a self-service fashion.

He’ll describe how real-time streams can become the source of ETL into Hadoop or a relational data warehouse, and how real-time data can supplement the role of batch-oriented analytics in Hadoop or a traditional data warehouse.

Jay will also describe how applications and stream processing systems such as Storm, Spark, or Samza can make use of these feeds for sophisticated real-time data processing as events occur.
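For readers who have not used Kafka before, here is a minimal sketch of what consuming such an event feed looks like with the standard Java consumer client; the broker address, group id and topic name ("page-clicks") are hypothetical placeholders for illustration, not details from the talk:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ClickStreamReader {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Broker address and group id are placeholders for illustration.
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "clickstream-demo");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // "page-clicks" is a hypothetical topic holding click events.
            consumer.subscribe(Collections.singletonList("page-clicks"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // Each record is one event in the real-time feed.
                    System.out.printf("user=%s event=%s%n", record.key(), record.value());
                }
            }
        }
    }
}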

---

Second Talk

Speaker:
Kai Waehner

Bio:
Kai Waehner works as a Technology Evangelist at Confluent. Kai’s main areas of expertise lie within the fields of Big Data Analytics, Machine Learning, Integration, Microservices, Internet of Things, Stream Processing and Blockchain. He is a regular speaker at international conferences such as JavaOne, O’Reilly Software Architecture and ApacheCon, writes articles for professional journals, and shares his experiences with new technologies on his blog (http://www.kai-waehner.de/blog). Contact and references: kontakt@kai-waehner.de / @KaiWaehner / www.kai-waehner.de

Title:
Highly Scalable Machine Learning in Real Time with Apache Kafka’s Streams API

Abstract:
Intelligent real-time applications are a game changer in any industry. This session explains how companies from different industries build intelligent real-time applications. The first part of the session explains how to build analytic models with R, Python or Scala, leveraging open source machine learning / deep learning frameworks like TensorFlow or H2O. The second part discusses how to deploy these analytic models to your own applications or microservices by leveraging the Apache Kafka cluster and Kafka’s Streams API, instead of setting up a new, complex stream processing cluster. The session focuses on live demos and shares lessons learned for executing analytic models in a highly scalable, mission-critical and performant way.
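As a rough illustration of the deployment pattern the abstract describes (a trained model embedded directly inside a Kafka Streams application instead of being served from a separate cluster), here is a minimal sketch; the Model interface, the broker address and the topic names are hypothetical stand-ins for a model exported from a framework such as TensorFlow or H2O, not code from the talk:

import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class ModelScoringApp {

    /** Hypothetical wrapper around a model exported from TensorFlow or H2O. */
    interface Model {
        String score(String features);
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "model-scoring-demo");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        // Placeholder: in practice the trained model would be loaded once at startup,
        // e.g. from the application's classpath, and reused for every event.
        Model model = features -> "prediction-for:" + features;

        StreamsBuilder builder = new StreamsBuilder();
        // "sensor-events" and "predictions" are hypothetical topic names.
        KStream<String, String> events = builder.stream("sensor-events");
        events.mapValues(model::score)   // apply the embedded model to every event
              .to("predictions");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}

Because the Streams API is just a library, this application scales by running more instances of the same JAR; no separate stream processing cluster is required, which is the point the abstract makes.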

-----

Special thanks to Volkswagen (http://autogramm.volkswagen.de/12_14/aktuell/aktuell_14.html) for hosting this event.

Don't forget to join our Community Slack Team (https://slackpass.io/confluentcommunity)!

If you would like to speak at or host our next event, please let us know: community@confluent.io

NOTE: We are unable to cater for any attendees under the age of 18. Please do not sign up for this event if you are under 18.

Munich Apache Kafka® Meetup by Confluent
Volkswagen BigData Lab
Ungererstraße 69, 80805 · München