Join us for an Apache Kafka meetup on June 28th from 5:00pm, hosted by Ippen Digital (https://www.ippen-digital.de) in Munich. Our guest speakers Michael Noll and Kai Waehner will be talking about Apache Kafka. The details can be found below. See you there!
****Please click here to register**** if you are interested in attending the meetup! We are also promoting the meetup outside meetup.com, and we need to keep track of who is actually attending so we can plan for room capacity and catering. Thank you so much! (http://go.confluent.io/2017.06.28-EMEA-Germany-MunichMeetup_Register.html)
5:00pm: Doors open
5:00pm - 5:30pm: Arrival and networking
5:30pm - 6:30pm: Rethinking Stream Processing with Apache Kafka: Applications vs. Clusters, Streams vs. Databases - Michael Noll, Confluent
6:30pm - 7:30pm: How to Apply Machine Learning Models to Real Time Processing with Apache Kafka Streams - Kai Waehner, Confluent
7:30pm - 8:30pm: Pizza, Drinks and Networking
Michael Noll is a product manager at Confluent, the company founded by the creators of Apache Kafka. Previously, Michael was the technical lead of DNS operator Verisign’s big data platform, where he grew the Kafka, Hadoop, and Storm-based infrastructure from zero to petabyte-sized production clusters spanning multiple data centers—one of the largest big data infrastructures in Europe at the time. He is a well-known tech blogger in the big data community (www.michael-noll.com). In his spare time, Michael serves as a technical reviewer for publishers such as Manning and is a frequent speaker at international conferences, including ACM SIGIR, ApacheCon, and Strata. Michael holds a PhD in computer science.
Rethinking Stream Processing with Apache Kafka: Applications vs. Clusters, Streams vs. Databases
Modern businesses have data at their core, and this data is changing continuously. How can we harness this torrent of information in real time? The answer is stream processing, and the technology that has become the core platform for streaming data is Apache Kafka. Among the thousands of companies that use Kafka to transform and reshape their industries are the likes of Netflix, Uber, PayPal, and Airbnb, but also established players such as Goldman Sachs, Cisco, and Oracle.
Unfortunately, today's common architectures for real-time data processing at scale suffer from complexity: many technologies must be stitched together and operated side by side, and each individual technology is often complex by itself. This has led to a strong discrepancy between how we, as engineers, would like to work and how we actually end up working in practice.
In this session we talk about how Apache Kafka helps you to radically simplify your data architectures. We cover how you can now build normal applications to serve your real-time processing needs — rather than building clusters or similar special-purpose infrastructure — and still benefit from properties such as high scalability, distributed computing, and fault tolerance, which are typically associated exclusively with cluster technologies. We discuss common use cases showing that stream processing in practice often requires database-like functionality, and how Kafka allows you to bridge the worlds of streams and databases when implementing your own core business applications (inventory management for large retailers, patient monitoring in healthcare, fleet tracking in logistics, etc.), for example in the form of event-driven, containerized microservices.
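To give a flavor of the streams-meet-databases idea the abstract describes, here is a minimal conceptual sketch in plain Python — not the actual Kafka Streams API (where this would be a KStream aggregated into a KTable backed by a state store), and the inventory data is purely illustrative:

```python
# Conceptual sketch of stream/table duality: a changelog stream of
# (key, delta) events is folded into a table-like state that can
# answer point queries. In Kafka Streams this pattern is a KStream
# aggregated into a KTable; here plain Python stands in.

def fold_stream_into_table(events):
    """Fold a stream of (key, delta) events into a key -> total table."""
    table = {}
    for key, delta in events:
        table[key] = table.get(key, 0) + delta
    return table

# Hypothetical inventory events for a retailer: restocks and sales
events = [("sku-1", 10), ("sku-2", 5), ("sku-1", -3)]
inventory = fold_stream_into_table(events)
print(inventory)  # {'sku-1': 7, 'sku-2': 5}
```

The point of the duality is that the "table" is just the continuously updated result of consuming the stream, so the same changelog can rebuild the state anywhere it is replayed.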
Kai Waehner works as Technology Evangelist at Confluent. Kai's main areas of expertise lie within the fields of Big Data Analytics, Machine Learning, Integration, Microservices, Internet of Things, Stream Processing and Blockchain. He is a regular speaker at international conferences such as JavaOne, O'Reilly Software Architecture or ApacheCon, writes articles for professional journals, and shares his experiences with new technologies on his blog (http://www.kai-waehner.de/blog). Contact and references: [masked] / @KaiWaehner / www.kai-waehner.de (http://www.kai-waehner.de/)
How to Apply Machine Learning Models to Real Time Processing with Apache Kafka Streams
Big Data and Machine Learning are key for innovation in many industries today. The first part of this session explains how to build analytic models with R, Python or Scala, leveraging open source machine learning / deep learning frameworks like Apache Spark, TensorFlow or H2O.ai. The second part discusses the deployment of these analytic models to your own applications or microservices, leveraging the Apache Kafka cluster and Kafka Streams. The session focuses on live demos and shares lessons learned for executing analytic models in a highly scalable and performant way.
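As a hedged illustration of the deploy-the-model-inside-the-stream pattern the abstract describes: a pre-trained model is applied record by record as the stream is consumed. This is a plain-Python sketch, not code from the talk — a hand-coded linear scorer stands in for a model exported from Spark/TensorFlow/H2O, and a Python list stands in for a Kafka topic:

```python
# Hedged sketch: applying a pre-trained model inside a stream processor.
# In the talk's setting, records would arrive via a Kafka topic and the
# scoring would run inside a Kafka Streams application; here a plain list
# stands in for the stream, and a hand-coded linear model (illustrative
# weights) stands in for an exported Spark/TensorFlow/H2O model.

WEIGHTS = [0.4, 0.6]   # illustrative "pre-trained" coefficients
BIAS = -0.5

def score(features):
    """Apply the model to one record's feature vector."""
    return sum(w * x for w, x in zip(WEIGHTS, features)) + BIAS

def process_stream(records, threshold=0.5):
    """Consume records one by one; flag those whose score exceeds the threshold."""
    return [(features, score(features) > threshold) for features in records]

# Illustrative stream of feature vectors
stream = [[1.0, 2.0], [0.1, 0.2]]
print(process_stream(stream))  # [([1.0, 2.0], True), ([0.1, 0.2], False)]
```

The key design point the talk highlights is that the model is embedded in the stream processor itself, so each record is scored with low latency as it arrives rather than via a separate request/response model server.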
Special thanks to Ippen Digital (https://www.ippen-digital.de) who are hosting us for this event.
Don't forget to join our Community Slack Team (https://slackpass.io/confluentcommunity)!
If you would like to speak or host our next event please let us know! [masked]
NOTE: We are unable to cater for any attendees under the age of 18. Please do not sign up for this event if you are under 18.