
Our First Kafka Meetup with 2 amazing speakers from Confluent

Hosted By
Ale M.

Details

Join us for our first Zürich Apache Kafka meetup on September 19th from 6:00pm, hosted by Scigility at Europaallee 41, Room 101-103, 8004 Zürich. The agenda and speaker information can be found below. See you there!

-----

Agenda:

6:00pm: Doors open
6:00pm - 6:30pm: Networking, Pizza and Drinks
6:30pm - 7:00pm: Rethinking Stream Processing with Apache Kafka: Applications vs. Clusters, Streams vs. Databases - Michael Noll, Confluent
7:00pm - 7:30pm: Highly Scalable Machine Learning and Deep Learning in Real Time with Apache Kafka’s Streams API - Kai Waehner, Confluent
7:30pm - 8:00pm: Q&A and additional Networking

-----

Speaker:
Michael Noll

Bio:
Michael Noll is a product manager at Confluent, the company founded by the creators of Apache Kafka. Previously, Michael was the technical lead of DNS operator Verisign’s big data platform, where he grew the Kafka, Hadoop, and Storm-based infrastructure from zero to petabyte-sized production clusters spanning multiple data centers—one of the largest big data infrastructures in Europe at the time. He is a well-known tech blogger in the big data community (www.michael-noll.com). In his spare time, Michael serves as a technical reviewer for publishers such as Manning and is a frequent speaker at international conferences, including ACM SIGIR, ApacheCon, and Strata. Michael holds a PhD in computer science.

Title:
Rethinking Stream Processing with Apache Kafka: Applications vs. Clusters, Streams vs. Databases

Abstract:
Modern businesses have data at their core, and this data is changing continuously. How can we harness this torrent of information in real time? The answer is stream processing, and the technology that has become the core platform for streaming data is Apache Kafka. Among the thousands of companies that use Kafka to transform and reshape their industries are the likes of Netflix, Uber, PayPal, and Airbnb, as well as established players such as Goldman Sachs, Cisco, and Oracle.

Unfortunately, today's common architectures for real-time data processing at scale suffer from complexity: many technologies must be stitched together and operated in concert, and each individual technology is often complex by itself. This has led to a strong discrepancy between how we, as engineers, would like to work and how we actually end up working in practice.

In this session we talk about how Apache Kafka helps you radically simplify your data architectures. We cover how you can now build normal applications to serve your real-time processing needs, rather than building clusters or similar special-purpose infrastructure, and still benefit from properties such as high scalability, distributed computing, and fault tolerance, which are typically associated exclusively with cluster technologies. We walk through common use cases to show that stream processing in practice often requires database-like functionality, and we discuss how Kafka allows you to bridge the worlds of streams and databases when implementing your own core business applications (inventory management for large retailers, patient monitoring in healthcare, fleet tracking in logistics, etc.), for example in the form of event-driven, containerized microservices.
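To make the "applications, not clusters" idea concrete, here is a minimal sketch (not taken from the talk; the inventory use case, topic names, and data types are illustrative assumptions) of a Kafka Streams application. It is a plain Java program that aggregates a stream of inventory movements into a continuously updated, table-like view of current stock levels:

    import java.util.Properties;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.kstream.KStream;

    public class InventoryApp {
        public static void main(String[] args) {
            // Standard client configuration; no special processing cluster to operate.
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "inventory-app");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
            props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.Long().getClass());

            StreamsBuilder builder = new StreamsBuilder();
            // Hypothetical topic of (productId, quantityChange) events.
            KStream<String, Long> movements = builder.stream("inventory-movements");
            movements
                .groupByKey()
                .reduce(Long::sum)   // the stream becomes a table: productId -> current stock
                .toStream()          // ...and back into a stream of table updates
                .to("inventory-levels");

            KafkaStreams streams = new KafkaStreams(builder.build(), props);
            streams.start();
            Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
        }
    }

Because the heavy lifting is delegated to Kafka itself, scaling out is just starting more instances of this program: they automatically share the input partitions and take over each other's work on failure.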

-----

Speaker:
Kai Waehner

Bio:
Kai Waehner works as Technology Evangelist at Confluent. Kai's main areas of expertise lie within the fields of Big Data Analytics, Machine Learning, Integration, Microservices, the Internet of Things, Stream Processing, and Blockchain. He is a regular speaker at international conferences such as JavaOne, O'Reilly Software Architecture, and ApacheCon, writes articles for professional journals, and shares his experiences with new technologies on his blog (http://www.kai-waehner.de/blog). Contact and references: kontakt@kai-waehner.de / @KaiWaehner / www.kai-waehner.de

Title:
Highly Scalable Machine Learning and Deep Learning in Real Time with Apache Kafka’s Streams API

Abstract:
Intelligent real-time applications are a game changer in any industry. This session explains how companies from different industries build such applications. The first part of the session explains how to build analytic models with R, Python, or Scala, leveraging open source machine learning and deep learning frameworks such as TensorFlow or H2O. The second part discusses how to deploy these models to your own applications or microservices by leveraging the Apache Kafka cluster and Kafka's Streams API, instead of setting up a new, complex stream processing cluster. The session focuses on live demos and shares lessons learned for executing analytic models in a highly scalable, mission-critical, and performant way.
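As a rough illustration of the deployment pattern described above (a sketch, not code from the talk: the model interface, scoring logic, and topic names are hypothetical), a model trained offline can be loaded once at startup and embedded directly in a Kafka Streams application, so every incoming event is scored in-process with no remote model server:

    import java.util.Properties;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;

    public class ScoringApp {
        // Hypothetical stand-in for a model exported from TensorFlow, H2O, etc.
        interface AnalyticModel {
            double score(String features);
        }

        public static void main(String[] args) {
            AnalyticModel model = loadModel(); // load once at startup, not per record

            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "model-scoring-app");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
            props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

            StreamsBuilder builder = new StreamsBuilder();
            builder.<String, String>stream("sensor-events")
                   // Apply the embedded model to every event as it arrives.
                   .mapValues(features -> String.valueOf(model.score(features)))
                   .to("scored-events");

            KafkaStreams streams = new KafkaStreams(builder.build(), props);
            streams.start();
            Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
        }

        private static AnalyticModel loadModel() {
            // Placeholder: a real application would deserialize a trained model
            // artifact here (e.g. an H2O POJO or a TensorFlow model wrapper).
            return features -> features.length() * 0.1; // dummy scoring logic
        }
    }
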

-----

Special thanks to Scigility for hosting this event.

Don't forget to join our Community Slack Team (https://slackpass.io/confluentcommunity)!

If you would like to speak at or host our next event, please let us know at community@confluent.io.

NOTE: We are unable to cater for any attendees under the age of 18. Please do not sign up for this event if you are under 18.
