Apache Kafka in Action & GDPR for Your Datastore


# Apache Kafka in Action: Let's Build a Streaming Data Pipeline! (Robin Moffatt)
Have you ever thought that you needed to be a programmer to do stream processing and build streaming data pipelines? Think again!
Apache Kafka is a distributed, scalable, and fault-tolerant streaming platform, providing low-latency pub-sub messaging coupled with native storage and stream processing capabilities. Integrating Kafka with RDBMS, NoSQL, and object stores is simple with Kafka Connect, which is part of Apache Kafka. KSQL is the open-source SQL streaming engine for Apache Kafka; it makes it possible to build stream processing applications at scale using a familiar SQL interface.
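To give a flavour of that SQL interface, here is a minimal KSQL sketch. The topic name, columns, and JSON format are assumptions for illustration only, and the exact syntax varies between KSQL and later ksqlDB versions:

```sql
-- Register an existing Kafka topic (for example one populated from an RDBMS
-- by a Kafka Connect source) as a stream that can be queried with SQL.
-- Topic name, columns, and format here are hypothetical.
CREATE STREAM orders (order_id VARCHAR, customer_id VARCHAR, amount DOUBLE)
  WITH (KAFKA_TOPIC='orders', VALUE_FORMAT='JSON');

-- A continuous query over the stream: new events are returned as they arrive.
SELECT order_id, customer_id, amount FROM orders;
```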
In this talk we’ll explain the architectural reasoning for Apache Kafka and the benefits of real-time integration, and we’ll build a streaming data pipeline using nothing but our bare hands, Kafka Connect, and KSQL.
Gasp as we filter events in real time! Be amazed at how we can enrich streams of data with data from RDBMS! Be astonished at the power of streaming aggregates for anomaly detection!
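For a rough idea of what those three steps can look like in KSQL, here is a sketch under assumed names: the `orders` stream and `customers` table (fed from an RDBMS via Kafka Connect) and the thresholds are invented for illustration, not taken from the talk:

```sql
-- 1. Filter events in real time: keep only the large orders.
CREATE STREAM big_orders AS
  SELECT * FROM orders WHERE amount > 100;

-- 2. Enrich the stream with reference data from an RDBMS
--    (the customers topic is assumed to be loaded via Kafka Connect).
CREATE TABLE customers (id VARCHAR, name VARCHAR, country VARCHAR)
  WITH (KAFKA_TOPIC='customers', VALUE_FORMAT='JSON', KEY='id');

CREATE STREAM orders_enriched AS
  SELECT o.order_id, o.amount, c.name, c.country
  FROM orders o
  LEFT JOIN customers c ON o.customer_id = c.id;

-- 3. Streaming aggregate for anomaly detection: flag customers placing an
--    unusually high number of orders per minute.
CREATE TABLE possible_fraud AS
  SELECT customer_id, COUNT(*) AS order_count
  FROM orders
  WINDOW TUMBLING (SIZE 1 MINUTE)
  GROUP BY customer_id
  HAVING COUNT(*) > 10;
```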
# Bio
Robin is a Developer Advocate at Confluent, the company founded by the original creators of Apache Kafka®, as well as an Oracle Groundbreaker Ambassador and ACE Director (alumnus). His career has always involved data, from the old worlds of COBOL and DB2, through the worlds of Oracle and Hadoop, and into the current world with Kafka. His particular interests are analytics, systems architecture, performance testing and optimization. He blogs at http://cnfl.io/rmoff and http://rmoff.net/ (and previously http://ritt.md/rmoff) and can be found tweeting grumpy geek thoughts as @rmoff. Outside of work he enjoys drinking good beer and eating fried breakfasts, although generally not at the same time.
# GDPR Compliance for Your Datastore (Philipp Krenn)
The General Data Protection Regulation (GDPR) is changing how you can handle data in Europe. But what does this actually mean? The first part of this talk gives an overview of the implications of the GDPR, which affects every software project with a connection to Europe. These include users' right to see, edit, and export their data, the right to be forgotten, and more. The second part looks at what this means for actual software projects, using the specific use case of logging. The main focus here is how to stay GDPR compliant while still being able to use the data for security and operations.
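As a purely illustrative sketch (not taken from the talk), here is roughly what some of those obligations can translate to against a relational datastore. The schema and values are hypothetical, and the pseudonymisation function depends on your database (the example uses `digest()` from PostgreSQL's pgcrypto extension):

```sql
-- Right of access / data portability: export everything held about one user.
SELECT * FROM users WHERE user_id = 42;

-- Right to be forgotten: erase (or anonymise) that user's personal data.
DELETE FROM users WHERE user_id = 42;

-- Logging: keep logs usable for security and operations without storing raw
-- PII, e.g. by pseudonymising the user identifier with a keyed hash.
INSERT INTO request_log (logged_at, path, user_pseudonym)
VALUES (now(), '/checkout', encode(digest('app-secret' || '42', 'sha256'), 'hex'));
```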
PS: This talk does not replace legal advice or a deeper examination of the topic. It gives you an overview and pointers to relevant techniques, but you need to discuss the implementation for your project with your own legal counsel.
# Sponsors
Thanks to Confluent for sponsoring drinks & snacks, and to StockWerk for hosting us.
