***PLEASE ENSURE YOU SIGN UP WITH YOUR FIRST AND LAST NAME AND BRING YOUR PHOTO ID TO SECURE ENTRANCE TO THE BUILDING***
5:00pm: Doors open
5:00pm - 6:00pm: Pizza, Drinks and Networking
6:00pm - 6:45pm: Kai Waehner, Confluent - Processing IoT Data from End to End with MQTT and Apache Kafka
6:45pm - 7:30pm: Will Bleker and Gary Stewart, ING - Pipelining the heroes with Kafka and Graph
7:30pm - 7:45pm: Short break
7:45pm - 8:30pm: Mic Hussey, Confluent - Apache Kafka and KSQL in Action: Let's Build a Streaming Data Pipeline!
8:30pm - 9:30pm: Additional Q&A and Networking
Kai Waehner works as a Technology Evangelist at Confluent: [masked] / @KaiWaehner / www.kai-waehner.de
Processing IoT Data from End to End with MQTT and Apache Kafka
This session discusses end-to-end use cases such as connected cars, smart homes, and healthcare sensors, where Internet of Things (IoT) devices are integrated with enterprise IT using open source technologies and standards. MQTT is a lightweight messaging protocol for IoT. However, MQTT is not built for high scalability, long-term storage, or easy integration with legacy systems. Apache Kafka is a highly scalable distributed streaming platform that ingests, stores, processes and forwards high volumes of data from thousands of IoT devices.
This session presents the Apache Kafka open source ecosystem as a streaming platform for processing IoT data. See a live demo of how MQTT brokers like Mosquitto or RabbitMQ integrate with Kafka, and how you can even connect MQTT clients to Kafka without an MQTT broker. Learn how to analyze the IoT data either natively on Kafka with Kafka Streams/KSQL or in external big data systems such as Spark, Flink or Elasticsearch, leveraging Kafka Connect.
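The talk's demo uses real MQTT brokers and Kafka connectors, but the core mapping step, turning an MQTT message into a keyed Kafka-style record, can be sketched in plain Python. This is a minimal illustration, not the speaker's actual code; the topic layout (`car/<id>/telemetry`) and field names are assumptions.

```python
import json

def mqtt_to_kafka_record(mqtt_topic: str, payload: bytes) -> dict:
    """Map one MQTT message to a keyed record ready for a Kafka producer.

    Assumes MQTT topics shaped like 'car/<device_id>/telemetry' and
    JSON payloads. Keying by device_id keeps each device's events
    ordered within one Kafka partition.
    """
    device_id = mqtt_topic.split("/")[1]   # extract '<device_id>' segment
    value = json.loads(payload)            # parse the JSON sensor payload
    value["device_id"] = device_id         # enrich the event with its source
    return {"key": device_id, "value": value}

# Example: a telemetry message from car 42
record = mqtt_to_kafka_record("car/42/telemetry", b'{"speed_kmh": 88}')
```

In a real bridge, a `paho-mqtt` callback would call this function and hand the result to a `confluent-kafka` producer; Confluent's MQTT connectors do the equivalent without custom code.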
ING Chapter lead & Middleware Engineer
ING Platform Architect, Distributed Data
Kafka, MQTT, Graph, KSQL and more!
To fill a gap in our database offerings (e.g. graph), we drew on our past experience of bringing NoSQL into the financial industry. As with NoSQL, we started with two use cases: one small and one big. Small enough to learn, prove, share and deliver; big enough to challenge, imagine, impress and expand.
With challenging requirements for availability, scalability and global reach, we needed to reconsider our architecture. A strong starting principle for customer-facing services is to adopt a masterless architecture where possible. In our use cases, availability is often more important than consistency; however, as time goes by and data quality degrades, consistency becomes more problematic.
Imagine an architecture that removes throughput as a challenge, eliminates migrations, and ensures consistency over time by means of re-deployments. We call this the cache cattle pipeline, and we started viewing our 'datastore' as multiple technologies, including Apache Kafka, that together provide a total solution meeting our demands for global reach.
In our talk, we will share the use cases, an architectural overview, and the paradigms we adopted to bring graph databases to life in our organisation.
Mic is a Systems Engineer at Confluent
Apache Kafka and KSQL in Action: Let's Build a Streaming Data Pipeline!
Hopefully you're already familiar with Apache Kafka, the massively scalable technology that lets you build robust data pipelines, and you've come across the plethora of Kafka connectors that make it easy to join the ends of your pipeline to other systems such as databases or big data stores. But what do you do in the middle? Would you like to be able to transform, enrich and filter the data as it moves along the pipeline? Just break out your compiler!
Only kidding: using KSQL it's possible to achieve all of this without opening Eclipse at all...
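To make the "transform, enrich and filter in the middle" idea concrete: KSQL expresses it declaratively with a statement like `CREATE STREAM ... AS SELECT ... FROM ... WHERE ...;`. The sketch below shows the same filter-and-enrich logic as plain Python over an in-memory stream; the stream name, fields, and VAT rate are illustrative assumptions, not anything from the talk.

```python
# Roughly what a KSQL statement such as
#   CREATE STREAM big_orders AS
#     SELECT orderid, amount * 1.21 AS amount_incl_vat
#     FROM orders WHERE amount > 100;
# would do, written as an ordinary Python generator (names assumed):

def enrich_and_filter(orders, threshold=100, vat=1.21):
    """Filter orders above a threshold and enrich them with a VAT-inclusive amount."""
    for order in orders:
        if order["amount"] > threshold:          # WHERE amount > 100
            yield {                              # SELECT orderid, amount * 1.21 AS ...
                "orderid": order["orderid"],
                "amount_incl_vat": round(order["amount"] * vat, 2),
            }

result = list(enrich_and_filter([
    {"orderid": 1, "amount": 50},
    {"orderid": 2, "amount": 200},
]))
```

The difference in practice: KSQL runs this continuously against Kafka topics as new events arrive, with no compile step, which is the point of the talk.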