Join us for an Apache Kafka meetup on July 24th from 7:15pm, hosted by Crate.io at the Digital Eatery in Berlin. The address, agenda and speaker information can be found below. See you there!
19:15 - Doors Open
19:15 - 20:00 - Mika Naylor - Local Weather Visualisation with IoT, Kafka and CrateDB
20:00 - 20:45 - Robin Moffatt - Apache Kafka and KSQL in Action: Let’s Build a Streaming Data Pipeline!
20:45 - 21:00 - Additional Q&A
21:00 - 21:30 - Pulled Pork & Pulled Veggie in a Mini Ciabatta, Drinks and Networking
Mika Naylor, Developer, Crate
Mika is a developer at Crate.io (http://crate.io/), working primarily with infrastructure, Kubernetes
and Docker. She is also a maintainer of Black, a Python code formatter.
Outside of software she enjoys hardware tinkering, monochromatic colour palettes and
reading on cybernetics, solarpunk, and the politics of technology.
Title of the Talk:
Local Weather Visualisation with IoT, Kafka and CrateDB (https://crate.io/products/cratedb/)
Ever wanted to build your own weather data service? In this talk, Mika will walk through
the whole process: building your own IoT-enabled weather station, setting up
weather data ingestion pipelines with Kafka and CrateDB, and creating a simple
visualisation API with Flask.
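As a taste of the ingestion step, a weather station could publish each reading as a JSON message to a Kafka topic. The sketch below shows one possible payload shape; the topic name, sensor ID, field names, and the kafka-python client are illustrative assumptions, not details from the talk:

```python
import time

def weather_reading(sensor_id, temperature_c, humidity_pct, ts_ms=None):
    """Build the JSON-serialisable payload a station could publish to Kafka."""
    return {
        "sensor_id": sensor_id,
        "temperature": round(temperature_c, 2),   # degrees Celsius
        "humidity": round(humidity_pct, 2),       # relative humidity, %
        "timestamp": ts_ms if ts_ms is not None else int(time.time() * 1000),
    }

# With a broker running, a station loop might publish each reading.
# (kafka-python is one commonly used client; the broker address and
# topic name below are placeholders.)
#
# import json
# from kafka import KafkaProducer
# producer = KafkaProducer(
#     bootstrap_servers="localhost:9092",
#     value_serializer=lambda v: json.dumps(v).encode("utf-8"),
# )
# producer.send("weather", weather_reading("berlin-01", 21.4, 63.0))
# producer.flush()
```

Messages in this shape can then be consumed and written into a CrateDB table for the Flask API to query.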
Robin Moffatt, Developer Advocate, Confluent
Robin is a Developer Advocate at Confluent, the company founded by the creators of Apache Kafka, as well as an Oracle ACE Director and Developer Champion. His career has always involved data, from the old worlds of COBOL and DB2, through the worlds of Oracle and Hadoop, and into the current world with Kafka. His particular interests are analytics, systems architecture, performance testing and optimization. He blogs at https://www.confluent.io/blog/author/robin/ and http://rmoff.net/ (and previously http://ritt.md/rmoff ) and can be found tweeting grumpy geek thoughts as @rmoff. Outside of work he enjoys drinking good beer and eating fried breakfasts, although generally not at the same time.
Title of the Talk:
Apache Kafka and KSQL in Action: Let’s Build a Streaming Data Pipeline!
Have you ever thought that you needed to be a programmer to do stream processing and build streaming data pipelines? Think again!
Companies new and old are all recognising the importance of a low-latency, scalable, fault-tolerant data backbone, in the form of the Apache Kafka streaming platform. With Kafka, developers can integrate multiple sources and systems, which enables low-latency analytics, event-driven architectures and the population of multiple downstream systems. These data pipelines can be built using configuration alone.
In this talk, we’ll see how easy it is to stream data from a database such as Oracle into Kafka using the Kafka Connect API. In addition, we’ll use KSQL to filter, aggregate and join it to other data, and then stream this from Kafka out into multiple targets such as Elasticsearch and MySQL. All of this can be accomplished without a single line of code!
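To give a flavour of the kind of pipeline the talk builds, a KSQL session might declare a stream over a Kafka topic (populated by Kafka Connect) and then filter and aggregate it with plain SQL. The stream, topic, and column names below are illustrative placeholders, not taken from the talk:

```sql
-- Declare a stream over a Kafka topic fed by Kafka Connect
CREATE STREAM orders (order_id VARCHAR, customer VARCHAR, amount DOUBLE)
  WITH (KAFKA_TOPIC='orders', VALUE_FORMAT='JSON');

-- A continuous query: filter and aggregate, no application code required
CREATE TABLE big_spenders AS
  SELECT customer, SUM(amount) AS total
  FROM orders
  WHERE amount > 100
  GROUP BY customer;
```

The resulting table is itself backed by a Kafka topic, which Kafka Connect can stream onwards to targets such as Elasticsearch or MySQL.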
Why should Java geeks have all the fun?
Don't forget to join our Community Slack Team (https://launchpass.com/confluentcommunity) !
If you would like to speak or host our next event please let us know! [masked]
NOTE: We are unable to cater for any attendees under the age of 18. Please do not sign up for this event if you are under 18.