Staging Reactive Data Pipelines using Kafka as the Backbone


Details
Happy New Year, everyone! We are pleased to have Jaakko Pallari (https://twitter.com/lepovirta) from Cake Solutions (https://twitter.com/cakesolutions) as our first guest in 2017. Join us on the 18th of January to learn about building reactive data pipelines using the popular Kafka platform.
The session
Kafka has become the de facto platform for reliable and scalable distribution of high volumes of data. However, as a developer, it can be challenging to figure out the best architecture and consumption patterns for interacting with Kafka while meeting quality-of-service requirements such as high availability and delivery guarantees. It can also be difficult to understand the various streaming patterns and messaging topologies available in Kafka.
Cake Solutions builds highly distributed and scalable systems using Kafka as its core data pipeline. In this talk, Jaakko will present the patterns they've successfully employed in production and provide tools and guidelines to help other developers choose the most appropriate pattern for a given data processing problem. The key points of the presentation are:
• patterns for building reactive data pipelines
• high availability and message delivery guarantees
• clustering of application consumers
• topic partition topology
• offset commit patterns
• performance benchmarks
• a custom reactive, asynchronous, non-blocking Kafka driver
The speaker
Jaakko Pallari (https://twitter.com/lepovirta) is a software engineer at Cake Solutions Ltd (https://twitter.com/cakesolutions). He is passionate about the practical use of functional programming, robust software, and free and open source software. He started his career as a Java web developer, and he is currently responsible for developing a global-scale IoT platform using the SMACK stack tools.
Live stream: https://youtu.be/GuF1I76JXFo
Food and Beverages will be provided.
Don't forget to follow us on Twitter! @mcrgeeknights (https://twitter.com/mcrgeeknights)