We stumbled into the rabbit hole and found Kafka


Details
Does your company boast Big Data? Is it part of your startup's plan to have a data pipeline? Are you a freelancer who wants to add Big Data scaling to your portfolio?
We got you covered!
In our next meetup you will learn tips from battle-tested developers on managing and migrating big data pipelines that scale (by big data we mean billions of messages per day and more).
*** The talks are in English ***
Zoom link for the live broadcast: https://walkme.zoom.us/j/316256722
Agenda:
18:00 - 18:30: Pizza, Beer and networking.
18:30 - 19:10: We stumbled into the rabbit hole and found Kafka, Gadi Raymond, WalkMe
19:10 - 19:20: Break
19:20 - 20:00: Streaming Data Pipeline Using Google Cloud Infrastructure, Haim Cohen, Tikal
Talk #1:
We stumbled into the rabbit hole and found Kafka
Gadi Raymond, senior full-stack developer at WalkMe, will talk about his team's strategy for migrating a data pipeline of 2B events per day from RabbitMQ to Apache Kafka.
In this talk, you will learn why the team decided to move from RabbitMQ to Apache Kafka. We will go over and compare the two message brokers, weighing their pros and cons. Finally, Gadi will share practical tips on how to migrate from one pipeline to the other with zero downtime and zero data loss.
Talk #2:
Streaming Data Pipeline Using Google Cloud Infrastructure
Haim Cohen, BigData Tech Leader @ Tikal
Creating a near real-time data pipeline for billions of events and terabytes of data can be a challenging task. How can we serve requests with low latency and persist them all with an end-to-end latency of under one second? What are the do's and don'ts when rolling such a system out to production, and how do you monitor the hundreds of components involved? Come hear a real-life case of such a system. We will discuss the architecture, technologies, code, and DevOps concerns of the streaming data pipeline we created for one of the top ten mobile games in the US.