Past Meetup

Never-Ending Data Streams - Big Data with Storm, Kafka, Angular and D3.js

328 people went


Storm supports the construction of topologies that transform unbounded streams of data. Unlike Hadoop jobs, these transformations never stop; they keep processing data as it arrives.
Following the reviews and comments from our last meetup, we will focus on the "Experimenting using Micro-service to establish your Realtime BigData solution with Storm and Kafka" lecture and extend its scope with D3.js and an architecture case study.
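The "transformation that never stops" idea can be illustrated without Storm at all. Below is a minimal Python sketch, assuming nothing about Storm's actual API; all names and the sensor/temperature scenario are purely illustrative:

```python
import itertools

def sensor_stream():
    """Simulate an unbounded source: yields readings forever."""
    for i in itertools.count():
        yield {"sensor": "s1", "value": i}

def to_fahrenheit(stream):
    """A transformation that, like a Storm topology, never terminates
    on its own: it processes each reading as it arrives."""
    for reading in stream:
        yield {**reading, "value": reading["value"] * 9 / 5 + 32}

# A consumer only ever takes a finite prefix of the infinite stream.
first_three = list(itertools.islice(to_fahrenheit(sensor_stream()), 3))
print(first_three)
```

In Storm the same shape is distributed and fault-tolerant: the source becomes a spout, the transformation a bolt, and the wiring a topology that runs until it is explicitly killed.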

16:30-16:45 Gathering & Networking

16:45-17:40 Experimenting using Micro-service to establish your Realtime BigData solution with Storm and Kafka.
Kafka is a high-throughput distributed messaging system, and Storm is a distributed, fault-tolerant real-time computation system. Both technologies can be expanded elastically and transparently, without downtime. This session presents the main concepts of Kafka and Storm, and then shows how a simple stream-processing "micro-service" module is implemented and integrated with an existing application using these two technologies.
~45min By Yanai Franchi
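To give a feel for what such a module does, here is the classic word-count pipeline sketched as plain Python functions. This is a conceptual stand-in, not actual Storm code: in a real deployment the source would be a Kafka-backed spout and the stages would be Storm bolts, and all names below are illustrative:

```python
from collections import Counter

def sentence_spout():
    """Plays the role of a spout: emits tuples into the topology.
    In a real system this would read messages from a Kafka topic."""
    yield from ["storm processes streams", "kafka feeds storm"]

def split_bolt(sentences):
    """First bolt: split each sentence tuple into word tuples."""
    for sentence in sentences:
        yield from sentence.split()

def count_bolt(words):
    """Second bolt: keep a running count per word."""
    counts = Counter()
    for word in words:
        counts[word] += 1
    return counts

# Wire the stages together: spout -> split -> count.
counts = count_bolt(split_bolt(sentence_spout()))
print(counts["storm"])  # -> 2
```

The point of Storm is that each stage can run as many parallel instances across a cluster, with the framework handling routing and failure recovery.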

17:40-18:30 Visualizing Data streams using Angular & D3.js
Today's applications generate huge amounts of data. To be useful, that data has to be summarized and visualized concisely. In this talk we will learn about D3.js, the web developer's Swiss Army knife for visualizing and working with data. Then we will take it one step further and integrate our D3-based data visualizations into an AngularJS application.
~50min By Uri Shaked
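The summarization step the abstract mentions can be sketched independently of the rendering. Here is a hypothetical Python helper (not part of D3, which runs in the browser) that reduces raw values to the compact (bin, count) pairs a D3 bar chart or histogram would then draw:

```python
def histogram_bins(values, bin_width):
    """Summarize raw numeric values into sorted (bin_start, count)
    pairs: the kind of compact summary a chart can render directly."""
    bins = {}
    for v in values:
        start = (v // bin_width) * bin_width
        bins[start] = bins.get(start, 0) + 1
    return sorted(bins.items())

print(histogram_bins([1, 2, 5, 7, 7, 12], 5))  # -> [(0, 2), (5, 3), (10, 1)]
```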

18:30-18:40 Break

18:40-19:20 LivePerson BigData Case study
At LivePerson we collect a lot of customer data. The data is stored in Hadoop, where it can be used for batch processing and querying. Last year we introduced Kafka and Storm to complete the big data solution with real-time processing alongside batch processing.

In this lecture we will introduce the integration solution at LivePerson and address some important issues in it: 1) High availability; 2) Data consistency; 3) Data format and schema enforcement; 4) Auditing data integrity.

~40min By Ran Silberman