Storm, Kafka, and Hadoop - a Solution Architecture Pattern
As interest in Big Data processing grows and its value proposition becomes clearer, enterprises are undertaking efforts to extract knowledge from large volumes of data. Tools such as Hadoop have been instrumental in enabling companies to quickly process large volumes of enterprise data and derive insights from mountains of historical information. At the same time, many companies recognize the importance of efficiently processing large volumes of new data, deriving actionable intelligence in near real time as that data emerges. Organizations therefore face two seemingly distinct technical challenges: processing historical data while also delivering actionable intelligence from live data feeds.
- Discuss Enterprise challenges
- Business - Continuous insights from historical information and live data feeds
- Technology - Platform to adapt to, process, and correlate both large data volumes and new data streams
- Present Reference architecture - Demo
- Example solution that uses Hadoop, Kafka, and Storm in tandem to optimize processing
- Architecture paradigm and key implementation details of a prototype
- Maximize infrastructure and code reuse
- Share experimental results and scaling trends of a real-world application that uses the above architecture
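The pattern outlined above, combining a batch-computed view of historical data (the Hadoop side) with incremental updates from a live feed (the Kafka/Storm side), can be sketched in simplified form as follows. This is an illustrative Python sketch, not the prototype's actual code; all function and class names are hypothetical assumptions, and the real system would use MapReduce jobs, Kafka topics, and Storm topologies in place of these in-memory stand-ins.

```python
# Simplified sketch of the batch + streaming architecture pattern.
# build_batch_view stands in for an offline Hadoop-style job;
# SpeedLayer stands in for a Storm topology consuming a Kafka-like feed.
# All names here are illustrative, not taken from the prototype.

from collections import defaultdict

def build_batch_view(historical_events):
    """Batch layer: precompute per-key aggregates over historical data."""
    view = defaultdict(int)
    for key, count in historical_events:
        view[key] += count
    return view

class SpeedLayer:
    """Speed layer: incrementally fold live events into a real-time view."""
    def __init__(self):
        self.realtime_view = defaultdict(int)

    def process_event(self, key, count):
        # In the real system this logic would live in a Storm bolt,
        # fed by a spout reading from Kafka.
        self.realtime_view[key] += count

def query(batch_view, speed_layer, key):
    """Serving layer: merge batch and real-time views at query time."""
    return batch_view[key] + speed_layer.realtime_view[key]

# Usage: historical aggregates plus a live feed of new events
batch_view = build_batch_view([("clicks", 100), ("clicks", 50)])
speed = SpeedLayer()
speed.process_event("clicks", 3)
print(query(batch_view, speed, "clicks"))  # 153
```

The key design point is that the same aggregation logic (here, summing counts per key) is applied in both layers, which is one way to maximize code reuse across the batch and streaming paths.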