
Apache Hadoop and related technologies have enabled enterprises to collect and process huge amounts of data continuously. How can we apply machine learning tools (e.g., Vowpal Wabbit, LIBSVM) to those data on commodity hardware clusters? In this talk, we will examine architecture options based on Hadoop, Spark, and Storm, weighing the strengths and weaknesses of each. We will also illustrate how Hadoop, Spark, and Storm can work together to enable next-generation machine learning applications.
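To picture what training over a cluster's partitioned data involves, here is a minimal pure-Python sketch (not the speaker's code, and no real Hadoop/Spark APIs): each partition computes a partial gradient of a logistic-regression loss (the "map" step), the partials are summed (the "reduce" step), and the driver updates the weights. The data, function names, and hyperparameters are all illustrative assumptions.

```python
import math

# Hypothetical toy data, split into "partitions" as a distributed job would see it.
# Each example is (features, label) with label in {0, 1}.
partitions = [
    [((1.0, 2.0), 1), ((2.0, 1.0), 1)],
    [((-1.0, -2.0), 0), ((-2.0, -1.0), 0)],
]

def partial_gradient(w, partition):
    """Map step: gradient of the logistic loss over one data partition."""
    g = [0.0] * len(w)
    for x, y in partition:
        z = sum(wi * xi for wi, xi in zip(w, x))
        p = 1.0 / (1.0 + math.exp(-z))       # predicted probability of label 1
        for i, xi in enumerate(x):
            g[i] += (p - y) * xi
    return g

def train(partitions, dim=2, lr=0.1, epochs=50):
    """Driver loop: aggregate partial gradients, then update the weights."""
    w = [0.0] * dim
    n = sum(len(p) for p in partitions)
    for _ in range(epochs):
        # Reduce step: sum the per-partition gradients.
        grads = [partial_gradient(w, p) for p in partitions]
        total = [sum(g[i] for g in grads) for i in range(dim)]
        w = [wi - lr * gi / n for wi, gi in zip(w, total)]
    return w

weights = train(partitions)
```

In a real deployment, frameworks differ mainly in how this loop runs: Hadoop MapReduce re-reads data from disk each iteration, Spark caches partitions in memory across iterations, and Storm would instead apply updates incrementally as examples stream in.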

About the speaker:

Andy Feng is a Distinguished Architect in Yahoo!'s Hadoop group, a Committer on the Apache Storm project, and a Contributor to the Apache Spark project.