
Scale R to Big Data Using Hadoop and Spark

Hosted By
Phuc D.

Details

Outline:

· Set up a Spark cluster with R installed (R Server)

· Wrangle data that is inside HDFS using R

· Build and deploy a machine learning model using R

Code and Prep Work (if you want to follow along):

https://github.com/datasciencedojo/meetup/tree/master/scaling_r_to_big_data

Description:

R is currently one of the most popular data science languages in the world. However, it has always had constraints around scaling out to big data. What happened when you grew beyond a couple of gigabytes of data? You packed up your data and moved to something else: Python, Java, or Mahout, to name a few. Now it’s possible to stick with R throughout your production analysis, all the way to deployment, regardless of the data size.

The Apache projects, along with companies like Revolution Analytics, Microsoft, and H2O, showed us this year that distributed computing in R is possible. Today we’ll take a look at what the Microsoft stack is doing in terms of scaling R up to big data.

In this talk we will show you Microsoft R Server: a Hadoop or Spark cluster where R is installed on every node, equipped with distributed processing libraries that put each and every machine to work in parallel. We’ll show you how to run your normal native R code via SSH, and how to get an RStudio Server instance up and running on the cluster.
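As a rough sketch of what connecting looks like (an illustrative example, assuming Microsoft R Server 9.x with the RevoScaleR package on the cluster; exact function arguments vary by version):

```r
# RevoScaleR ships with Microsoft R Server
library(RevoScaleR)

# Create a Spark compute context; subsequent rx* calls are
# distributed across the cluster instead of running locally
cc <- rxSparkConnect(consoleOutput = TRUE)

# Your existing native R code still runs as-is in the local context;
# switch back with rxSetComputeContext("local") when needed
rxSetComputeContext(cc)
```

The appeal here is that the compute context is the only thing that changes: the same analysis script can run on a laptop or on the cluster.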

We’ll show you how to wrangle data out of HDFS and build machine learning models from your large dataset. Then we’ll show you how to package up that model and deploy it to an elastically scaled web service, so that anyone can call on it for predictions and insights.
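To give a flavor of that workflow, here is a hedged sketch using RevoScaleR and the mrsdeploy package (the dataset path, column names, endpoint URL, and credentials are all illustrative placeholders, not real values):

```r
library(RevoScaleR)

# Point RevoScaleR at HDFS; the path and schema are hypothetical
hdfs <- RxHdfsFileSystem()
flights <- RxTextData("/user/demo/flights.csv", fileSystem = hdfs)

# Distributed data wrangling and modeling: these rx* calls run
# in parallel across the cluster in a Spark compute context
rxSummary(~ ArrDelay + Distance, data = flights)
model <- rxLogit(Late ~ Distance + DayOfWeek, data = flights)

# Publish the trained model as a web service with mrsdeploy;
# the host, port, and service name are examples only
library(mrsdeploy)
remoteLogin("http://my-cluster:12800", username = "admin", password = "***")
api <- publishService("predictLate", model = model, v = "1.0.0")
```

Once published, any client that can make an HTTP request can call the service for scoring, which is what makes the "deploy to anyone" step in the outline possible.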

(If we have time we’ll show you how to visualize the data out of HDFS and into PowerBI)

Snacks and Refreshments will be served.

Data Science Dojo – Seattle