
How Spark can improve your Hadoop Cluster

Hosted By
Matthias B. and 2 others

Details

Apache Spark is a fast-growing framework for improving your big data infrastructure and the processing of your data. It is written in Scala and reduces the overhead of writing MapReduce jobs in pure Java. After this talk, you will be able to start experimenting with Apache Spark and will know the different parts and benefits of the framework.
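To give a feel for how much boilerplate Spark removes, here is a minimal word-count sketch in Scala (the local master setting and the file name input.txt are only illustrative assumptions); the equivalent Java MapReduce job needs separate Mapper, Reducer and driver classes.

import org.apache.spark.{SparkConf, SparkContext}

object WordCount {
  def main(args: Array[String]): Unit = {
    // Local mode is used here only so the sketch runs without a cluster.
    val conf = new SparkConf().setAppName("WordCount").setMaster("local[*]")
    val sc   = new SparkContext(conf)

    // The whole map/reduce pipeline is a few chained transformations.
    val counts = sc.textFile("input.txt")
      .flatMap(_.split("\\s+"))
      .map(word => (word, 1))
      .reduceByKey(_ + _)

    counts.take(10).foreach(println)
    sc.stop()
  }
}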

Brief Introduction to Apache Hadoop
Apache Spark Benefits

  • Caching

  • Lazy Evaluation (together with Caching, sketched in Scala after this list)

  • Spark Streaming

  • Machine Learning

  • GraphX

  • Spark SQL

  • Code Examples for the explained Spark Features in Scala

  • Example Stack of a Spark Infrastructure at Wer liefert was?
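
As a small taste of the first two points, here is a sketch of lazy evaluation and caching (it assumes an existing SparkContext named sc and a log file events.log, both made up for illustration):

// Transformations only build up a lineage; no data is read yet.
val lines  = sc.textFile("events.log")
val errors = lines.filter(_.contains("ERROR"))

// Ask Spark to keep this RDD in memory once it has been computed.
errors.cache()

// The first action triggers the actual work and fills the cache ...
val totalErrors = errors.count()

// ... later actions on the same RDD reuse the cached data instead of re-reading the file.
val sparkErrors = errors.filter(_.contains("Spark")).count()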

Scala Hamburg
Wer liefert was? GmbH
ABC-Straße 21, 20354 Hamburg