Stores, Monoids and Dependency Injection - Abstractions for Spark Streaming Jobs


Details
LIVE-STREAMING URL:
http://www.ustream.tv/channel/spark-meetup-jan-16-2014
Ryan Weald will be presenting an extended version of the talk he gave at Spark Summit 2013. Ryan is a Data Scientist at Sharethrough. You can check out the abstract (http://spark-summit.org/talk/weald-beyond-word-count-productionalizing-spark-streaming/), video (http://www.youtube.com/watch?v=OhpjgaBVUtU), and slides (http://spark-summit.org/wp-content/uploads/2013/10/Productionalizing-Spark-Streaming-Spark-Summit-2013-copy.pdf) from his talk at the summit.
In addition to Ryan's talk, Patrick Wendell will give a brief update on Spark. Look for mentions of Spark 0.9 and Shark 0.8.1.
Abstract:
One of the most difficult aspects of deploying Spark Streaming as part of your technology stack is maintaining all the code associated with your stream processing jobs. In this talk I will discuss the tools and techniques that Sharethrough has found most useful for maintaining a large number of Spark Streaming jobs. We will look in detail at the way monoids and Twitter's Algebird library can be used to create generic aggregations, as well as the way generic interfaces can be created for writing the results of streaming jobs to multiple data stores. Finally, we will look at the way dependency injection can be used to tie all the pieces together, enabling rapid development of new streaming jobs.
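For readers unfamiliar with the monoid pattern the abstract mentions, here is a minimal illustrative sketch in Python (all names here are hypothetical; the talk itself concerns Scala and Twitter's Algebird, not this code). The point is that one generic aggregation function can serve many jobs, because each job only supplies an identity element and an associative combine operation:

```python
from functools import reduce
from typing import Callable, Generic, TypeVar

T = TypeVar("T")

class Monoid(Generic[T]):
    """A monoid: an identity element plus an associative combine operation."""
    def __init__(self, zero: T, plus: Callable[[T, T], T]):
        self.zero = zero
        self.plus = plus

def aggregate(monoid: Monoid[T], values: list[T]) -> T:
    """Generic aggregation: one reduce covers counts, sums, maxima, etc.,
    with no job-specific aggregation code."""
    return reduce(monoid.plus, values, monoid.zero)

# Two different jobs, one aggregation function:
count_monoid = Monoid(0, lambda a, b: a + b)
max_monoid = Monoid(float("-inf"), max)

print(aggregate(count_monoid, [1, 1, 1]))      # 3
print(aggregate(max_monoid, [2.0, 9.0, 4.0]))  # 9.0
```

Because the combine operation is associative, aggregations like this can also be computed incrementally or in parallel across stream partitions, which is what makes the pattern attractive for streaming jobs.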
Bio:
Ryan Weald is a data scientist at Sharethrough, where he works on data infrastructure and services for real-time ad targeting and reporting. Ryan is passionate about machine learning, distributed systems, and building data-driven products. You can find him on Twitter at @rweald.