Deep learning on a mixed cluster w/ Deeplearning4J & Spark by François Garillot


Details
It looks like Santa came early this year!
As some of you may know, in a few weeks Barcelona will host the 30th Annual Conference on Neural Information Processing Systems - NIPS (https://nips.cc/) - and this will bring us another great international speaker!
Indeed, at the next meetup we have the pleasure of hosting François Garillot. His talk will explore how we can integrate GPU-optimized clusters with CPU-based ones using Deeplearning4J (https://deeplearning4j.org/).
Title:
Deep learning on a mixed cluster with Deeplearning4J and Spark
Abstract:
Running deep learning workloads distributed across several machines equipped with GPUs is the holy grail of training deep learning models fast. But if you're not starting a completely new project, you might be doing this with a preexisting cluster, perhaps originally designed to support Hadoop or Spark workloads. In this session, we will have a look at the communication model of these cluster installations, and at what it takes to make them hum with a JVM-friendly library such as Deeplearning4J. We'll give a few hints on the resource management possibilities that existing cluster managers such as YARN and Mesos offer for these new workloads. The audience will come away with a clearer idea of what to look for in a deep learning software library when it is considered in a distributed context.
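To give a flavour of the topic, here is a minimal Java sketch of distributed training with Deeplearning4J on Spark (not taken from the talk; the network, hyperparameters, and class/helper names here are illustrative placeholders, and exact builder options depend on the DL4J version). It wires a small feed-forward network into DL4J's Spark parameter-averaging training master and fits it on an RDD of DataSet objects:

import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.spark.impl.multilayer.SparkDl4jMultiLayer;
import org.deeplearning4j.spark.impl.paramavg.ParameterAveragingTrainingMaster;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.dataset.DataSet;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class SparkTrainingSketch {
    // Hypothetical helper: train a small network on an existing Spark cluster.
    public static void train(JavaSparkContext sc, JavaRDD<DataSet> trainingData) {
        // Ordinary (single-machine) DL4J network configuration.
        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
            .list()
            .layer(0, new DenseLayer.Builder().nIn(784).nOut(256)
                .activation(Activation.RELU).build())
            .layer(1, new OutputLayer.Builder(LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)
                .nIn(256).nOut(10).activation(Activation.SOFTMAX).build())
            .build();

        // Parameter averaging: each executor trains on its data partition,
        // and worker parameters are averaged every few minibatches.
        ParameterAveragingTrainingMaster tm =
            new ParameterAveragingTrainingMaster.Builder(32) // examples per DataSet object in the RDD
                .batchSizePerWorker(32)
                .averagingFrequency(5)
                .build();

        SparkDl4jMultiLayer sparkNet = new SparkDl4jMultiLayer(sc, conf, tm);
        sparkNet.fit(trainingData); // one distributed training epoch over the RDD
    }
}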
Bio:
When he's not contributing to the myriad of projects around Deeplearning4j, François Garillot works at Swisscom, where he analyzes the mobility of populations in Switzerland. Previously, he worked on Apache Spark Streaming's reliability at Lightbend. His interests span machine learning & deep learning – especially incremental models, approximation & hashing techniques, distributed optimization, and time series analysis. In his free time he also enjoys skiing, sailing, and hunting for good cheese.
