Redis-ML, Deep-Learning and Scale


Details
The May meeting of the San Francisco Redis Meetup will kick off a series of meetups on machine learning, statistics, and data analysis/data science. The first meetup in the series will be a joint talk featuring Tague Griffith from Redis Labs speaking on Redis-ML and Malo Marrec, co-founder of Clusterone, speaking on deep learning and distributed training.
Malo's Talk: A Tale of Scale - Speeding up Deep Learning with Distributed Training
Developing and training distributed deep learning models at scale is challenging. We will show how to overcome these challenges and successfully build and train a distributed deep neural network with TensorFlow.
First, we will present deep learning on distributed infrastructure and cover concepts such as experiment parallelism, model parallelism, and data parallelism. Then, we will discuss the limitations and challenges of each approach. Finally, we will give a hands-on demonstration of building and training distributed deep neural networks using TensorFlow's gRPC-based runtime (parameter and worker servers) on Clusterone.
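To give a flavor of the parameter-server/worker pattern the talk covers, here is a minimal sketch using TensorFlow's 1.x-era distributed API. It is not taken from the talk itself; the host addresses, model, and flag handling are placeholder assumptions.

```python
# Sketch of TensorFlow's gRPC-based distributed training with one parameter
# server ("ps") and two workers. Hostnames below are placeholders.
import tensorflow as tf

# Describe the cluster: which hosts play the "ps" and "worker" roles.
cluster = tf.train.ClusterSpec({
    "ps": ["ps0.example.com:2222"],
    "worker": ["worker0.example.com:2222", "worker1.example.com:2222"],
})

# Each process starts a gRPC server for its own role and task index
# (in practice these come from command-line flags or the environment).
job_name, task_index = "worker", 0
server = tf.train.Server(cluster, job_name=job_name, task_index=task_index)

if job_name == "ps":
    server.join()  # parameter servers just host and serve variables
else:
    # replica_device_setter pins variables to the ps and ops to this worker,
    # which is what makes the graph "data parallel" across workers.
    with tf.device(tf.train.replica_device_setter(
            worker_device="/job:worker/task:%d" % task_index,
            cluster=cluster)):
        x = tf.placeholder(tf.float32, [None, 784])
        y = tf.placeholder(tf.float32, [None, 10])
        logits = tf.layers.dense(x, 10)
        loss = tf.losses.softmax_cross_entropy(y, logits)
        global_step = tf.train.get_or_create_global_step()
        train_op = tf.train.GradientDescentOptimizer(0.01).minimize(
            loss, global_step=global_step)

    # MonitoredTrainingSession handles coordination and recovery; the chief
    # (task 0) initializes the variables shared on the parameter server.
    with tf.train.MonitoredTrainingSession(
            master=server.target,
            is_chief=(task_index == 0)) as sess:
        ...  # training loop: sess.run(train_op, feed_dict=...)
```

Each process runs the same script with a different job name and task index; the workers compute gradients in parallel while the parameter server holds the shared weights.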
Tague's Talk: Making Real-Time Predictive Decisions with Redis-ML
Most of the energy and attention in machine learning has focused on the model-training side of the problem. Multiple frameworks, in every language, give developers access to a host of data manipulation and training algorithms, but until recently developers had virtually no frameworks for building predictive engines from trained ML models. Most developers resorted to building custom applications, but building highly available, highly performant applications is difficult. Redis, in conjunction with the Redis-ML module, provides a server framework that lets developers build predictive engines from familiar, off-the-shelf components.
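As a taste of what serving a trained model from Redis-ML looks like, here is a minimal sketch using redis-py. It assumes a Redis server with the redis-ml module loaded; the key name and coefficients are illustrative, as if exported from a model trained elsewhere.

```python
# Minimal sketch: serving a linear regression model with Redis-ML.
# Assumes Redis is running locally with the redis-ml module loaded.
import redis

r = redis.Redis(host="localhost", port=6379)

# Store the trained model: ML.LINREG.SET key <intercept> <coefficients...>
# Here the (hypothetical) model is y = 2 + 3*x1 + 4*x2.
r.execute_command("ML.LINREG.SET", "predict:housing", 2.0, 3.0, 4.0)

# Any application server can now ask Redis for a real-time prediction.
y = r.execute_command("ML.LINREG.PREDICT", "predict:housing", 1.0, 1.0)
print(y)  # 2 + 3*1 + 4*1 = 9
```

Because the model lives in Redis, prediction inherits Redis's replication and availability story instead of requiring a custom serving application.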
6:00 - 6:30 pm - Food and Networking
6:30 - 7:45 pm - Talks
7:45 - 8:00 pm - Q&A, Wrap-up, and Networking
We'll have some cool giveaways at the end of the meetup, so be sure to show up - it will be a great night.
