Details

Modern machine learning workloads are compute-intensive and require distributed execution. Ray is an open-source, general-purpose distributed framework that easily scales Python applications and ML workloads from a laptop to a cluster. This talk will give an overview of Ray and cover its architecture, core concepts, and design patterns. We will demonstrate how Ray can scale training, hyperparameter tuning, and inference from a single node to a cluster, with tangible performance benefits.

Speaker Bio:
Jules S. Damji is a lead developer advocate at Anyscale Inc., an MLflow contributor, and co-author of Learning Spark, 2nd Edition.
He is a hands-on developer with over 25 years of experience and has worked at leading companies, such as Sun Microsystems, Netscape, @Home, Opsware/LoudCloud, VeriSign, ProQuest, Hortonworks, and Databricks, building large-scale distributed systems.
He holds a B.Sc. and an M.Sc. in computer science (from Oregon State University and Cal State Chico, respectively) and an M.A. in political advocacy and communication (from Johns Hopkins University).

Related topics

Machine Learning
Big Data
Data Mining
Data Science
Predictive Analytics

Sponsors

Booz Allen

DC2 Org Sponsor

GWU MS in Business Analytics

Nonparametric sponsor! Providing meeting space.

DC Tech Live

Live Stream Sponsor

Inbox America

Normal Sponsor!
