Deploying Machine Learning Pipeline at Scale In the Cloud


Details
The significant effort required to bring Machine Learning (ML) models into a usefully deployable form appears to be the main obstacle inhibiting the Cambrian explosion and adoption of ML products across industry verticals right now. The lack of a standardized approach for training and serving models at scale usually means it takes much longer (approximately 6-11 months) to get ML models developed by Data Scientists into a deployable form that can run continuously in production without glitches. This is not only unacceptable, it also inhibits the ability to iterate quickly and course-correct in cases where the models are not performing or behaving as expected in the production pipeline.
As a Google Cloud partner, our company MavenCode specializes in building pipelines that accelerate the process of getting customers' AI and ML workloads into a production-ready state, leveraging battle-tested approaches and frameworks provided by Google. In the past few months, we have been heavily invested in the Kubeflow open-source project and have built a process around the platform for orchestrating and deploying ML and AI models at scale.
In this presentation, I will walk you through the process of bootstrapping a Kubernetes cluster in the cloud for training and serving models at scale using Kubeflow, and discuss the lessons we have learned along the way. A minimal sketch of what such a pipeline definition looks like is shown below.
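To make the workflow concrete, here is a minimal sketch of a train-then-serve pipeline using the Kubeflow Pipelines (kfp v1) SDK. The component names, container images, and scripts (trainer, deployer, train.py, deploy.py) are hypothetical placeholders, not MavenCode's actual pipeline; the example only illustrates how steps are chained and compiled for a Kubeflow cluster.

```python
# Illustrative sketch: a two-step Kubeflow pipeline (kfp v1 SDK).
# All image names and scripts are hypothetical.
import kfp
from kfp import dsl


@dsl.pipeline(
    name="train-and-serve",
    description="Illustrative train-then-serve pipeline.",
)
def train_and_serve_pipeline(epochs: int = 10):
    # Training step: runs a hypothetical containerized training job.
    train = dsl.ContainerOp(
        name="train-model",
        image="gcr.io/my-project/trainer:latest",   # hypothetical image
        command=["python", "train.py"],
        arguments=["--epochs", epochs],
    )

    # Serving step: deploys the trained model; starts only after training.
    serve = dsl.ContainerOp(
        name="deploy-model",
        image="gcr.io/my-project/deployer:latest",  # hypothetical image
        command=["python", "deploy.py"],
    )
    serve.after(train)


if __name__ == "__main__":
    # Compile the pipeline to an archive that can be uploaded to the
    # Kubeflow Pipelines UI running on the cluster.
    kfp.compiler.Compiler().compile(train_and_serve_pipeline, "train_and_serve.yaml")
```

The compiled `train_and_serve.yaml` can then be uploaded through the Kubeflow Pipelines UI or submitted programmatically with the kfp client against the cluster bootstrapped in the talk.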
