
Modeling multi-device and scale-out in a compiler

Hosted By
Jen P.

Details

As machine learning models grow in complexity, executing them efficiently across multiple devices is crucial for scalability and performance.

In this talk, Tenstorrent engineer Tapasvi Patel explores compiler-based techniques for modeling multi-device execution and scale-out strategies. He will introduce a representation of multiple devices within the compiler and discuss how tensors and ML model layers can be expressed to enable automatic parallelization.

Learn about Tenstorrent's current strategies and goals for automated computation partitioning across devices, and see a demonstration of how to write ML model code that efficiently targets multiple devices using JAX (with PyTorch support planned). The approach aims to improve the scalability and efficiency of ML workloads in distributed environments.
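For a flavor of what this looks like in practice, below is a minimal JAX sharding sketch. The mesh axis name "data", the toy matmul layer, and the array shapes are illustrative assumptions, not taken from the talk description: the idea is that you place arrays on a device mesh with named shardings, and the compiler propagates the partitioning and inserts any cross-device communication for you.

import jax
import jax.numpy as jnp
import numpy as np
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

# Arrange all visible devices into a one-dimensional mesh along a "data" axis.
# (Axis name "data" is an illustrative choice, not from the event description.)
mesh = Mesh(np.array(jax.devices()), axis_names=("data",))

# Shard the batch dimension of the activations across devices;
# replicate the weights on every device.
x = jax.device_put(jnp.ones((8, 128)), NamedSharding(mesh, P("data", None)))
w = jax.device_put(jnp.ones((128, 64)), NamedSharding(mesh, P(None, None)))

# Under jit, the compiler propagates the input shardings through the
# computation and inserts any needed cross-device communication.
@jax.jit
def layer(x, w):
    return jnp.dot(x, w)

y = layer(x, w)
print(y.sharding)  # shows how the output ended up partitioned across devices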

Once you RSVP, please also register using the Zoom link!

Tenstorrent Developers