Model Mondays - On-Device & Local AI

Hosted By
Microsoft Reactor

Details
On-device inference is critical if you want to run AI models locally on your own hardware, e.g., for edge computing needs. Join us as we talk to Maanav Dalal about Foundry Local, a solution built on ONNX Runtime (supporting CPUs, NPUs, and GPUs) that takes you from prototype to production.
Explore the repo
Continue the conversation on the Discord
This session is part of a series! Check it out here
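As background for the session topic: ONNX Runtime targets different hardware (CPU, GPU, NPU) through "execution providers", and an application typically requests them in preference order with a CPU fallback. The provider names below are real ONNX Runtime identifiers, but the helper function is a hypothetical sketch of that selection logic, not Foundry Local's actual implementation:

```python
def pick_providers(preferred, available):
    """Return the preferred execution providers that are actually available
    on this machine, always keeping CPUExecutionProvider as a fallback
    (ONNX Runtime supports the CPU provider everywhere)."""
    chosen = [p for p in preferred if p in available]
    if "CPUExecutionProvider" not in chosen:
        chosen.append("CPUExecutionProvider")
    return chosen


# Example: prefer GPU, but only CPU is available on this machine.
providers = pick_providers(
    preferred=["CUDAExecutionProvider", "CPUExecutionProvider"],
    available=["CPUExecutionProvider"],
)
print(providers)  # ['CPUExecutionProvider']
```

In a real application the `available` list would come from `onnxruntime.get_available_providers()`, and the result would be passed to `onnxruntime.InferenceSession(model_path, providers=...)`.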

Microsoft Reactor Berlin Pop-Up
Online event