Model Mondays - On-Device & Local AI

Hosted By
Microsoft Reactor

Details
On-device inference is critical when you want to run AI models locally on your own hardware, for example to meet edge computing needs. Join us as we talk to Maanav Dalal about Foundry Local – a solution built on ONNX Runtime (supporting CPUs, NPUs & GPUs) that takes you from prototype to production.
🔎 Explore the repo
📢 Continue the conversation on the Discord
📌 This session is a part of a series! Check it out here

Microsoft Reactor Berlin Pop-Up
Online event
Link visible to attendees
FREE