The 7 Lines of Code You Need to Run Faster Real-time Inference
MLOps Community meetup #120! On January 25, we will be talking to
Adrian Boguszewski, AI Software Evangelist at Intel, about The 7 Lines of Code You Need to Run Faster Real-time Inference.
// Register at:
https://home.mlops.community/home/events
// Abstract:
You've already trained your great neural network. It reaches 99.9% accuracy and saves the world, so you would like to deploy it. However, it must run in real time and process data locally, and you don't want to build a web API. After all, you are a Data Scientist, not a Web Developer… So, is it possible to automatically optimize and run the network fast on the local hardware you have, not the hardware you wish you had? Absolutely!
During the talk, Adrian will present the OpenVINO Toolkit. You'll learn how to automatically convert a model using Model Optimizer and run inference with the Runtime. All the magic takes only seven lines of code. Afterwards, you'll get a step-by-step Jupyter notebook to try at home.
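To give a flavor of the two steps the abstract describes, here is a minimal sketch, assuming the OpenVINO Runtime Python API (openvino.runtime), a model already converted to IR format, and a placeholder input shape of (1, 3, 224, 224); the exact seven lines Adrian shows in the talk may differ.

    # Step 1 (command line): convert the trained model to OpenVINO IR with Model Optimizer,
    # e.g. from a hypothetical ONNX export:
    #   mo --input_model model.onnx

    # Step 2 (Python): load the converted model and run inference locally.
    import numpy as np
    from openvino.runtime import Core

    core = Core()                                       # access the OpenVINO Runtime
    model = core.read_model("model.xml")                # read the converted IR model
    compiled_model = core.compile_model(model, "CPU")   # compile for the local device
    output_layer = compiled_model.output(0)             # handle to the first output
    input_data = np.random.rand(1, 3, 224, 224).astype(np.float32)  # dummy input (assumed shape)
    result = compiled_model([input_data])[output_layer] # run inference
    print(result.shape)

Swapping "CPU" for another device name (or "AUTO") is how the same code targets whatever local hardware is available.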
// Bio:
AI Software Evangelist at Intel. Adrian graduated from the Gdansk University of Technology with a degree in Computer Science 6 years ago and then started his career in computer vision and deep learning. For the previous two years, as a team leader of data scientists and Android developers, Adrian was responsible for an application that takes a professional photo (for an ID card or passport) without leaving home. He is a co-author of the LandCover.ai dataset and has taught people how to do deep learning. His current role is to educate people about the OpenVINO Toolkit. In his free time, he’s a traveler. You can also talk with him about finance, especially investments.
// Final thoughts
Please feel free to drop any questions you may have beforehand into our Slack channel (https://go.mlops.community/slack)
Watch some old meetups on our YouTube channel:
https://www.youtube.com/channel/UCG6qpjVnBTTT8wLGBygANOQ
