Implementing Machine Learning At The IOT Edge

Details

Initially, the proposed architecture for IoT machine learning envisioned that edge devices would act purely as sensors, forwarding data to an artificial intelligence application running in the cloud. Over time, two issues have surfaced which indicate the need to push part or all of the application out toward the cloud edge.

Specifically:

1. Latency: In many cases, action based on observations must be taken within time intervals shorter than the typical round-trip delay to a cloud-based application.

2. Traffic volume: If the number of edge devices and applications grows as predicted, network traffic volume will overwhelm network capacity.

For some applications, it is possible to place increasing computing power outward from the central cloud, allowing the AI work to be pushed out to the cloud edge.

For many IoT applications, constraints of power consumption and battery life preclude such a solution. In those cases, a hybrid approach, in which the training portion and the execution portion of the machine learning application are split apart, is often a solution. The training portion, which requires extensive computing resources, remains in the cloud center, while the execution portion is implemented on the low-power edge device using algorithms optimized for such devices.
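As a rough sketch of this split (the simple autoregressive model and all names here are illustrative assumptions, not the presenter's actual implementation): the cloud side trains with full floating-point resources and exports the model as quantized 8-bit fixed-point coefficients, while the edge side runs inference using only the integer multiply-accumulate operations a low-power microcontroller handles well.

```python
import numpy as np

# --- Cloud side: train with full floating-point resources ---
def train_ar_model(series, order=4):
    """Fit autoregressive coefficients by least squares (cloud training)."""
    X = np.column_stack([series[i:len(series) - order + i] for i in range(order)])
    y = series[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def quantize(coeffs, scale=128):
    """Export coefficients as int8-range fixed-point values for the edge device."""
    return np.clip(np.round(coeffs * scale), -128, 127).astype(np.int32), scale

# --- Edge side: integer-only inference, as on a small microcontroller ---
def edge_predict(window, q_coeffs, scale):
    """Predict the next sample using only integer multiply-accumulate."""
    acc = 0
    for x, c in zip(window, q_coeffs):
        acc += int(x) * int(c)
    return acc // scale  # scale the fixed-point accumulator back down

# Train on a simple waveform in the "cloud", then infer at the "edge"
t = np.arange(200)
series = (100 * np.sin(t / 8)).astype(np.int32)
coeffs = train_ar_model(series.astype(float), order=4)
q_coeffs, scale = quantize(coeffs)
pred = edge_predict(series[-4:], q_coeffs, scale)
```

The point of the split is visible in the interfaces: only the small integer coefficient array crosses from cloud to device, and `edge_predict` needs no floating-point hardware at all.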

This presentation will discuss the implementation of a time-series machine learning application using an approach of this type. Note that this is a time-series application, which is more relevant to control applications than the more commonly discussed pattern-recognition AI applications.
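To illustrate what distinguishes a time-series application (the function name and parameters below are assumptions for illustration): rather than classifying independent samples, the model learns from sliding windows of consecutive observations, each paired with a future value to predict.

```python
import numpy as np

def make_windows(series, window, horizon=1):
    """Slice a 1-D time series into (input window, future target) training pairs.

    Each input row holds `window` consecutive samples; the target is the
    sample `horizon` steps past the end of that window.
    """
    n = len(series) - window - horizon + 1
    X = np.stack([series[i:i + window] for i in range(n)])
    y = series[window + horizon - 1: window + horizon - 1 + n]
    return X, y

# Example: windows of 3 samples, each predicting the next sample
X, y = make_windows(np.arange(10), window=3, horizon=1)
```

For control-oriented use, the `horizon` parameter matters: it must be at least as long as the loop's actuation delay for the prediction to be usable.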

The implementation uses TensorFlow running on the Azure cloud for the model-training portion, with the execution portion implemented on a low-power M4 microcontroller located at the cloud edge.