tinyML EMEA 2022, Michele Magno: In Sensor and On-device Tiny Learning for Next Generation of Smart Sensors
In Sensor and On-device Tiny Learning for Next Generation of Smart Sensors
Michele MAGNO, Head of the Project-based learning Center, ETH Zurich, D-ITET
TinyML encompasses technologies and applications, including hardware, algorithms, and software, capable of performing on-device sensor data analytics at extremely low power, typically in the mW range and below. It is an exciting area that is pushing IoT developers to build new AI-powered features for intelligent sensors and MCU devices and applications. TinyML typically relies on neural models that are either pre-trained or trained by developers themselves to equip IoT applications with intelligent capabilities. This approach is today well consolidated in areas such as computer vision, natural language processing, condition monitoring, activity recognition, etc.
Today, the majority of TinyML models are trained on powerful servers in the cloud, and the learning process does not target the limited resources of IoT devices.
Current TinyML models run inference directly on an embedded processor (e.g., a mobile processor or a low-power microcontroller). The TinyML model processes input data, such as images, text, or audio, on-device rather than sending that data to a server and performing the processing there.
Innovations like the TensorFlow Lite framework or X-Cube-AI make it possible to run pre-trained models on microcontrollers, as those models require only a small memory footprint and a few million operations per second. On the other hand, TinyML has so far mainly followed a train-offline-then-deploy assumption, with static models that cannot be adapted to newly collected data without cloud-based data collection and fine-tuning. Therefore, those models are not robust against concept drift, which can significantly reduce the accuracy of a TinyML workload under time-varying data conditions. This limitation significantly reduces the flexibility and overall accuracy in real application scenarios, to the point that frequent offline retraining can be required.
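To make the train-offline-then-deploy flow concrete, the following is a minimal sketch of an inference call with TensorFlow Lite for Microcontrollers; the model array name, tensor arena size, and operator list are illustrative assumptions and would depend on the model actually exported for the target MCU:

```cpp
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

// Flatbuffer model exported offline and compiled into the firmware
// (hypothetical symbol name; generated with e.g. xxd from a .tflite file).
extern const unsigned char model_data[];

// Static working memory for activations and intermediate tensors (size is a guess).
constexpr int kTensorArenaSize = 20 * 1024;
static uint8_t tensor_arena[kTensorArenaSize];

void run_inference(const float* input, int input_len) {
  const tflite::Model* model = tflite::GetModel(model_data);

  // Register only the operators the deployed model actually uses.
  static tflite::MicroMutableOpResolver<4> resolver;
  resolver.AddFullyConnected();
  resolver.AddRelu();
  resolver.AddSoftmax();
  resolver.AddReshape();

  static tflite::MicroInterpreter interpreter(model, resolver,
                                              tensor_arena, kTensorArenaSize);
  interpreter.AllocateTensors();

  // Copy sensor data into the input tensor and run the static, pre-trained model.
  TfLiteTensor* in = interpreter.input(0);
  for (int i = 0; i < input_len; ++i) in->data.f[i] = input[i];
  interpreter.Invoke();

  // Class scores are available in the output tensor; the weights never change on-device.
  TfLiteTensor* out = interpreter.output(0);
  (void)out;
}
```

Note how the weights are baked into the firmware: nothing in this loop can adapt the model to new data, which is exactly the limitation discussed above.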
A new research trend is based on Continuous Learning (CL), which in principle enables online, serverless adaptation. However, due to the compute- and memory-hungry requirements of current techniques, they are more suitable for smartphone deployment than for low-power, microwatt- to milliwatt-range devices.
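As an illustration of why on-device adaptation can fit a microcontroller budget, one common lightweight CL strategy is to freeze a pre-trained feature extractor and update only the final classifier from streaming labeled samples. The sketch below shows such an online softmax-regression update; the dimensions, learning rate, and function name are illustrative assumptions and not the specific algorithms presented in this talk:

```cpp
#include <cmath>

// Hypothetical sizes: FEAT_DIM features from a frozen backbone, N_CLASSES outputs.
constexpr int FEAT_DIM  = 64;
constexpr int N_CLASSES = 4;
constexpr float LR      = 0.01f;

static float W[N_CLASSES][FEAT_DIM];  // last-layer weights, updated online
static float b[N_CLASSES];            // last-layer biases

// One online SGD step on a single labeled sample:
// forward pass (softmax over a linear layer), then a cross-entropy gradient update.
void cl_update(const float* feat, int label) {
  float logits[N_CLASSES], probs[N_CLASSES];
  float max_logit = -1e30f, sum = 0.0f;

  for (int c = 0; c < N_CLASSES; ++c) {
    logits[c] = b[c];
    for (int i = 0; i < FEAT_DIM; ++i) logits[c] += W[c][i] * feat[i];
    if (logits[c] > max_logit) max_logit = logits[c];
  }
  for (int c = 0; c < N_CLASSES; ++c) {
    probs[c] = std::exp(logits[c] - max_logit);  // shift for numerical stability
    sum += probs[c];
  }
  for (int c = 0; c < N_CLASSES; ++c) probs[c] /= sum;

  // Gradient of cross-entropy w.r.t. logits is (p_c - 1{c == label}).
  for (int c = 0; c < N_CLASSES; ++c) {
    float err = probs[c] - (c == label ? 1.0f : 0.0f);
    for (int i = 0; i < FEAT_DIM; ++i) W[c][i] -= LR * err * feat[i];
    b[c] -= LR * err;
  }
}
```

Even this simple scheme needs only a few kilobytes of state and multiply-accumulate operations per update, which is why mapping such updates onto DSP-capable sensor processors is attractive.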
Thus, there is a huge opportunity to improve those techniques and optimize them for ultra-low-power tiny devices that can be deployed in sensors, such as the STMicroelectronics ISPU or ARM Cortex-M4 cores.
This talk will present preliminary, promising results on CL algorithms running in-sensor and on ARM Cortex-M microcontrollers, with on-device learning on audio and 6-axis inertial sensor data. We will demonstrate the benefits of CL by exploiting the DSP capabilities of ARM Cortex-M cores and the STMicroelectronics ISPU in sensors.