Machine Learning

tinyML On Device Learning Forum – Song Han: On-Device Learning Under 256KB Memory



On-Device Learning Under 256KB Memory
Song Han, Assistant Professor, MIT EECS

On-device learning enables a model to adapt to new data collected from the sensors. However, the memory consumption of training is prohibitive for IoT devices with tiny memory resources. We propose an algorithm-system co-design framework that makes fine-tuning neural networks possible with only 256KB of memory. On-device learning faces two unique challenges: (1) the quantized graphs of neural networks are hard to optimize due to mixed bit-precision and the lack of normalization; (2) the limited hardware resources (memory and computation) do not allow full backward computation. To cope with the optimization difficulty, we propose quantization-aware scaling to calibrate the gradient scales and stabilize quantized training. To reduce the memory footprint, we propose sparse update, which skips the gradient computation of less important layers and sub-tensors. The algorithm innovations are implemented by a lightweight training system, Tiny Training Engine, which prunes the backward computation graph to support sparse updates and offloads runtime auto-differentiation to compile time. Our framework is the first practical solution for on-device transfer learning of visual recognition on tiny IoT devices (e.g., a microcontroller with only 256KB of SRAM), using less than 1/100 of the memory of existing frameworks while matching the accuracy of cloud training + edge deployment on the tinyML application VWW (Visual Wake Words). Our study suggests that tiny IoT devices can not only perform inference but also continuously adapt to new data for lifelong learning.
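The gradient calibration behind quantization-aware scaling can be illustrated in a few lines. Below is a minimal NumPy sketch of one SGD step on an int8 weight tensor, assuming a per-tensor dequantization W = s * w_bar: by the chain rule the gradient of the quantized weight is s times the floating-point gradient, so matching the floating-point update requires compensating by s^-2. The function name and scale handling are illustrative, not the actual Tiny Training Engine code.

```python
import numpy as np

def qas_sgd_step(w_bar, grad_w_bar, s, lr):
    """One SGD step on an int8-quantized weight with quantization-aware
    scaling (illustrative sketch, not the Tiny Training Engine code).

    w_bar      : int8 weights, representing the real weights W = s * w_bar
    grad_w_bar : gradient w.r.t. the quantized weights
    s          : per-tensor quantization scale (assumed given)
    lr         : learning rate
    """
    # W = s * w_bar, so grad_w_bar = s * grad_W by the chain rule, and
    # the floating-point update W <- W - lr * grad_W corresponds to
    # w_bar <- w_bar - lr * s**-2 * grad_w_bar on the quantized weights.
    w_new = w_bar.astype(np.float32) - lr * (s ** -2) * grad_w_bar
    # Re-quantize: round and clip back to the int8 range.
    return np.clip(np.round(w_new), -128, 127).astype(np.int8)
```

The sparse-update idea, limiting gradient computation to selected layers or bias-only sub-tensors, can likewise be sketched with standard PyTorch autograd flags. How "less important" layers are identified is outside this sketch; the selection is passed in as plain sets of hypothetical layer names. In the real system the skipping is done by pruning the compiled backward graph rather than via `requires_grad`.

```python
import torch.nn as nn

def configure_sparse_update(model: nn.Module,
                            trainable_layers: set[str],
                            bias_only_layers: set[str]) -> None:
    """Enable gradients only for selected layers; freeze everything else.

    Hypothetical helper mirroring the sparse-update idea: layers in
    neither set contribute no weight-gradient computation or optimizer
    state during fine-tuning.
    """
    # Freeze all parameters first.
    for p in model.parameters():
        p.requires_grad = False
    for name, module in model.named_modules():
        if name in trainable_layers:
            # Full update: weights and biases of this layer.
            for p in module.parameters():
                p.requires_grad = True
        elif name in bias_only_layers and getattr(module, "bias", None) is not None:
            # Sub-tensor update: bias only, far cheaper in memory.
            module.bias.requires_grad = True
```

Passing only the surviving parameters to the optimizer (e.g., `filter(lambda p: p.requires_grad, model.parameters())`) keeps optimizer state proportional to the updated sub-tensors. Note that this autograd-level sketch still lets PyTorch compute activation gradients for frozen layers that sit between trainable ones, whereas Tiny Training Engine's pruned backward graph removes that work entirely.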
