tinyML Asia 2021 Song Han: Putting AI on a Diet: TinyML and Efficient Deep Learning



tinyML Asia 2021
Plenary Talk: Putting AI on a Diet: TinyML and Efficient Deep Learning
Song Han, Assistant Professor, MIT EECS

Today’s AI is too big. Deep neural networks demand extraordinary amounts of data and computation, and therefore power, for training and inference. Amid the global silicon shortage, this severely limits the practical deployment of AI applications. I will present techniques that improve the efficiency of neural networks through model compression, neural architecture search, and new design primitives. I’ll present MCUNet, which enables ImageNet-scale inference on microcontrollers with only 1 MB of flash. Next, I will introduce the Once-for-All Network, an efficient neural architecture search approach that can elastically grow and shrink model capacity according to the target hardware’s resource and latency constraints. Finally, I’ll present new primitives for video understanding and point cloud recognition, which provided the winning solutions in the 3rd, 4th, and 5th Low-Power Computer Vision Challenges and the AI Driving Olympics nuScenes Segmentation Challenge. We hope such TinyML techniques can make AI greener, faster, and more accessible to everyone.
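The abstract itself contains no code, but as a rough illustration of the model-compression idea it mentions, here is a minimal magnitude-pruning sketch in PyTorch. The toy architecture and the 60% sparsity level are assumptions chosen for the example, not values from the talk, and this generic recipe is not the specific method presented by the speaker.

```python
# A minimal sketch (not from the talk) of magnitude-based weight pruning,
# one family of model-compression techniques. The model and the 60%
# sparsity level below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(16 * 32 * 32, 10),
)

# Zero out the 60% of weights with the smallest L1 magnitude in each layer.
for module in model.modules():
    if isinstance(module, (nn.Conv2d, nn.Linear)):
        prune.l1_unstructured(module, name="weight", amount=0.6)
        prune.remove(module, "weight")  # bake the zeros into the weight tensor

# Report the resulting overall sparsity.
total = sum(p.numel() for p in model.parameters())
zeros = sum((p == 0).sum().item() for p in model.parameters())
print(f"sparsity: {zeros / total:.1%}")
```

In practice, pruning like this is followed by fine-tuning to recover accuracy, and the sparse weights can then be stored and served far more cheaply, which is what makes deployment on tiny devices plausible.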
