tinyML Summit 2021 https://www.tinyml.org/event/summit-2021
Keynote Song Han: Putting AI on a Diet: TinyML and Efficient Deep Learning
Song HAN – MIT EECS
Deep learning is computation-hungry and data-hungry. We aim to improve both the computation efficiency and the data efficiency of deep learning. First, I'll present MCUNet, which brings deep learning to IoT devices. MCUNet is a framework that jointly designs the efficient neural architecture (TinyNAS) and the lightweight inference engine (TinyEngine), enabling ImageNet-scale inference on IoT devices with only 1MB of Flash. Next, I will talk about TinyTL, which enables on-device transfer learning, reducing the memory footprint by 7-13x. Finally, I will describe Differentiable Augmentation, which enables data-efficient GAN training, generating photo-realistic images from only 100 training images, a task that used to require tens of thousands. We hope such TinyML techniques can make AI greener, faster, and more sustainable.
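The memory saving behind TinyTL comes from a property of backpropagation: the gradient of a layer's weights needs the stored input activations, but the gradient of its biases does not. A minimal NumPy sketch of this observation (the function name `backward_bias_only` is illustrative, not from the paper's code):

```python
import numpy as np

# For a linear layer y = x @ W + b, backprop gives:
#   dL/dW = x.T @ dL/dy   -> requires the saved input activation x
#   dL/db = sum(dL/dy)    -> requires no saved activation at all
# TinyTL's key idea: freeze W and update only b (plus small "lite
# residual" modules), so x can be discarded after the forward pass,
# shrinking training memory, which is dominated by activations.

def backward_bias_only(dy):
    # Gradient w.r.t. the bias: sum the output gradient over the batch.
    # Note that the input x never appears here.
    return dy.sum(axis=0)
```

Because no activation tensors must be kept for the frozen weights, the training memory footprint drops even though the parameter count barely changes, which is what the title "Reduce Memory, Not Parameters" refers to.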
 MCUNet: Tiny Deep Learning on IoT Devices, NeurIPS’20, spotlight.
 TinyTL: Reduce Memory, Not Parameters for Efficient On-Device Learning, NeurIPS’20
 Differentiable Augmentation for Data-Efficient GAN Training, NeurIPS’20
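The core mechanism of Differentiable Augmentation is to apply the same differentiable transforms (e.g. translation, color jitter, cutout) to both real and generated images before the discriminator sees them, so gradients still flow back to the generator. A simplified NumPy sketch of one such transform, random translation with zero padding (this is an illustrative re-implementation, not the authors' released code, and it omits the gradient machinery a real framework would provide):

```python
import numpy as np

def rand_translation(x, ratio=0.125, rng=None):
    # x: image batch of shape (N, C, H, W).
    # Shift each image by a random offset of up to ratio*H / ratio*W
    # pixels, padding the vacated region with zeros -- one of the
    # augmentations used in DiffAugment.
    if rng is None:
        rng = np.random.default_rng()
    n, c, h, w = x.shape
    max_dy, max_dx = int(h * ratio), int(w * ratio)
    out = np.zeros_like(x)
    for i in range(n):
        dy = int(rng.integers(-max_dy, max_dy + 1))
        dx = int(rng.integers(-max_dx, max_dx + 1))
        src = x[i, :,
                max(0, -dy):h - max(0, dy),
                max(0, -dx):w - max(0, dx)]
        out[i, :,
            max(0, dy):h - max(0, -dy),
            max(0, dx):w - max(0, -dx)] = src
    return out

# During training, the same transform T is applied on both sides:
# the discriminator scores D(T(x_real)) and D(T(G(z))), so the
# discriminator never overfits the small real set of raw images.
```

Applying the augmentation to both real and fake batches is what distinguishes this from ordinary data augmentation, which transforms only the real images and lets the generator learn the augmentation artifacts.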