AI/ML virtual seminar March 18: Putting AI on a Diet: TinyML and Efficient Deep Learning

Title: Putting AI on a Diet: TinyML and Efficient Deep Learning

Date and time: Thursday, March 18, 1 p.m. – 2 p.m. Central Time

Speaker: Song Han, MIT

Abstract: Today’s AI is too big. Deep neural networks demand extraordinary levels of compute, and therefore power, for training and inference. This severely limits the practical deployment of AI on edge devices. We aim to improve the efficiency of deep learning. First, I’ll present MCUNet, a framework that brings deep learning to IoT devices by jointly designing an efficient neural architecture (TinyNAS) and a lightweight inference engine (TinyEngine), enabling ImageNet-scale inference on IoT devices with only 1 MB of Flash. Next, I’ll talk about TinyTL, which enables on-device transfer learning, reducing the memory footprint by 7–13x. Finally, I’ll describe Differentiable Augmentation, which enables data-efficient GAN training, generating photo-realistic images from only 100 training images — a task that used to require tens of thousands of images. We hope such TinyML techniques can make AI greener, faster, and more sustainable.

Zoom details: