tinyML Summit 2021 tiny Talks: Supporting TensorFlow Lite MCU in tiny low power FPGAs

tinyML Summit 2021 https://www.tinyml.org/event/summit-2021
tiny Talks
Supporting TensorFlow Lite MCU in tiny low power FPGAs
Hoon CHOI, Fellow, Lattice Semiconductor

The arena of cost-optimized, high-performance edge accelerators is growing increasingly competitive, with a variety of architectures to choose from when implementing an AI-capable system. As a new generation of edge applications emerges, designers are pressed to deliver solutions that combine low power with low latency, and they need accelerators that are easy to use and flexible.

Lattice’s FPGAs are uniquely positioned to address the rapidly changing world of edge devices. They offer the parallel processing capability inherent to FPGAs to accelerate neural network performance, and they are hardware-programmable, keeping pace with fast-changing ML algorithms. They are designed with more on-chip memory, optimized DSP blocks, and compute resources distributed through the fabric for workload acceleration, resulting in a low-power system implementation.

To provide a software-programmable solution that is easy to use, support for TF Lite was implemented on a soft RISC-V core in the FPGA fabric. This creates the best of both worlds: a software-programmable device with flexible acceleration blocks running in hardware, enabling developers with or without FPGA expertise to build their systems more quickly. Comparing a TF Lite implementation on an Arm Cortex-M4 based CPU with an FPGA of comparable size and cost, the FPGA runs 2–10x faster than the MCU at comparable power consumption.
In this presentation, we cover the details of the accelerators we designed, the limitations that hindered further optimization of the accelerators, and possible solutions to those limitations.

