tinyML Summit 2021 Partner Session: Tiny and Flexible ML with Lattice FPGA

tinyML Summit 2021
Partner Sessions – Processing Engines
“Tiny and Flexible ML with Lattice FPGA”
Sreepada V. Hegade, Senior Manager, Lattice Semiconductor

Neural-network inference on resource-constrained devices is fueling the growth of ML at the edge, but inference is only one part of a complete solution, which also involves essential components such as data aggregation, augmentation, and post-processing of the inference output. In addition, new network topologies are introduced at a rapid pace to meet ever-growing demands for accuracy and performance, so a solution that supports tinyML must be flexible, and the engine that performs network inference needs to be tuned for different types of network topology. For example, MobileNet, which was introduced to implement neural networks efficiently on resource-constrained devices, cannot be implemented efficiently by NN engines designed for standard convolution.
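The MobileNet point above can be illustrated with a quick multiply-accumulate (MAC) count comparing a standard convolution with the depthwise-separable form MobileNet uses. This is a back-of-the-envelope sketch; the layer dimensions are illustrative examples, not figures from the talk.

```python
# Rough MAC counts showing why MobileNet-style depthwise-separable
# convolutions suit resource-constrained inference engines.
# The layer dimensions below are illustrative, not from the talk.

def standard_conv_macs(h, w, c_in, c_out, k):
    """MACs for a standard KxK convolution over an HxW feature map."""
    return h * w * c_in * c_out * k * k

def depthwise_separable_macs(h, w, c_in, c_out, k):
    """MACs for a depthwise KxK conv followed by a 1x1 pointwise conv."""
    depthwise = h * w * c_in * k * k      # one KxK filter per input channel
    pointwise = h * w * c_in * c_out      # 1x1 conv mixes the channels
    return depthwise + pointwise

if __name__ == "__main__":
    std = standard_conv_macs(112, 112, 32, 64, 3)
    sep = depthwise_separable_macs(112, 112, 32, 64, 3)
    print(f"standard:  {std:,} MACs")
    print(f"separable: {sep:,} MACs ({sep / std:.1%} of standard)")
```

The ratio works out to roughly 1/c_out + 1/k² (about an 8x reduction for this layer), which is exactly the saving an inference engine hard-wired for dense KxK convolution cannot exploit.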
The configurable nature of FPGA devices allows quick adoption of emerging neural-network topologies, and their flexible I/O also helps implement data aggregation and other peripheral operations. The soft core implemented on Lattice FPGAs can be changed and/or optimized for the target network topology. In this talk we discuss how we optimize network topologies and the software compiler to get the best out of the FPGA for end applications.

