tinyML Summit 2021 https://www.tinyml.org/event/summit-2021
Partner Sessions – Tools & Algorithms
Innovative and Convolutional-Friendly tinyML Architecture for Small-Silicon, Low-Power Devices
Moshe HAIUT, CTO Staff, DSP Group
The concept of Neural Networks (NNs) has evolved from the basic perceptron to the fully connected (FC) NN layer. FC layers are based on a simple math operation – a vector multiplied by a matrix. Most existing DSPs have embedded multiply-accumulate (MAC) logic to handle these tasks. However, FC layers require a large number of parameters, which challenges tinyML solutions, with their limited silicon footprint. In contrast, convolutional layers use far fewer parameters, making them a more compelling approach for memory-constrained applications such as tinyML edge-based solutions. However, 2D convolutional layers add complexity to the computation algorithm, especially when there are multiple channels and when padding, dilation, and stride operations are applied.
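The FC operation described above maps directly onto a DSP's MAC logic: each output is one accumulator fed by a chain of multiply-accumulates. The sketch below illustrates this (the layer sizes are illustrative, not from the talk):

```c
#include <stdint.h>

/* FC layer: out[j] = sum_i in[i] * W[i][j] — one MAC per weight.
 * An FC layer over an N-element input and M-element output stores
 * N*M weights; a 3x3 convolution kernel stores only 9 per channel
 * pair, which is why conv layers suit memory-constrained silicon. */
#define IN_N  4
#define OUT_N 3

static void fc_layer(const int32_t in[IN_N],
                     const int32_t W[IN_N][OUT_N],
                     int32_t out[OUT_N])
{
    for (int j = 0; j < OUT_N; j++) {
        int32_t acc = 0;                /* MAC accumulator */
        for (int i = 0; i < IN_N; i++)
            acc += in[i] * W[i][j];     /* multiply-accumulate */
        out[j] = acc;
    }
}
```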
Addressing this problem requires a way to reduce the number of cycles, and hence the power, needed to compute 2D convolutional and ConvTranspose layers.
This tinyML talk will show how to do this using nNetLite, an ultra-low-power programmable NN processor developed within DSP Group to solve many of the issues associated with edge processing. The discussion will focus on one part of the processor – the Address Generation Unit (AGU) – which was designed specifically to accelerate the computation of complicated convolutional layers in memory- and power-constrained ICs. The AGU is also capable of merging a convolution and a consecutive MaxPooling layer into a single layer, which further conserves valuable memory space in tinyML hardware.
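The core job of a convolutional address generator can be modeled in software: for each output pixel, enumerate the linear memory addresses of the input samples the MAC unit must fetch, folding stride and dilation into the address arithmetic. The sketch below is an illustrative model only, not the actual nNetLite AGU design (all parameter names are hypothetical):

```c
/* Software model of convolutional address generation. For one output
 * pixel (oy, ox) of a single-channel 2D convolution, compute the
 * linear input addresses to read, assuming row-major layout
 * (addr = y * width + x) and no padding. In hardware this arithmetic
 * runs in dedicated logic, so no cycles are spent on index math. */
typedef struct {
    int width;     /* input feature-map width */
    int kernel;    /* kernel size (square)    */
    int stride;    /* convolution stride      */
    int dilation;  /* kernel dilation         */
} conv_agu_t;

/* Fill addrs[kernel * kernel] for output pixel (oy, ox). */
static void agu_addresses(const conv_agu_t *p, int oy, int ox, int *addrs)
{
    int n = 0;
    for (int ky = 0; ky < p->kernel; ky++) {
        int iy = oy * p->stride + ky * p->dilation;   /* input row */
        for (int kx = 0; kx < p->kernel; kx++) {
            int ix = ox * p->stride + kx * p->dilation;
            addrs[n++] = iy * p->width + ix;  /* row-major address */
        }
    }
}
```

A stride of 2 in this model also hints at how a convolution and a following pooling stage can be folded together: the address generator simply walks the output grid at the pooled resolution instead of materializing the intermediate layer.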
Attendees will learn how approaches such as the convolution-friendly AGU hardware architecture minimize the total cycle count of convolution-heavy NNs, enabling the design of ultra-low-power AI devices that consume only microwatts of power.