EMEA 2021 tiny Talks: Energy-efficient TCN-Extensions for a TNN accelerator

EMEA 2021 https://www.tinyml.org/event/emea-2021
Tim FISCHER, PhD Student, ETH Zürich

In recent years, the traditional approach of cloud computing for extremely power-constrained IoT devices has been increasingly challenged by the emerging paradigm of edge computing. With the surging demand for intelligence on the edge, highly quantized neural networks have become essential for many embedded applications.

Merging the advantages of both highly quantized and temporal neural networks, this work presents novel extensions to an existing ternary neural network (TNN) accelerator architecture (CUTIE) supporting energy-efficient processing of sequential data.
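To make the "highly quantized" part concrete: in a ternary neural network, weights (and often activations) are restricted to {-1, 0, +1}. A minimal sketch of threshold-based ternarization follows; the `threshold` hyperparameter and the function name are illustrative assumptions, not details from the talk or the CUTIE design.

```python
import numpy as np

def ternarize(w, threshold=0.05):
    """Quantize full-precision weights to {-1, 0, +1}.

    threshold is a hypothetical cutoff below which a weight is
    treated as zero; real TNN training schemes typically derive
    it from the weight distribution.
    """
    q = np.zeros_like(w, dtype=np.int8)
    q[w > threshold] = 1
    q[w < -threshold] = -1
    return q

# Example: small magnitudes collapse to zero, signs are kept.
ternarize(np.array([0.3, -0.01, -0.2, 0.04]))
```

Because every multiply in a ternary layer reduces to an add, a subtract, or a skip, such networks map well onto massively parallel, multiplier-free datapaths of the kind the talk describes.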
This talk discusses 1) the hardware implementation of a ternary TCN accelerator for modelling sequential data, and 2) a hardware-friendly mapping of TCN layers that exploits the highly parallel nature of the CUTIE architecture.
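The building block of a TCN layer is the causal dilated 1-D convolution: the output at time t depends only on inputs at t, t-d, t-2d, and so on, so sequences can be processed without look-ahead. A minimal reference sketch is below; it uses plain integers for clarity, whereas in a ternary TCN the kernel values would be restricted to {-1, 0, +1}. The function name and left-zero-padding convention are illustrative assumptions, not the talk's actual mapping.

```python
import numpy as np

def causal_dilated_conv1d(x, w, dilation=1):
    """Causal dilated 1-D convolution over a single channel.

    Left zero-padding by (k - 1) * dilation keeps the output the
    same length as the input and ensures output[t] only sees
    x[t], x[t - dilation], x[t - 2*dilation], ...
    """
    k = len(w)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad, dtype=x.dtype), x])
    return np.array([
        sum(w[j] * xp[t + pad - j * dilation] for j in range(k))
        for t in range(len(x))
    ])

# Stacking such layers with dilations 1, 2, 4, ... grows the
# receptive field exponentially with depth.
causal_dilated_conv1d(np.array([1, 2, 3, 4]), np.array([1, 1]), dilation=2)
```

Since every output timestep is independent of the others, the time dimension can be unrolled and computed in parallel, which is the property a highly unrolled architecture like CUTIE can exploit.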

Leveraging the hardware and software extensions for TCNs, the TNN accelerator is able to process and classify sequential data on the edge. Exploiting its highly unrolled architecture, the accelerator achieves a peak performance of 962 TOp/s/W in a GF 22nm implementation.

