tinyML Talks Pakistan
FFConv: An FPGA-based Accelerator for Fast Convolution Layers in Convolutional Neural Network
Muhammad Adeel Pasha
Associate Professor
Department of Electrical Engineering
Lahore University of Management Sciences
Image classification is one of the most challenging problems in computer vision. Significant research effort is devoted to developing systems and algorithms that improve accuracy, performance, area efficiency, and power consumption for this class of problems.
Convolutional Neural Networks (CNNs) have been shown to achieve outstanding accuracy on problems such as image classification, object detection, and semantic segmentation. While CNNs are pioneering the development of high-accuracy systems, their excessive computational complexity remains a barrier to wider deployment. Graphics Processing Units (GPUs), thanks to their massively parallel architecture, deliver performance orders of magnitude better than general-purpose processors, but they are limited by their higher power consumption and general-purpose design.
Consequently, Field Programmable Gate Arrays (FPGAs) are being explored for implementing CNN architectures, as they also provide massively parallel logic resources but at a relatively lower power consumption than GPUs. In this talk, we present FFConv, an efficient FPGA-based fast-convolution-layer accelerator for CNNs. We design a pipelined, high-throughput convolution engine based on the Winograd minimal filtering (also called fast convolution) algorithms to compute the convolutional layers of three popular CNN architectures: VGG16, AlexNet, and ShuffleNet. We implement our accelerator on a Virtex-7 FPGA platform, where we exploit the available computational parallelism to the fullest while exploring optimizations aimed at improving performance.
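As background for the fast-convolution engine, the following sketch illustrates the Winograd F(2x2, 3x3) minimal filtering transform in plain Python/NumPy. It is only an illustrative reference for the arithmetic the talk builds on, not the FFConv hardware design; the function names and the self-check are our own additions, and the transform matrices are the standard ones from Lavin and Gray's fast-convolution formulation.

```python
import numpy as np

# Standard Winograd F(2x2, 3x3) transform matrices (Lavin & Gray).
# B^T transforms a 4x4 input tile, G transforms the 3x3 filter, and
# A^T maps the element-wise product back to a 2x2 output tile.
BT = np.array([[1,  0, -1,  0],
               [0,  1,  1,  0],
               [0, -1,  1,  0],
               [0,  1,  0, -1]], dtype=np.float32)
G  = np.array([[1.0,  0.0, 0.0],
               [0.5,  0.5, 0.5],
               [0.5, -0.5, 0.5],
               [0.0,  0.0, 1.0]], dtype=np.float32)
AT = np.array([[1, 1,  1,  0],
               [0, 1, -1, -1]], dtype=np.float32)

def winograd_tile(d, g):
    """2x2 output tile from a 4x4 input tile d and a 3x3 filter g.

    The element-wise product in the transform domain needs 16
    multiplications versus 36 for direct computation of the same
    2x2 output, which is the per-tile arithmetic saving a Winograd
    convolution engine exploits.
    """
    U = G @ g @ G.T              # 4x4 transformed filter (precomputable)
    V = BT @ d @ BT.T            # 4x4 transformed input tile
    return AT @ (U * V) @ AT.T   # inverse transform to the 2x2 output

def direct_tile(d, g):
    """Reference: direct (valid) correlation over the same 4x4 tile."""
    return np.array([[np.sum(d[i:i+3, j:j+3] * g) for j in range(2)]
                     for i in range(2)], dtype=np.float32)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d = rng.standard_normal((4, 4)).astype(np.float32)
    g = rng.standard_normal((3, 3)).astype(np.float32)
    assert np.allclose(winograd_tile(d, g), direct_tile(d, g), atol=1e-4)
    print("Winograd F(2x2, 3x3) tile matches direct convolution")
```

In a hardware pipeline, the filter transform U is computed once per filter, input tiles stream through the B^T transform, and the 16 element-wise multiplications per tile are what map onto the FPGA's parallel multiplier resources; the roughly 2.25x reduction in multiplications relative to direct convolution is the source of the speedup that fast-convolution accelerators target.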