tinyML Research Symposium 2021 https://www.tinyml.org/event/research-symposium-2021/
SWIS – Shared Weight bIt Sparsity for Efficient Neural Network Acceleration
Shurui LI, PhD Student, UCLA

Quantization is spearheading the gains in performance and efficiency of neural network computing systems as they make headway into commodity hardware. We present SWIS – Shared Weight bIt Sparsity, a quantization framework for efficient neural network inference acceleration that delivers improved performance and storage compression through an offline weight decomposition and scheduling algorithm. SWIS can achieve up to a 52% (19.8%) point accuracy improvement when quantizing MobileNet-v2 to 4 (2) bits post-training (with retraining), demonstrating the strength of leveraging shared bit sparsity in weights. The SWIS accelerator delivers up to 6X speedup and 1.8X energy improvement over state-of-the-art bit-serial architectures.
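To make the shared-bit-sparsity idea concrete, the sketch below is a minimal, hypothetical illustration and not the authors' algorithm: it quantizes a small group of weights, then keeps only a few bit positions that are shared by the whole group, so a bit-serial engine would need only that many cycles per weight instead of the full bit width. The function name `swis_like_decompose`, the greedy significance heuristic standing in for the paper's offline decomposition and scheduling step, and all parameter choices are assumptions made for illustration.

```python
import numpy as np

def swis_like_decompose(weights, num_bits=8, shared_terms=4):
    """Toy sketch of shared weight bit sparsity (hypothetical, not the
    SWIS authors' algorithm): quantize a group of weights to num_bits,
    then keep only shared_terms bit positions shared by the whole
    group, so a bit-serial engine processes shared_terms cycles
    instead of num_bits."""
    # Uniform symmetric quantization of the group to signed integers.
    scale = np.max(np.abs(weights)) / (2 ** (num_bits - 1) - 1)
    q = np.round(weights / scale).astype(np.int32)
    sign = np.sign(q)
    mag = np.abs(q)

    # Bit planes: bits[b, i] is bit b of weight i's magnitude.
    bits = np.array([(mag >> b) & 1 for b in range(num_bits)])

    # Greedy scheduling (an assumed stand-in for the paper's offline
    # decomposition/scheduling step): keep the bit positions that
    # carry the most total significance across the group.
    significance = bits.sum(axis=1) * (2.0 ** np.arange(num_bits))
    keep = np.sort(np.argsort(significance)[-shared_terms:])

    # Reconstruct each weight using only the shared bit positions.
    approx_mag = sum((bits[b] << b) for b in keep)
    return sign * approx_mag * scale, keep

# Usage: approximate 16 random weights with 4 shared bit positions.
rng = np.random.default_rng(0)
w = rng.normal(0, 0.1, size=16).astype(np.float32)
w_hat, kept_bits = swis_like_decompose(w, num_bits=8, shared_terms=4)
print("shared bit positions:", kept_bits)
print("max abs error:", np.max(np.abs(w - w_hat)))
```

Because the kept bit positions are shared across the group, they need to be stored only once per group rather than per weight, which is the source of the storage compression and the reduced bit-serial cycle count the abstract describes.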
