tinyML Research Symposium 2021 Poster: Deep Learning for Compute in Memory

https://www.tinyml.org/event/research-symposium-2021/

Compute in Memory (CIM) accelerators for neural networks promise large efficiency gains, enabling deep learning applications on extremely resource-constrained devices. Compared to classical digital processors, computations on CIM accelerators are subject to a variety of noise sources, such as process variations, thermal effects, and quantization. In this work, we show how fundamental hardware design choices influence the predictive performance of neural networks and how training these models to be hardware-aware makes them more robust to CIM deployment. Through various experiments, we make the trade-offs between energy efficiency and model capacity explicit and showcase the benefits of taking a systems view of CIM accelerator and neural network training co-design.
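The abstract does not specify the authors' training procedure, but the idea of hardware-aware training under CIM-style noise can be illustrated with a minimal sketch: a linear layer whose weights are perturbed by multiplicative Gaussian noise on each forward pass, a common stand-in for analog conductance variation. The function name `cim_linear` and the noise model are assumptions for illustration only, not the paper's method.

```python
import numpy as np

def cim_linear(x, W, noise_std=0.0, rng=None):
    """Matrix-vector product with multiplicative Gaussian weight noise.

    The noise term is a simplified stand-in for CIM non-idealities
    (process variation, thermal drift); noise_std=0 gives the ideal
    digital computation. This is an illustrative assumption, not the
    noise model used in the poster.
    """
    if noise_std > 0.0:
        rng = rng or np.random.default_rng()
        # Each weight is scaled by (1 + eps), eps ~ N(0, noise_std^2),
        # resampled every forward pass, as in noise-injection training.
        W = W * (1.0 + rng.normal(0.0, noise_std, size=W.shape))
    return x @ W.T

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))   # 4 output units, 8 inputs
x = rng.normal(size=(8,))

clean = cim_linear(x, W)                          # ideal computation
noisy = cim_linear(x, W, noise_std=0.05, rng=rng)  # CIM-like forward pass
```

Training with such perturbations active (sampling fresh noise per step and backpropagating through the noisy forward pass) encourages the network to find weight configurations whose predictions are insensitive to the perturbation, which is the robustness property the abstract refers to.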

