Minimizing resource usage in microcontrollers for cost-effective solutions
Senior Fellow, Chief AI Architect
The spotlight will be on fine-tuning, a strategy that plays a critical role in optimizing neural network inference. We'll illustrate how Apache TVM, an open-source machine learning compiler, helps significantly reduce device cost and energy consumption.
We'll also discuss our choice of microTVM and the Arm Ethos-U55 on the Alif E5 SoC, detailing their unique advantages and how they align with our goals, and revealing the results of our optimization efforts.
Finally, we’ll provide an exciting sneak peek into the future of our TinyML efforts, including upcoming products, our goal to further refine networks while maintaining accuracy, and our innovative strategies for ongoing optimization and development in the TinyML field.