


tinyML Summit 2022
Expedited Model Deployment on the Edge with Recipes
Haya SRIDHARAN, Technical Product Manager, Latent AI

It is no secret that AI models need to be highly optimized to work efficiently on the edge. But model optimization is challenging due to resource constraints on edge devices, limited visibility into optimization tools, and developer frustration from dealing with the idiosyncrasies of different hardware targets, compilers, and development frameworks. To mitigate these challenges, we have developed a recipe-based framework. These recipes abstract the complexity of ML optimization away from developers and allow them to easily optimize their ML models. All you have to do is "bring your own data," and the recipe helps you carry models through deployment in a robust and consistent manner.
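One way to picture the recipe idea is as a bundle of target-specific optimization and packaging steps that a developer applies to their own model and data. The sketch below is a minimal, hypothetical illustration in Python; the names used here (Recipe, quantize_int8, compile_for_target, package_artifact) are assumptions for exposition and do not reflect Latent AI's actual API.

```python
from dataclasses import dataclass, field
from typing import Any, Callable, List

# Hypothetical sketch of a recipe-driven workflow; names are illustrative only.

@dataclass
class Recipe:
    """Bundles the hardware- and toolchain-specific choices for one target."""
    target: str                                          # e.g. "cortex-m7"
    steps: List[Callable[[Any, Any], Any]] = field(default_factory=list)

    def run(self, model: Any, data: Any) -> Any:
        """Apply each step in order -- the developer only brings model and data."""
        for step in self.steps:
            model = step(model, data)
        return model

# Placeholder steps standing in for quantization, compilation, and packaging.
def quantize_int8(model, data):
    return {"model": model, "precision": "int8", "calibration_samples": len(data)}

def compile_for_target(model, data):
    return {**model, "compiled": True}

def package_artifact(model, data):
    return {**model, "artifact": "deployable.bin"}

if __name__ == "__main__":
    recipe = Recipe(target="cortex-m7",
                    steps=[quantize_int8, compile_for_target, package_artifact])
    artifact = recipe.run(model="my_detector", data=[0.1, 0.2, 0.3])
    print(artifact)  # quantized, compiled, and packaged for the chosen target
```

The point of the sketch is the separation of concerns: the recipe owns the target-specific decisions, so the same developer workflow repeats consistently across different hardware targets.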
