A TinyML Approach to Deploying Reduced-Order Models of Complex Systems on Microprocessors
Sr. Product Marketing Manager
TinyML brings intelligence to embedded devices, such as low-cost and energy-constrained microprocessors. Compression techniques, such as architecture search, pruning, and quantization, are used to accelerate model inference. In this talk, we begin with large-scale, high-fidelity non-linear models based on mathematical or physical formulations. We then train Reduced-Order Models (ROMs) on input-output data from the original first-principles model, and we test the generated C/C++ code before deploying it to embedded targets. We demonstrate the effectiveness of this approach using State of Charge (SoC) estimation for battery management on board virtual vehicles as an example. The resulting workflow captures the dynamics of the complex system while reducing the computational time required for simulations and real-time applications.
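The abstract does not specify the ROM structure, so the following is only a minimal sketch of the general idea: a hypothetical stand-in for a first-principles battery model (simple coulomb counting) generates input-output data, and a one-step ARX model fitted by least squares serves as the reduced-order surrogate. All function names, parameters, and the current profile are illustrative assumptions, not the talk's actual models.

```python
import numpy as np

# Hypothetical "high-fidelity" model: integrates State of Charge (SoC)
# from a discharge-current profile via coulomb counting. A real
# first-principles model would be far more complex (electrochemistry,
# thermal effects, etc.); this is an illustrative stand-in only.
def simulate_battery(current, dt=1.0, capacity_as=3600.0, soc0=0.8):
    soc = np.empty(len(current))
    s = soc0
    for k, i in enumerate(current):
        s = np.clip(s - i * dt / capacity_as, 0.0, 1.0)
        soc[k] = s
    return soc

# Reduced-order model: one-step ARX fit by least squares on the
# input-output data produced by the full simulation.
def fit_arx(u, y):
    # Regressors: previous SoC, present current, and a bias term.
    X = np.column_stack([y[:-1], u[1:], np.ones(len(u) - 1)])
    theta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
    return theta

def rom_predict(theta, u, y0):
    a, b, c = theta
    y = np.empty(len(u))
    s = y0
    for k in range(len(u)):
        s = a * s + b * u[k] + c
        y[k] = s
    return y

rng = np.random.default_rng(0)
current = rng.uniform(0.0, 2.0, 600)   # discharge current profile [A]
soc = simulate_battery(current)        # "ground truth" from full model

theta = fit_arx(current, soc)          # train ROM on input-output data
soc_rom = rom_predict(theta, current[1:], soc[0])
max_err = np.max(np.abs(soc_rom - soc[1:]))
print(f"max ROM error: {max_err:.6f}")
```

In practice the fitted ROM, rather than this training script, would be translated to C/C++ (for example via automatic code generation), tested against the original model's outputs, and then deployed to the embedded target.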