tinyML Summit 2021 tiny Talks: Tree Ensemble Model Compression for Embedded Machine Learning Applications

tinyML Summit 2021 https://www.tinyml.org/event/summit-2021
Partner Sessions – Tools & Algorithms
Tree Ensemble Model Compression for Embedded Machine Learning Applications
Leslie SCHRADIN, Principal Machine Learning Engineer, Qeexo Co.

Embedded machine learning models need a low memory footprint without compromising classification performance. Tree-based ensemble models are very effective for sensor-data machine learning. Depending on the application, they are often superior to neural-network-based models in embedded metrics such as memory footprint, latency, and model performance, and they often need less data to reach the same level of accuracy. In this webinar we will discuss generating tree-based ensemble models using well-known algorithms and then performing intelligent pruning and quantization particularly suitable for tinyML applications. Qeexo’s patent-pending algorithms first perform ensemble model compression by selecting the best candidate boosters; this selection reduces the model size by almost 80% while still capturing the classification ability of the full ensemble. The compression is followed by 16-bit/8-bit quantization to further reduce the memory footprint. Using these techniques, Qeexo AutoML has compressed and quantized Gradient Boosting Machine (GBM), Random Forest (RF), Isolation Forest (IF), eXtreme Gradient Boosting (XGBoost), and Decision Tree (DT) models, making them much easier to fit onto embedded targets. As a result, models generated by Qeexo AutoML have best-in-class latency and memory footprint without sacrificing performance.
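The two stages described above — selecting a small subset of boosters, then quantizing the model's floating-point parameters to narrow integers — can be sketched generically. The snippet below is a minimal, illustrative NumPy sketch only: `select_boosters` (greedy forward selection on a validation set) and `quantize_thresholds` (affine 8-bit quantization of split thresholds) are hypothetical helpers written for this example, not Qeexo's patent-pending algorithms, which are not public.

```python
import numpy as np

rng = np.random.default_rng(0)

def select_boosters(contribs, y, k):
    # Greedy forward selection (illustrative, not Qeexo's method): at each
    # step, keep the booster whose added margin contribution most improves
    # validation accuracy. contribs has shape (n_boosters, n_samples);
    # y holds binary labels in {0, 1}.
    chosen = []
    margin = np.zeros(contribs.shape[1])
    for _ in range(k):
        best_i, best_acc = None, -1.0
        for i in range(contribs.shape[0]):
            if i in chosen:
                continue
            acc = np.mean(((margin + contribs[i]) > 0) == (y == 1))
            if acc > best_acc:
                best_i, best_acc = i, acc
        chosen.append(best_i)
        margin += contribs[best_i]
    return chosen

def quantize_thresholds(thresholds, bits=8):
    # Affine quantization of split thresholds to signed `bits`-wide integers,
    # storing one float scale/zero-point pair instead of per-node floats.
    t = np.asarray(thresholds, dtype=np.float64)
    lo, hi = t.min(), t.max()
    qmin, qmax = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    scale = (hi - lo) / (qmax - qmin) if hi > lo else 1.0
    zero = qmin - lo / scale
    q = np.clip(np.round(t / scale + zero), qmin, qmax)
    return q.astype(np.int8 if bits == 8 else np.int16), scale, zero

def dequantize(q, scale, zero):
    # Recover approximate thresholds for accuracy checks on the host.
    return (q.astype(np.float64) - zero) * scale

# Toy demo: 20 boosters' margin contributions on 200 validation samples.
contribs = rng.normal(size=(20, 200))
y = (rng.random(200) > 0.5).astype(int)
kept = select_boosters(contribs, y, k=4)      # keep 4 of 20 boosters (~80% cut)

thresholds = rng.normal(scale=2.0, size=50)   # example split thresholds
q, scale, zero = quantize_thresholds(thresholds, bits=8)
err = np.max(np.abs(dequantize(q, scale, zero) - thresholds))
```

With per-tensor affine quantization as above, the reconstruction error per threshold is bounded by half the scale step, which is why 8-bit storage typically costs little accuracy relative to the 4x memory reduction over 32-bit floats.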

