EMEA 2021 Partner Session Neuton.ai



EMEA 2021 https://www.tinyml.org/event/emea-2021/
Partner Session
Avoiding Loss of Quality while in Pursuit of a Tiny Model
Blair NEWMAN, CTO, Neuton.ai

Today, the entire tinyML community is focused on solving the model-shrinking problem. We are confident that assessing model quality and explainability is just as relevant to the community, and we will share how we approach this challenge.

In this talk, we will show how extremely compact models can be created without losing focus on accuracy. We plan to answer the following questions, which are particularly relevant to the tinyML community right now:

How can loss of quality be avoided in the pursuit of a small model?

Is there still a trade-off between model accuracy and size today?

How can the quality of a model be evaluated, at every stage, without the need for a data scientist?

How can a model's decision-making logic be identified and understood when the model is a neural network?

How can the available training data be evaluated, and how can the most important statistics be clearly understood: for a single variable, for the data overall, for interconnections between variables, and in relation to the target variable in the training dataset?
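As an illustration of the kind of data evaluation we have in mind (a generic sketch, not Neuton's actual tooling), the example below uses pandas with a public scikit-learn dataset as a stand-in for a training set, and prints per-variable statistics, pairwise correlations, and each feature's correlation with the target.

    import pandas as pd
    from sklearn.datasets import load_breast_cancer

    # Load a public dataset as a stand-in for the user's training data.
    data = load_breast_cancer(as_frame=True)
    df = data.frame                      # features plus a "target" column

    # Per-variable statistics: count, mean, std, min/max, quartiles.
    print(df.describe())

    # Interconnections: pairwise Pearson correlations between variables.
    print(df.corr())

    # Relation to the target variable: how strongly each feature tracks it.
    print(df.corr()["target"].drop("target").sort_values(ascending=False))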

How can the reason why a particular model made a particular decision be identified? How can a model's output be interpreted? Do models built with neural networks have explainability potential?

Do all parameters from a sensor data source need to be collected to build a model and obtain meaningful insights? Which parameters are enough to build a tiny model?

How can the influence and relative importance of every parameter on the output be understood?
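One generic way to gauge the relative influence of each input parameter is permutation importance: shuffle one feature at a time and measure how much the model's test score drops. The sketch below uses scikit-learn on a public dataset; the model and data are illustrative and are not Neuton's actual mechanism.

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Shuffle each feature several times and record the drop in test accuracy.
    result = permutation_importance(model, X_test, y_test,
                                    n_repeats=10, random_state=0)

    # Report the five most influential parameters.
    ranked = sorted(zip(X.columns, result.importances_mean),
                    key=lambda pair: pair[1], reverse=True)
    for name, score in ranked[:5]:
        print(f"{name}: {score:.3f}")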

Can the input parameters be emulated to see how the output changes and why?
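A simple form of such emulation is a what-if sweep: hold one sample fixed, vary a single feature across its observed range, and watch how the predicted probability responds. The sketch below is illustrative (arbitrary model, dataset, and feature index), not Neuton's implementation.

    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = load_breast_cancer(return_X_y=True)
    model = make_pipeline(StandardScaler(),
                          LogisticRegression(max_iter=1000)).fit(X, y)

    sample = X[0].copy()
    feature_idx = 3                       # arbitrary feature to emulate
    # Sweep the chosen feature over its observed range, holding the rest fixed.
    for value in np.linspace(X[:, feature_idx].min(), X[:, feature_idx].max(), 5):
        emulated = sample.copy()
        emulated[feature_idx] = value
        prob = model.predict_proba(emulated.reshape(1, -1))[0, 1]
        print(f"feature[{feature_idx}] = {value:.1f} -> P(class 1) = {prob:.3f}")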

How can the quality of a tiny model be evaluated?

How can model decay, and the need for retraining, be automatically identified?
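One straightforward way to detect decay automatically is to compare the model's recent accuracy on newly labelled data against its accuracy at deployment time, and to flag retraining when the drop exceeds a tolerance. The sketch below is a generic example; the window size and tolerance are illustrative, and Neuton's own mechanism may differ.

    from collections import deque

    class DecayMonitor:
        """Flags retraining when rolling accuracy falls below the baseline."""

        def __init__(self, baseline_accuracy, window=200, tolerance=0.05):
            self.baseline = baseline_accuracy      # accuracy measured at deployment
            self.window = deque(maxlen=window)     # most recent correctness flags
            self.tolerance = tolerance             # acceptable accuracy drop

        def update(self, prediction, true_label):
            self.window.append(prediction == true_label)
            if len(self.window) == self.window.maxlen:
                current = sum(self.window) / len(self.window)
                if self.baseline - current > self.tolerance:
                    return True                    # model has decayed; retrain
            return False

    # Usage: call monitor.update(pred, label) for every labelled prediction that
    # comes back from the field; a True return value triggers retraining.
    monitor = DecayMonitor(baseline_accuracy=0.92)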

How can the quality of every single prediction be thoroughly evaluated? How can the credibility of each prediction be understood and measured, and how can the level of confidence in each prediction be assessed?
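A common proxy for per-prediction confidence is the predicted class probability together with the entropy of the probability distribution: a low top-class probability or a high entropy signals a prediction that deserves less trust. The sketch below is illustrative and is not Neuton's actual confidence metric.

    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression

    X, y = load_iris(return_X_y=True)
    model = LogisticRegression(max_iter=1000).fit(X, y)

    # Inspect the first few predictions and score how credible each one looks.
    probs = model.predict_proba(X[:5])
    for p in probs:
        confidence = p.max()                            # top-class probability
        entropy = -np.sum(p * np.log(p + 1e-12))        # uncertainty of the distribution
        print(f"prediction={p.argmax()}  confidence={confidence:.3f}  entropy={entropy:.3f}")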
