tinyML Asia 2021
An approach to dynamically integrate heterogenous AI components in a multimodal user authentication system use case
Haochen XIE 謝 昊辰; ｺﾄｲ ｺｳｼﾝ, Project Leader, Team Dragon, AnchorZ Inc.
In this talk, we will introduce our approach to a challenging task: effectively and dynamically integrating multiple AI-backed components, each relying on a different kind of AI technology, to implement a single functionality — continuous multimodal user authentication.
In building our next-generation user authentication system, DZ Security, we needed a way to integrate multiple elemental authentication methods — such as facial recognition, voice recognition, and touch patterns — that employ very different types of AI technologies (DNNs, RNNs, analytical regression, etc.) in a flexible and effective manner. We also needed the combination method to support an open set of elemental authentication methods, some of which may be provided by third parties. Furthermore, we needed a high degree of confidence that the overall system would perform well enough on certain critical metrics, such as overall security assurance and energy consumption. The latter is especially critical for a battery-powered device.
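To make the idea of an open set of heterogeneous authenticators concrete, here is a minimal sketch of what such a common component interface might look like. All names and numbers (AuthComponent, estimated_far, the energy figures) are illustrative assumptions, not the actual DZ Security API:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class AuthResult:
    # Confidence in [0, 1] that the current user is the device owner.
    confidence: float


class AuthComponent(ABC):
    """Hypothetical common interface every elemental authenticator implements,
    regardless of whether it is backed by a DNN, an RNN, or a regression model."""

    @abstractmethod
    def invoke(self) -> AuthResult:
        """Run one authentication attempt (e.g. a face or voice check)."""

    @abstractmethod
    def estimated_far(self) -> float:
        """Estimated false acceptance rate of a single invocation."""

    @abstractmethod
    def energy_cost_mj(self) -> float:
        """Estimated energy per invocation, in millijoules."""


class FaceRecognizer(AuthComponent):
    # Toy stand-in for a DNN-backed face recognizer; all values are invented.
    def invoke(self) -> AuthResult:
        return AuthResult(confidence=0.97)

    def estimated_far(self) -> float:
        return 0.001

    def energy_cost_mj(self) -> float:
        return 120.0
```

Because every component — including third-party ones — exposes the same invocation entry point and the same performance metrics, the integration layer can treat them uniformly.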
Our approach tackles this challenge by first defining a common interface that all components must comply with, and by developing a DSL (domain-specific language) in which a "fusion" or "integration" program is written. The component interface contains unified APIs for invoking the components and provides access to each component's performance metrics. On top of the DSL, we then built a framework that ensures the final system always meets predefined minimum performance requirements expressed in a few key metrics, such as security risk indicators (e.g. estimated false acceptance rate) and power consumption estimates. This framework essentially reduces the degrees of freedom of the integration program to the equivalent of writing a dynamic strategy that decides when and how each available component should be invoked; a "smarter" strategy achieves a higher "score" (e.g. a lower false rejection rate), and no strategy can ever break the predefined requirements. We can therefore optimize the component invocation strategy aggressively without fear of violating the minimum performance requirements. The DSL toolchain also includes a simulator that can be used to evaluate the performance of an integration program in simulated deployment situations, alongside a compiler targeting different execution platforms. We can then use the simulator to guide the search for the best strategies, utilizing human intelligence, artificial intelligence, or both combined.
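The shape of such a constrained invocation strategy can be sketched as follows. This is a simplified illustration, not the actual DSL: it assumes independent modalities (so the combined FAR is the product of individual FAR estimates) and uses invented stub components with made-up confidence, FAR, and energy numbers:

```python
from dataclasses import dataclass


@dataclass
class Stub:
    # Minimal stand-in for an elemental authenticator: a fixed confidence
    # score, a FAR estimate, and a per-invocation energy cost (all invented).
    confidence: float
    far: float
    cost_mj: float

    def invoke(self):
        return self.confidence

    def estimated_far(self):
        return self.far

    def energy_cost_mj(self):
        return self.cost_mj


def run_strategy(components, max_far=1e-4, energy_budget_mj=200.0):
    """Invoke components cheapest-first until the combined FAR estimate
    meets max_far, never exceeding the energy budget.

    Simplifying assumption: modalities are independent, so the combined
    false acceptance rate is the product of the individual estimates.
    """
    spent, combined_far, scores = 0.0, 1.0, []
    for comp in sorted(components, key=lambda c: c.energy_cost_mj()):
        if spent + comp.energy_cost_mj() > energy_budget_mj:
            continue  # invoking this component would break the energy requirement
        scores.append(comp.invoke())
        spent += comp.energy_cost_mj()
        combined_far *= comp.estimated_far()
        if combined_far <= max_far:
            break  # security requirement met; stop spending energy
    accepted = combined_far <= max_far and all(s > 0.5 for s in scores)
    return accepted, combined_far, spent


touch = Stub(confidence=0.8, far=0.05, cost_mj=5.0)
voice = Stub(confidence=0.9, far=0.01, cost_mj=60.0)
face = Stub(confidence=0.97, far=0.001, cost_mj=120.0)
```

The key property mirrored here is that the requirement checks (energy budget, maximum FAR) sit outside the strategy's control: a smarter ordering heuristic can only improve the score, never violate the constraints — which is what makes aggressive optimization of the strategy safe.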
We hope that sharing our approach provides useful hints to others who need to implement similar systems.