tinyML Summit 2021 Keynote: Efficient Audio-Visual Understanding on AR Devices

tinyML Summit 2021 https://www.tinyml.org/event/summit-2021
Vikas CHANDRA, Director, AI, Facebook Reality Labs

Augmented reality (AR) is a set of technologies that will fundamentally change the way we interact with our environment. It represents a merging of the physical and digital worlds into a rich, context-aware user interface delivered through a socially acceptable form factor such as eyeglasses. The majority of these novel AR experiences will be powered by AI because of its superior ability to handle in-the-wild scenarios. A key AR use case is a personalized, proactive, and context-aware assistant that can understand the user’s activity and environment using audio-visual understanding models. In this presentation, we will discuss the challenges and opportunities in both training and deploying efficient audio-visual understanding models on AR glasses. We will discuss enabling always-on experiences within a constrained power budget using cascaded multimodal models, and co-designing them with the target hardware platforms. We will present our early work demonstrating the benefits and potential of such a co-design approach, and discuss open research areas that are promising for the research community to explore.
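The cascaded-model idea mentioned above can be illustrated with a minimal sketch: a tiny, always-on first stage runs on every frame, and the expensive audio-visual model is woken only when the cheap stage fires. All names, thresholds, and the two stand-in "models" below are illustrative assumptions, not details from the talk.

```python
# Hypothetical sketch of a cascaded always-on pipeline for a power budget:
# a cheap gate runs continuously; the heavy model runs only when gated in.
from dataclasses import dataclass

@dataclass
class CascadeStats:
    frames_seen: int = 0
    heavy_invocations: int = 0  # proxy for the dominant power cost

def tiny_audio_gate(audio_energy: float, threshold: float = 0.5) -> bool:
    """Stage 1: a trivially cheap proxy for a small acoustic-event detector."""
    return audio_energy > threshold

def heavy_av_model(audio_energy: float, visual_motion: float) -> str:
    """Stage 2: stand-in for the large audio-visual understanding model."""
    return "activity" if (audio_energy + visual_motion) > 1.2 else "background"

def run_cascade(frames, stats: CascadeStats) -> list[str]:
    """Process a stream of (audio_energy, visual_motion) frame pairs."""
    results = []
    for audio, motion in frames:
        stats.frames_seen += 1
        if tiny_audio_gate(audio):          # always-on, low-power stage
            stats.heavy_invocations += 1    # wake the big model only here
            results.append(heavy_av_model(audio, motion))
        else:
            results.append("idle")
    return results

stats = CascadeStats()
labels = run_cascade([(0.1, 0.0), (0.9, 0.8), (0.2, 0.1), (0.7, 0.3)], stats)
print(labels, stats.heavy_invocations, stats.frames_seen)
```

The design point is that most frames exit at stage 1, so the heavy model's invocation count (and hence average power) stays well below the frame rate; hardware co-design then tunes where each stage runs on the device.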

