Andy presents at tinyML Research Symposium 2021


Andy Zhou presents a paper titled “Memory-Efficient, Limb Position-Aware Hand Gesture Recognition using Hyperdimensional Computing” at the “Application / ML Model Design” session of the tinyML Research Symposium held on March 26, 2021. Read the paper here and watch the talk here.

Abstract: Electromyogram (EMG) pattern recognition can be used to classify hand gestures and movements for human-machine interface and prosthetics applications, but it often faces reliability issues resulting from limb position change. One method to address this is dual-stage classification, in which the limb position is first determined using additional sensors to select between multiple position-specific gesture classifiers. While improving performance, this also increases model complexity and memory footprint, making a dual-stage classifier difficult to implement in a wearable device with limited resources. In this paper, we present sensor fusion of accelerometer and EMG signals using a hyperdimensional computing model to emulate dual-stage classification in a memory-efficient way. We demonstrate two methods of encoding accelerometer features to act as keys for retrieval of position-specific parameters from multiple models stored in superposition. Through validation on a dataset of 13 gestures in 8 limb positions, we obtain a classification accuracy of up to 93.34%, an improvement of 17.79% over using a model trained solely on EMG. We achieve this while only marginally increasing memory footprint over a single limb position model, requiring 8× less memory than a traditional dual-stage classification architecture.
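The key idea of storing position-specific models in superposition and retrieving them with a key can be sketched with classic hyperdimensional computing operations. The snippet below is a minimal illustration, not the paper's actual encoding or architecture: the position names, dimensionality, and use of random bipolar hypervectors with multiply-bind are assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10000  # hypervector dimensionality (illustrative choice)

def rand_hv():
    # random bipolar hypervector
    return rng.choice([-1, 1], size=D)

# hypothetical limb-position keys and position-specific model hypervectors
positions = ["arm_down", "arm_up", "arm_forward"]
keys = {p: rand_hv() for p in positions}
models = {p: rand_hv() for p in positions}

# store all position-specific models in one memory vector:
# bind each model with its position key, then superpose (sum)
memory = np.sum([keys[p] * models[p] for p in positions], axis=0)

def retrieve(position):
    # unbinding with the key (multiply is its own inverse for bipolar
    # vectors) yields a noisy estimate of that position's model
    noisy = memory * keys[position]
    # cleanup: return the stored model most similar to the estimate
    sims = {p: np.dot(noisy, models[p]) / D for p in positions}
    return max(sims, key=sims.get)

print(retrieve("arm_up"))
```

Because random high-dimensional bipolar vectors are nearly orthogonal, the cross terms from the other positions act as low-magnitude noise, so a single memory vector recovers each position-specific model with high probability, which is the property that keeps the footprint close to a single-position model.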

A. Zhou, R. Muller, J. Rabaey, “Memory-Efficient, Limb Position-Aware Hand Gesture Recognition using Hyperdimensional Computing,” “Application / ML Model Design” session, tinyML Research Symposium, March 2021.

Previous

Braeden presents at NER’21

Next

Cem and Meraj awarded Apple PhD Fellowships!