Biosignal-based transferable attention Bi-ConvGRU deep network for hand-gesture recognition towards online upper-limb prosthesis control

journal contribution
posted on 2022-07-25, 15:37, authored by Baao Xie, James Meng, Baihua Li, Andy Harland

Upper-limb amputation can significantly affect a person’s capabilities, with a dramatic impact on their quality of life. As a biological signal, the surface electromyogram (sEMG) provides a non-invasive means to measure the underlying muscle activation patterns that correspond to specific hand gestures. This project aims to develop a real-time, deep-learning-based recognition model to automatically and reliably recognise these complex signals for a wide range of daily hand gestures from amputees and non-amputees. This paper proposes an attention bidirectional Convolutional Gated Recurrent Unit (Bi-ConvGRU) deep neural network for hand-gesture recognition. By training on sEMG data from both amputees and non-amputees, the model learns to recognise a group of fine-grained hand movements. This is a significantly more challenging and underexplored problem than the coarse control of lower limbs addressed in existing studies. One-dimensional CNNs are first used to extract intra-channel features. The novel use of a bidirectional sequential GRU (Bi-GRU) deep neural network allows the correlation of muscle activation across multichannel sEMG signals to be explored from both prior and posterior time sequences. Importantly, an attention mechanism is employed after the Bi-GRU layers. This enables the model to learn the vital parts of the signal and the corresponding feature weights, increasing robustness to noise and irregularity in the biodata. Finally, we introduce a first-of-its-kind transfer learning scheme, demonstrating that a baseline model pre-trained on non-amputee data can be effectively refined with amputee data to build a personalised model for amputees. The attention Bi-ConvGRU was evaluated on the benchmark Ninapro database and achieved an average accuracy of 88.7%, outperforming the state of the art on 18-gesture recognition by 6.7%. To our knowledge, the developed end-to-end deep learning model is the first of its kind to enable reliable predictive decision making within short time windows (160 ms). This reduced latency is within the limits of physiological awareness, enabling the potential for real-time, online and thus more intuitive bio-control of prosthetic devices for amputees.
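
The abstract describes the architecture only at a high level, so the following is a minimal, illustrative PyTorch sketch of what an attention Bi-ConvGRU classifier for windowed sEMG could look like. The channel count (12), class count (18), kernel sizes and layer widths are assumptions for illustration, not the configuration reported in the paper; only the overall structure follows the description above: per-channel 1D convolutions, a bidirectional GRU over time, a soft attention over the Bi-GRU outputs, and a linear gesture classifier.

import torch
import torch.nn as nn


class AttentionBiConvGRU(nn.Module):
    """Illustrative sketch of an attention Bi-ConvGRU sEMG gesture classifier.

    Layer sizes, kernel sizes and channel/class counts are assumptions,
    not the configuration reported in the paper.
    """

    def __init__(self, n_channels=12, n_classes=18, feat_per_ch=8, gru_dim=128):
        super().__init__()
        conv_dim = n_channels * feat_per_ch
        # Grouped 1D convolutions: each sEMG channel is filtered independently,
        # a rough stand-in for the intra-channel feature extraction step.
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, conv_dim, kernel_size=5, padding=2,
                      groups=n_channels),
            nn.BatchNorm1d(conv_dim),
            nn.ReLU(),
        )
        # Bidirectional GRU over time fuses the channels and uses both
        # prior and posterior context within the window.
        self.bigru = nn.GRU(conv_dim, gru_dim, num_layers=2,
                            batch_first=True, bidirectional=True)
        # Simple soft attention: one learned score per time step.
        self.attn = nn.Linear(2 * gru_dim, 1)
        self.classifier = nn.Linear(2 * gru_dim, n_classes)

    def forward(self, x):
        # x: (batch, n_channels, time), e.g. one 160 ms sEMG window
        h = self.conv(x)                        # (batch, conv_dim, time)
        h, _ = self.bigru(h.transpose(1, 2))    # (batch, time, 2*gru_dim)
        w = torch.softmax(self.attn(h), dim=1)  # (batch, time, 1) weights
        context = (w * h).sum(dim=1)            # attention-weighted summary
        return self.classifier(context)         # (batch, n_classes) logits

On this reading, the transfer-learning step in the abstract would correspond to pre-training such a network on non-amputee sEMG windows and then continuing training on an individual amputee's data (for example with a reduced learning rate) to obtain a personalised model; the exact fine-tuning recipe is not specified in this record.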

History

School

  • Science

Department

  • Computer Science

Published in

Computer Methods and Programs in Biomedicine

Volume

224

Issue

2022

Publisher

Elsevier

Version

  • AM (Accepted Manuscript)

Rights holder

© Elsevier

Publisher statement

This paper was accepted for publication in the journal Computer Methods and Programs in Biomedicine and the definitive published version is available at https://doi.org/10.1016/j.cmpb.2022.106999

Acceptance date

2022-06-30

Publication date

2022-07-08

Copyright date

2022

ISSN

0169-2607

Language

  • en

Depositor

Prof Baihua Li. Deposit date: 19 July 2022

Article number

106999