File(s) under permanent embargo


Adaptive frameworks for robust myoelectric hand gesture prediction using machine learning and deep learning

posted on 20.05.2021, 14:10 by Carl Robinson
Natural, dependable prosthesis operation via a myoelectric interface is an extremely challenging problem. The technique uses surface electromyography (sEMG) to detect electrical signals representing muscle activity and converts them into corresponding prosthetic actions. The research conducted herein focuses on delivering reliable operational performance and movement dexterity via myoelectric control using machine learning (ML) and deep learning (DL) strategies. The intention is to investigate the possibility of providing an upper-limb amputee with the capability to accurately complete fine-grained hand gestures. To achieve this aim, three criteria are established as performance measures for evaluating the ML and DL solutions employed, namely robustness, adaptability, and continuous-simultaneous (C&S) control.
The work first investigated the classical ML methodology, in particular the feature engineering process, to ascertain whether feature-set size was important and whether a unique, robust feature set could be established. Combinations of time domain (TD) features were created from a series of 17 hand gestures performed by 11 subjects, taken from Database 2 of the Ninapro benchmark repository. Features were extracted using a sliding-window process and applied to five ML classifiers, of which Random Forest (RF) performed best. Results suggested a configuration of three simple features, root mean square (RMS), waveform length (WL), and slope sign changes (SSC), achieved performance (90.53% classification accuracy) comparable to larger, state-of-the-art feature sets (90.57%). This was built upon by research exploring the addition of time-frequency domain (TFD) features, based on wavelet transforms, and their effectiveness on both intact and amputee subjects. Features were extracted from the coefficients of a discrete wavelet transform (DWT), performed on the same 17 hand gestures of Ninapro Database 2 (intact) and Database 3 (amputee), creating 18 feature configurations. The aim was to identify any optimum configurations that combined features from both domains and whether any standout features were consistent across subject types. Findings showed that a five-feature, combined-domain configuration of TD-based RMS, WL, and SSC with TFD-based RMS and standard deviation performed best for intact and amputee subjects (90.98% and 75.16%, respectively). The minimal accuracy improvement suggested there was limited scope for adding the computationally heavy DWT and building a single, absolute feature configuration; more focus should therefore be applied to enhancing the classification method for robust, adaptable operation.
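The sliding-window extraction of the three time-domain features named above can be sketched as follows (a minimal Python illustration; the window length, increment, and SSC deadzone threshold here are assumptions, not the exact values used in the study):

```python
import math

def rms(window):
    # Root mean square: a measure of signal power within the window
    return math.sqrt(sum(x * x for x in window) / len(window))

def waveform_length(window):
    # Cumulative length of the waveform: sum of absolute sample-to-sample differences
    return sum(abs(window[i + 1] - window[i]) for i in range(len(window) - 1))

def slope_sign_changes(window, threshold=0.01):
    # Count changes in the sign of the slope, gated by a small
    # threshold to suppress noise-induced changes
    count = 0
    for i in range(1, len(window) - 1):
        left = window[i] - window[i - 1]
        right = window[i] - window[i + 1]
        if left * right > 0 and (abs(left) > threshold or abs(right) > threshold):
            count += 1
    return count

def extract_features(signal, win_len=200, step=100):
    # Slide a window along one sEMG channel and emit the
    # (RMS, WL, SSC) triple for each window position
    feats = []
    for start in range(0, len(signal) - win_len + 1, step):
        w = signal[start:start + win_len]
        feats.append((rms(w), waveform_length(w), slope_sign_changes(w)))
    return feats
```

In practice each channel's triples would be concatenated across electrodes to form the classifier's input vector for that window.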
The next research component explored the requirements for accurately predicting user intention with DL when performing fine-grained hand movements. This signified a switch from classification to a regression approach, enabling investigation into C&S control of multiple joints of the hand. The focus was on combining a feature engineering process with the capacity of DL to further identify salient biological characteristics. The established three-feature TD configuration was used, taken from 17 hand gestures of 40 subjects from Ninapro Database 2. The sEMG feature data were mapped to six sensors of a CyberGlove II, located at wrist, finger, and thumb joints of interest, representing the associated hand kinematic data. A two-layer bidirectional gated recurrent unit (Bidi-GRU) model proved most consistent, with a root mean square error (RMSE) of 3.50 and an R² of 98.04% during prediction tests.
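The two regression measures quoted above, RMSE and the coefficient of determination (R²), can be computed as follows (a minimal pure-Python sketch; the variable names are illustrative):

```python
import math

def rmse(y_true, y_pred):
    # Root mean square error between measured and predicted joint angles
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def r2_score(y_true, y_pred):
    # Coefficient of determination: 1 - (residual sum of squares /
    # total sum of squares); 1.0 indicates a perfect fit
    mean_t = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot
```

RMSE is reported in the units of the target (here, joint angle), while R² expresses the fraction of variance in the measured kinematics that the model explains.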
For the final element, a study of a DL-only solution to robust, adaptable, C&S control of multiple hand joints was undertaken. The feature learning capabilities of four DL models, using both raw sEMG and feature data, were compared against the established feature-based Bidi-GRU; three classical ML algorithms were also included in the comparison. A novel database (LeapMyo) was created using low-cost wearable sensor hardware to acquire sEMG and joint angle data from 14 joints of the hand, for 12 subjects performing 12 hand gestures. Data augmentation of the feature input was also employed to explore any enhancement of DL model performance. Finally, a mapping framework was developed using a partial least squares (PLS) method to directly predict hand kinematic data from sEMG. It was found that combining a feature engineering input with a temporal DL model (FEAT-BGRU) gave the best prediction performance (1.92° RMSE) when compared with feature learning and classical ML strategies, even with data augmentation applied. This suggests there is still a place for human expertise, despite the continued growth of more automated DL approaches. Interestingly, adding batch normalisation (BN) layers to this model improved performance further (1.78° RMSE and 99.37% R²), which we attribute to BN constraining each layer's output to a stronger data range from which the fully connected layer can make a more accurate joint angle prediction. The PLS framework performed less well (13.80° RMSE and 69.38% R²), indicating the importance of using kinematic history data as model input alongside sEMG features.
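The batch normalisation step credited with the final improvement can be illustrated with the standard BN transform (a minimal sketch, not the study's exact implementation; gamma, beta, and epsilon are the usual learnable scale, shift, and numerical-stability constant):

```python
import math

def batch_norm(batch, gamma=1.0, beta=0.0, eps=1e-5):
    # Normalise a batch of layer outputs to zero mean and unit variance,
    # then apply a learnable scale (gamma) and shift (beta). This constrains
    # the data range presented to the following fully connected layer.
    mean = sum(batch) / len(batch)
    var = sum((x - mean) ** 2 for x in batch) / len(batch)
    return [gamma * (x - mean) / math.sqrt(var + eps) + beta for x in batch]
```

Whatever the scale of the recurrent layer's activations, the normalised outputs occupy a consistent range, which is the effect the BN explanation above appeals to.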



  • Science


  • Computer Science


Loughborough University

Rights holder

© Carl Peter Robinson


A Doctoral Thesis. Submitted in partial fulfilment of the requirements for the award of the degree of Doctor of Philosophy of Loughborough University.




Baihua Li; Qinggang Meng; Matthew Pain
