Learning and visualization of features using MC-DCNN for gait training considering physical individual differences

Yusuke Osawa, Keiichi Watanuki, Kazunori Kaede, Keiichi Muramatsu

Research output: Contribution to journal › Article › peer-review

Abstract

Several training methods have been developed to acquire motion information during real-time walking; these methods also feed the information back to the trainee. Trainees adjust their gait so that the measured value approaches a target value, which may not be suitable for every trainee. Therefore, we aim to develop a gait feedback training system that accounts for individual differences, classifies the trainee's gait, and identifies which body parts to adjust and when. A convolutional neural network (CNN) has a feature extraction function and is robust to shifts in feature position; therefore, it can be used to classify a gait as ideal or non-ideal. Additionally, when gradient-weighted class activation mapping (Grad-CAM) is applied to the gait classification model, the output indicates the degree to which each of the trainee's body parts contributed to the classification result. Thus, the trainee can visually determine the body parts that need to be adjusted. In this study, we focused on gaits related to stumbling. We measured kinematic and kinetic data for the participants and generated multivariate gait data, which were labeled as either the “gait rarely associated with stumbling” class or the “gait frequently associated with stumbling” class using clustering with dynamic time warping. Next, a multichannel deep CNN (MC-DCNN) was trained on the multivariate gait data and the corresponding class labels. Finally, verification data were input into the MC-DCNN model, and the influence of each region of the multivariate gait data on the classification was visualized using Grad-CAM. The MC-DCNN model classified gaits with a high accuracy of 97.64 ± 0.40%, and it learned the features that determine the thumb-to-ground distance. The Grad-CAM output indicated the body parts, timing, and relative strength of the features that have an important effect on the thumb-to-ground distance.
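The labeling step above relies on dynamic time warping (DTW), which aligns gait time series that differ in timing before measuring their dissimilarity. The paper does not publish its implementation; the following is a minimal sketch of the standard DTW distance (the recurrence over a cumulative-cost matrix) that such clustering could build on. The function name and the toy sequences are illustrative, not taken from the paper.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences.

    Builds the cumulative-cost matrix with the standard recurrence:
    cost[i, j] = |a[i] - b[j]| + min(insert, delete, match).
    """
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

# Two sequences with the same shape but shifted in time:
# Euclidean distance would penalize the shift, DTW aligns it away.
a = np.array([0.0, 0.0, 1.0, 0.0])
b = np.array([0.0, 1.0, 0.0, 0.0])
print(dtw_distance(a, b))  # → 0.0
```

Because the warping path absorbs timing differences, two gait cycles with the same joint-angle pattern but different cadence are treated as similar, which is why DTW-based clustering is a natural choice for grouping gaits across individuals.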

Original language: English
Pages (from-to): 1-17
Number of pages: 17
Journal: Journal of Biomechanical Science and Engineering
Volume: 16
Issue number: 1
DOIs
Publication status: Published - 2021
Externally published: Yes

Keywords

  • Gait training
  • Grad-CAM
  • Healthcare
  • Motion analysis
  • Neural network
  • Walking factor

ASJC Scopus subject areas

  • Biomedical Engineering
