FatigueFormer: Static-Temporal Feature Fusion for Robust sEMG-Based Muscle Fatigue Recognition

Tong Zhang, Hong Guo, Shuangzhou Yan, Dongkai Weng, Jian Wang, Hongxin Zhang

Abstract

We present FatigueFormer, a semi-end-to-end framework that combines saliency-guided feature separation with deep temporal modeling to learn interpretable and generalizable muscle fatigue dynamics from surface electromyography (sEMG). Unlike prior approaches, which struggle to maintain robustness across varying Maximum Voluntary Contraction (MVC) levels due to signal variability and low SNR, FatigueFormer employs parallel Transformer-based sequence encoders to separately capture static and temporal feature dynamics, fusing their complementary representations to stabilize performance across low- and high-MVC conditions. Evaluated on a self-collected dataset spanning 30 participants across four MVC levels (20–80%), it achieves state-of-the-art accuracy and strong generalization under mild-fatigue conditions. Beyond performance, FatigueFormer enables attention-based visualization of fatigue dynamics, revealing how feature groups and time windows contribute differently across MVC levels and offering interpretable insight into fatigue progression.
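The parallel-encoder design can be sketched at the fusion stage. The snippet below is a minimal NumPy illustration, not the paper's implementation: it assumes each encoder yields a pooled D-dimensional representation and that fusion is concatenation followed by a linear classification head (the abstract only states that the complementary representations are fused for fatigue recognition; the fusion operator and head are assumptions here).

```python
import numpy as np

def fuse_and_classify(h_static, h_temporal, W_out, b_out):
    """Fuse the two encoder outputs and score fatigue classes.

    h_static / h_temporal: pooled D-dim representations from the parallel
    static and temporal encoders. Concatenation fusion and a linear head
    are illustrative assumptions, not the paper's stated architecture.
    """
    z = np.concatenate([h_static, h_temporal])   # (2D,) fused representation
    logits = W_out @ z + b_out                   # (C,) one score per fatigue state
    return logits

# Toy example with hypothetical dimensions.
rng = np.random.default_rng(1)
D, C = 16, 3                          # embedding dim; 3 states (relaxed/exerted/fatigued)
h_s = rng.standard_normal(D)          # static-encoder output
h_t = rng.standard_normal(D)          # temporal-encoder output
W_out = rng.standard_normal((C, 2 * D))
b_out = np.zeros(C)

logits = fuse_and_classify(h_s, h_t, W_out, b_out)
print(logits.shape)                   # (3,)
```

Concatenation keeps the two representations disentangled until the final head, which is one common way to let a classifier weight static and temporal evidence independently.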

Paper Structure

This paper contains 42 sections, 3 equations, 7 figures, 10 tables.

Figures (7)

  • Figure 1: Overview of sEMG characteristics and fatigue dynamics. (a) physiological sEMG and noise; (b) raw sEMG across MVC levels; (c) fatigue-sensitive features (e.g., MF, DET) decreasing with fatigue.
  • Figure 2: Overview of the proposed semi–end-to-end framework. Left: the sEMG feature extraction engine produces increasing- and decreasing-type descriptors, which are fed into the temporal and static encoders in parallel. Right: both encoders adopt a Transformer-based sequence module with similar structure but independent parameters. Their fused representations are finally used for fatigue recognition.
  • Figure 3: The feature tokenizer maps an n-dimensional statistical feature vector into an (n+1)×D embedding sequence.
  • Figure 4: Temporal attention visualization across and within MVC levels. (a) Cross-MVC heatmap illustrates the evolution of attention intensity across time windows (W1–W5) under different contraction levels (20–80% MVC). (b–c) Representative attention distributions at low (20%) and high (80%) MVC show how the model reallocates its temporal focus across fatigue states (relaxed, exerted, fatigued).
  • Figure 5: Static attention maps across MVC levels. (a) Self-attention maps show feature-group dependencies between increasing (INC) and decreasing (DEC) signals across four MVC levels (20%, 40%, 60%, 80%). (b) Cross-attention maps illustrate inter-group interactions (DEC→INC and INC→DEC) under the same MVC ordering.
  • ...and 2 more figures
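The tokenizer of Figure 3 can be sketched concretely. The snippet below is a minimal NumPy sketch under one common assumption (an FT-Transformer-style tokenizer, where each scalar feature gets its own learned embedding direction and bias, and a learned [CLS] token is prepended); the paper's actual tokenizer parameters are not specified here, and all dimensions are illustrative.

```python
import numpy as np

def feature_tokenizer(x, W, b, cls_token):
    """Map an n-dim statistical feature vector to an (n+1) x D sequence.

    Each scalar feature x[i] is scaled by its own learned direction W[i]
    (shape D) and shifted by a per-feature bias b[i] (shape D); a learned
    [CLS] token is prepended, giving one extra row for classification.
    """
    tokens = x[:, None] * W + b            # (n, D): one token per feature
    return np.vstack([cls_token, tokens])  # (n+1, D) embedding sequence

# Toy example with hypothetical dimensions.
rng = np.random.default_rng(0)
n, D = 6, 8                        # 6 statistical features, embedding dim 8
x = rng.standard_normal(n)         # one window's feature vector
W = rng.standard_normal((n, D))    # learned per-feature directions
b = rng.standard_normal((n, D))    # learned per-feature biases
cls = rng.standard_normal((1, D))  # learned [CLS] token

seq = feature_tokenizer(x, W, b, cls)
print(seq.shape)                   # (7, 8), i.e. (n+1, D)
```

Per-feature embeddings let the downstream attention layers treat each statistical descriptor (e.g., MF, DET) as its own token, which is what makes feature-level attention maps like those in Figure 5 possible.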