NEMESIS: Noise-suppressed Efficient MAE with Enhanced Superpatch Integration Strategy

Kyeonghun Kim, Hyeonseok Jung, Youngung Han, Hyunsu Go, Eunseob Choi, Seongbin Park, Junsu Lim, Jiwon Yang, Sumin Lee, Insung Hwang, Ken Ying-Kai Liao, Nam-Joon Kim

Abstract

Volumetric CT imaging is essential for clinical diagnosis, yet annotating 3D volumes is expensive and time-consuming, motivating self-supervised learning (SSL) from unlabeled data. However, applying SSL to 3D CT remains challenging due to the high memory cost of full-volume transformers and the anisotropic spatial structure of CT data, which is not well captured by conventional masking strategies. We propose NEMESIS, a masked autoencoder (MAE) framework that operates on local $128^3$ superpatches, enabling memory-efficient training while preserving anatomical detail. NEMESIS introduces three key components: (i) noise-enhanced reconstruction as a pretext task, (ii) Masked Anatomical Transformer Blocks (MATB) that perform dual-masking through parallel plane-wise and axis-wise token removal, and (iii) NEMESIS Tokens (NT) for cross-scale context aggregation. On the BTCV multi-organ classification benchmark, NEMESIS with a frozen backbone and a linear classifier achieves a mean AUROC of 0.9633, surpassing fully fine-tuned SuPreM (0.9493) and VoCo (0.9387). Under a low-label regime with only 10% of available annotations, it retains an AUROC of 0.9075, demonstrating strong label efficiency. Furthermore, the superpatch-based design reduces computational cost to 31.0 GFLOPs per forward pass, compared to 985.8 GFLOPs for the full-volume baseline, providing a scalable and robust foundation for 3D medical imaging.
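The superpatch sampling described above can be sketched in a few lines. This is an illustrative example only: the NEMESIS implementation is not shown in this page, so the function name and the use of NumPy here are assumptions, not the authors' code.

```python
import numpy as np

def random_superpatch(volume: np.ndarray, size: int = 128) -> np.ndarray:
    """Randomly crop a cubic superpatch from a 3D CT volume.

    Hypothetical sketch of the superpatch sampling idea: training on
    local 128^3 crops instead of the full volume keeps memory bounded.
    """
    d, h, w = volume.shape
    # Pick a random corner so the crop fits entirely inside the volume.
    z = np.random.randint(0, d - size + 1)
    y = np.random.randint(0, h - size + 1)
    x = np.random.randint(0, w - size + 1)
    return volume[z:z + size, y:y + size, x:x + size]

# Example with a synthetic 256^3 "CT volume".
vol = np.zeros((256, 256, 256), dtype=np.float32)
patch = random_superpatch(vol)
print(patch.shape)  # (128, 128, 128)
```

Because each forward pass sees only a $128^3$ crop rather than the full volume, the transformer's token count (and hence its attention cost) stays fixed regardless of the original scan size.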

Paper Structure

This paper contains 16 sections, 6 equations, 5 figures, and 5 tables.

Figures (5)

  • Figure 1: The overall architecture of the NEMESIS backbone. The input superpatch is divided into volume patches and processed through the patch embedding module. The framework utilizes dual-masking and MATB-based encoders and decoders to learn high-level anatomical representations in the latent space.
  • Figure 2: Overview of the superpatch-based pretraining pipeline. NEMESIS randomly crops $128^3$ superpatches from the original 3D CT volume to ensure memory efficiency and robust feature extraction.
  • Figure 3: Detailed architecture of the 3D Adaptive Patch Embedding module, integrating Linear Projection (LP) and Superpatch-Embedder (SE) pathways.
  • Figure 4: Detailed architecture of the MATB, featuring parallel axis-wise and plane-wise attention streams to capture comprehensive 3D context.
  • Figure 5: Computational efficiency analysis. (a) Efficiency-quality trade-off. (b) GFLOPs per forward pass. NEMESIS ($SP=128^3$) achieves a $32\times$ reduction in GFLOPs compared to the VAE baseline while maintaining superior reconstruction quality.
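The dual-masking performed by the MATB (Figure 4) removes tokens in two parallel streams, plane-wise and axis-wise. The sketch below illustrates one plausible reading of that idea on an $8^3$ token grid (a $128^3$ superpatch with $16^3$ volume patches); the grid size, masking ratios, and choice of axis are assumptions for illustration, not the paper's reported settings.

```python
import numpy as np

def dual_mask(grid=(8, 8, 8), plane_ratio=0.5, line_ratio=0.5, seed=0):
    """Sketch of dual-masking: one boolean mask drops whole planes
    along the depth axis (plane-wise stream), a parallel mask drops
    whole depth-axis lines (axis-wise stream). True = token removed.
    All ratios and axes here are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    d, h, w = grid

    # Plane-wise stream: remove entire axial planes at random depths.
    plane_mask = np.zeros(grid, dtype=bool)
    dropped = rng.choice(d, size=int(d * plane_ratio), replace=False)
    plane_mask[dropped, :, :] = True

    # Axis-wise stream: remove entire depth lines at random (h, w) sites.
    axis_mask = np.zeros(grid, dtype=bool)
    sites = rng.choice(h * w, size=int(h * w * line_ratio), replace=False)
    axis_mask.reshape(d, -1)[:, sites] = True

    return plane_mask, axis_mask

pm, am = dual_mask()
print(pm.mean(), am.mean())  # both streams mask 50% of tokens here
```

The two masks expose complementary structure: the plane-wise stream forces reconstruction across slices (in-plane anatomy), while the axis-wise stream forces reconstruction along the through-plane direction, which is where CT anisotropy is most pronounced.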