Structural Attention: Rethinking Transformer for Unpaired Medical Image Synthesis

Vu Minh Hieu Phan, Yutong Xie, Bowen Zhang, Yuankai Qi, Zhibin Liao, Antonios Perperidis, Son Lam Phung, Johan W. Verjans, Minh-Son To

TL;DR

Unpaired multi-modal medical image synthesis enables cross-modality diagnostics without costly paired data, but vanilla Transformer architectures struggle because of their weak inductive biases. The authors propose UNest, a UNet Structured Transformer that injects a structural inductive bias through foreground-specific structural attention and background local attention, guided by SAM-derived foreground masks. The method combines a patch-level foreground classifier, a dual-attention Structural Transformer block, and a CycleGAN-based objective with an added foreground-mask loss. Across six synthesis tasks spanning MR, CT, and PET (MRXFDG and AutoPET datasets), UNest reduces MAE by up to 19.30% over strong baselines, with statistically significant improvements in MAE, PSNR, and SSIM, demonstrating the value of structural priors for unpaired cross-modality translation in clinical settings.

Abstract

Unpaired medical image synthesis aims to provide complementary information for accurate clinical diagnosis and to address the challenges of obtaining aligned multi-modal medical scans. Transformer-based models excel in imaging translation tasks thanks to their ability to capture long-range dependencies. Although effective in supervised training settings, their performance falters in unpaired image synthesis, particularly in synthesizing structural details. This paper empirically demonstrates that, lacking strong inductive biases, Transformers can converge to non-optimal solutions in the absence of paired data. To address this, we introduce the UNet Structured Transformer (UNest), a novel architecture incorporating structural inductive biases for unpaired medical image synthesis. We leverage the Segment Anything Model foundation model to precisely extract the foreground structure and perform structural attention within the main anatomy. This guides the model to learn key anatomical regions, thus improving structural synthesis despite the lack of paired supervision in unpaired training. Evaluated on two public datasets spanning three modalities, i.e., MR, CT, and PET, UNest improves on recent methods by up to 19.30% across six medical image synthesis tasks. Our code is released at https://github.com/HieuPhan33/MICCAI2024-UNest.
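The core idea of the dual-attention strategy is to restrict self-attention so that foreground (anatomy) tokens attend only among themselves, rather than mixing with irrelevant background. The sketch below illustrates this with plain NumPy; it is a simplified illustration, not the paper's implementation (in particular, the background branch here reuses group-wise attention, whereas UNest applies local attention to background tokens, and the `fg_mask` flags stand in for SAM-derived foreground masks).

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Standard scaled dot-product attention.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

def structural_attention(tokens, fg_mask):
    """Route foreground tokens to attend only among themselves
    (structural attention), and background tokens likewise.
    tokens: (N, d) patch embeddings; fg_mask: (N,) boolean flags,
    standing in for a SAM-derived foreground mask."""
    out = np.empty_like(tokens)
    for group in (fg_mask, ~fg_mask):
        if group.any():
            t = tokens[group]
            out[group] = attention(t, t, t)
    return out

rng = np.random.default_rng(0)
tokens = rng.standard_normal((6, 8))
fg = np.array([True, True, False, True, False, False])
out = structural_attention(tokens, fg)
print(out.shape)  # (6, 8)
```

A useful property of this split, visible in the figure-1 attention maps, is that foreground outputs are by construction independent of background content: perturbing a background token leaves every foreground token's output unchanged.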

Paper Structure

This paper contains 4 sections, 11 equations, 5 figures, and 2 tables.

Figures (5)

  • Figure 1: (a) Synthetic MR-to-CT results of different ViT methods: ResViT [dalmaz2022resvit], UNETR [hatamizadeh2022unetr], and the proposed UNest, where UNest most accurately preserves the structural cavity. (b) Attention maps for two patches: one in a smooth brain region (highlighted by a purple star) and one in a structural nasal cavity (indicated by a red star). Transformer methods tend to focus on less relevant background features.
  • Figure 2: (a) The UNest architecture uses Structural Transformer blocks (pink block), shown in (b), to perform a dual-attention strategy on foreground and background tokens separately. The decoder upsamples features using deconvolutional layers and skip-connections from early encoder layers. (c) CycleGAN with UNest generators.
  • Figure 3: Visual results for PET-to-CT on the AutoPET dataset [gatidis2022whole].
  • Figure 4: Visual results of different methods for MRI-to-PET and MRI-to-CT translation on the MRXFDG dataset.
  • Figure 5: Left: Error maps of synPET produced by UNest with FG-S + BG-S attention, and our hybrid S-L attention. Right: Attention maps of global and structural attention.
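As noted in the TL;DR, UNest is trained with a CycleGAN-based objective plus a foreground-mask loss. The sketch below shows one plausible shape of such an objective, assuming identity-style generator callables `G_xy`/`G_yx`, a foreground-mask extractor `M` (a stand-in for a SAM-derived mask), and placeholder weights `lam_cyc`/`lam_fg`; the adversarial terms are omitted for brevity, and the paper's exact foreground-mask loss may differ.

```python
import numpy as np

def l1(a, b):
    # Mean absolute error between two arrays.
    return float(np.abs(a - b).mean())

def unpaired_objective(x, y, G_xy, G_yx, M, lam_cyc=10.0, lam_fg=1.0):
    """CycleGAN-style objective with an added foreground-mask term
    (adversarial losses omitted). G_xy, G_yx: generators between the
    two modalities; M: foreground-mask extractor; lam_* are
    illustrative weights, not the paper's values."""
    fake_y, fake_x = G_xy(x), G_yx(y)
    # Cycle-consistency: translating there and back recovers the input.
    cyc = l1(G_yx(fake_y), x) + l1(G_xy(fake_x), y)
    # Foreground-mask loss (one plausible form): the translated image
    # should preserve the source's foreground shape.
    fg = l1(M(fake_y), M(x)) + l1(M(fake_x), M(y))
    return lam_cyc * cyc + lam_fg * fg

# Toy demo: a square "anatomy" in two intensity-scaled modalities.
x = np.zeros((4, 4)); x[1:3, 1:3] = 1.0
y = 2.0 * x
ident = lambda a: a
M = lambda a: (np.abs(a) > 0.5).astype(float)
print(unpaired_objective(x, y, ident, ident, M))  # 0.0 for identity generators
```

With identity generators both terms vanish, while a generator that shifts intensities (and hence distorts the recovered image and the foreground mask) is penalized.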