A Generative Foundation Model for Multimodal Histopathology

Jinxi Xiang, Mingjie Li, Siyu Hou, Yijiang Chen, Xiangde Luo, Yuanfeng Ji, Xiang Zhou, Ehsan Adeli, Akshay Chaudhari, Curtis P. Langlotz, Kilian M. Pohl, Ruijiang Li

Abstract

Accurate diagnosis and treatment of complex diseases require integrating histological, molecular, and clinical data, yet in practice these modalities are often incomplete owing to tissue scarcity, assay cost, and workflow constraints. Existing computational approaches attempt to impute missing modalities from available data but rely on task-specific models trained on narrow, single source-target pairs, limiting their generalizability. Here we introduce MuPD (Multimodal Pathology Diffusion), a generative foundation model that embeds hematoxylin and eosin (H&E)-stained histology, molecular RNA profiles, and clinical text into a shared latent space through a diffusion transformer with decoupled cross-modal attention. Pretrained on 100 million histology image patches, 1.6 million text-histology pairs, and 10.8 million RNA-histology pairs spanning 34 human organs, MuPD supports diverse cross-modal synthesis tasks with minimal or no task-specific fine-tuning. For text-conditioned and image-to-image generation, MuPD synthesizes histologically faithful tissue architectures, reducing Fréchet inception distance (FID) scores by 50% relative to domain-specific models and improving few-shot classification accuracy by up to 47% through synthetic data augmentation. For RNA-conditioned histology generation, MuPD reduces FID by 23% compared with the next-best method while preserving cell-type distributions across five cancer types. As a virtual stainer, MuPD translates H&E images to immunohistochemistry and multiplex immunofluorescence, improving average marker correlation by 37% over existing approaches. These results demonstrate that a single, unified generative model pretrained across heterogeneous pathology modalities can substantially outperform specialized alternatives, providing a scalable computational framework for multimodal histopathology.
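
To make the architectural claim concrete, here is a minimal PyTorch sketch of a diffusion-transformer block with decoupled cross-modal attention, in the spirit of the description above. All dimensions, module names, and the residual wiring are illustrative assumptions; the paper's exact design (e.g., AdaLN timestep conditioning or attention gating) is not reproduced here.

```python
# Hedged sketch of a DiT-style block with decoupled cross-modal attention.
# Dimensions and wiring are assumptions for illustration only.
import torch
import torch.nn as nn

class DecoupledCrossModalBlock(nn.Module):
    """Self-attention over image latents, then parallel ("decoupled")
    cross-attention streams for text and RNA conditions whose outputs
    are summed back into the image stream."""
    def __init__(self, dim: int = 768, heads: int = 12):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # One cross-attention module per conditioning modality.
        self.text_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.rna_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.norm3 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, x, text_ctx, rna_ctx):
        # x: (B, N, dim) noisy image latent tokens
        # text_ctx: (B, Lt, dim), rna_ctx: (B, Lr, dim) condition embeddings
        h = self.norm1(x)
        x = x + self.self_attn(h, h, h, need_weights=False)[0]
        h = self.norm2(x)
        # Parallel cross-attention: each modality is queried independently
        # and the results are summed, so either condition can be dropped.
        x = x + self.text_attn(h, text_ctx, text_ctx, need_weights=False)[0] \
              + self.rna_attn(h, rna_ctx, rna_ctx, need_weights=False)[0]
        x = x + self.mlp(self.norm3(x))
        return x
```

Summing parallel streams, rather than concatenating all conditions into one sequence, is one common way a single model can accept any subset of modalities at inference time, which would be consistent with the range of cross-modal synthesis tasks described above.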


Figures (12)

  • Figure 1: Study overview. a, MuPD framework. H&E-stained histology serves as a bridging modality that integrates multi-scale data, from molecular transcriptomics and proteomics to tissue architecture and clinical text, enabling cross-modal generation across physical scales. b, Pretraining dataset, spanning 34 organs and comprising 100 million H&E image patches, 1.6 million text-image pairs, and 10.8 million RNA-image pairs. c, MuPD architecture, built from DiT modules with decoupled cross-modal attention that process image, text, and RNA conditioning signals in parallel streams. MUSK (Xiang et al., 2025) provides text/image guidance and Virchow2 (Zimmermann et al., 2024) provides feature distillation during pretraining. d, Benchmarking of MuPD against other methods. We report relative FID (normalized to MuPD = 1.00; lower is better), AUC for HE2IHC, and Pearson correlation for HE2mIF (higher is better for both); a minimal relative-FID sketch follows this list.
  • Figure 1: Fresh-frozen to FFPE image translation results. Visual comparison of translation quality on lung and brain tissues. MuPD suppresses the cryo-artifacts typical of fresh-frozen sections while synthesizing realistic H&E staining characteristics; baselines often suffer from saturation (AI-FFPE) or residual noise (CUT, CycleGAN). Quantitative evaluation shows MuPD achieves the lowest FID (lower is better).
  • Figure 2: Image generation conditioned on image or text prompts. a, Image-to-image generation. Representative examples and quantitative benchmarks demonstrate that MuPD preserves authentic biological structures with greater fidelity than competing baselines, achieving superior image-image similarity and FID. b, Text-to-image generation. Visual examples illustrate that MuPD accurately reconstructs fine-grained histological features from text prompts, with superior performance across image-image similarity, text-image alignment, and FID. All metrics are presented as median values with 95% bootstrap confidence intervals (↑ higher is better; ↓ lower is better).
  • Figure 2: Generating H&E images from spatial transcriptomics. Synthetic samples preserve spatially resolved cell-type distributions for DLBCL, GBM, mesothelioma, and ovarian cancer. The Wasserstein distance (W) quantifies the alignment between real and synthetic distributions (lower is better); see the Wasserstein sketch after this list.
  • Figure 3: Training-data augmentation using MuPD. a, Few-shot classification augmented with MuPD via image-to-image generation. Augmenting with MuPD-synthesized morphological variants consistently improves classification accuracy in both 5-shot and 10-shot settings on five evaluated datasets, demonstrating robust generalization under data-scarce conditions. b, Pathology text-image retrieval augmented with MuPD-generated image-text pairs. Fine-tuning a vanilla CLIP model on these synthetic pairs substantially improves both text-to-image (T2I) and image-to-text (I2T) retrieval at R@10 and R@50.
  • ...and 7 more figures
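
As a complement to the benchmark panel in Figure 1d, below is a minimal sketch of how a relative-FID number could be computed. It assumes the torchmetrics implementation of FID and uses random stand-in tensors; the paper's actual feature extractor, patch preprocessing, and reference FID value are not specified here.

```python
# Hedged sketch of FID and the "relative FID" normalization in Fig. 1d.
# Stand-in tensors replace real/synthetic H&E patches.
import torch
from torchmetrics.image.fid import FrechetInceptionDistance

fid = FrechetInceptionDistance(feature=2048)
real = torch.randint(0, 256, (64, 3, 299, 299), dtype=torch.uint8)  # stand-in real patches
fake = torch.randint(0, 256, (64, 3, 299, 299), dtype=torch.uint8)  # stand-in synthetic patches
fid.update(real, real=True)
fid.update(fake, real=False)
fid_method = fid.compute().item()

# Relative FID divides each method's FID by the reference model's FID
# (reference = 1.00), so values above 1 indicate worse fidelity.
fid_reference = 10.0  # hypothetical FID of the reference model
print(f"Relative FID = {fid_method / fid_reference:.2f}")
```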
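For the spatial-transcriptomics comparison (Figure 2), here is a hedged sketch of the distribution-alignment metric: the 1-D Wasserstein distance between per-spot cell-type fractions of real and synthetic tissue. The cell-type labels and beta-distributed fractions are purely illustrative stand-ins for the paper's actual deconvolution outputs.

```python
# Hedged sketch: 1-D Wasserstein distance between real and synthetic
# cell-type fraction distributions. All data below are illustrative.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
cell_types = ["tumor", "lymphocyte", "stroma"]  # hypothetical labels
# Per-spot cell-type fractions estimated from real and synthetic H&E.
real = {ct: rng.beta(2, 5, size=500) for ct in cell_types}
synth = {ct: rng.beta(2, 5, size=500) for ct in cell_types}

for ct in cell_types:
    w = wasserstein_distance(real[ct], synth[ct])
    print(f"{ct}: W = {w:.4f}")  # lower W = better distribution alignment
```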