
HyperCT: Low-Rank Hypernet for Unified Chest CT Analysis

Fengbei Liu, Sunwoo Kwak, Hao Phung, Nusrat Binta Nizam, Ilan Richter, Nir Uriel, Hadar Averbuch-Elor, Deborah Estrin, Mert R. Sabuncu

Abstract

Non-contrast chest CTs offer a rich opportunity for both conventional pulmonary and opportunistic extra-pulmonary screening. While Multi-Task Learning (MTL) can unify these diverse tasks, standard hard-parameter sharing approaches are often suboptimal for modeling distinct pathologies. We propose HyperCT, a framework that dynamically adapts a Vision Transformer backbone via a Hypernetwork. To ensure computational efficiency, we integrate Low-Rank Adaptation (LoRA), allowing the model to regress task-specific low-rank weight updates rather than full parameters. Validated on a large-scale dataset of radiological and cardiological tasks, HyperCT outperforms various strong baselines, offering a unified, parameter-efficient solution for holistic patient assessment. Our code is available at https://github.com/lfb-1/HyperCT.
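The core mechanism described in the abstract, a hypernetwork that regresses task-specific low-rank weight updates from learnable task embeddings, can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: all dimensions (`d_model`, `rank`, `embed_dim`), the scaling `alpha`, and the single-layer setup are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, rank, embed_dim, alpha = 16, 4, 8, 8.0

# Frozen base weight of one linear layer in the backbone.
W_base = rng.standard_normal((d_model, d_model)) * 0.02

# Hypernet h: linear maps that regress the LoRA factors
# A (rank x d_model) and B (d_model x rank) from a task embedding e.
H_A = rng.standard_normal((embed_dim, rank * d_model)) * 0.02
H_B = rng.standard_normal((embed_dim, d_model * rank)) * 0.02

def lora_delta(e):
    """Regress a task-specific low-rank update Delta W = (alpha/rank) * B @ A."""
    A = (e @ H_A).reshape(rank, d_model)
    B = (e @ H_B).reshape(d_model, rank)
    return (alpha / rank) * (B @ A)

# Learnable task embeddings e^1, e^2, e^3, one per task.
task_embeddings = rng.standard_normal((3, embed_dim))

x = rng.standard_normal(d_model)  # a feature vector entering the layer
for t, e in enumerate(task_embeddings, start=1):
    W_t = W_base + lora_delta(e)  # modulated weight for task t
    y_t = W_t @ x                 # task-specific forward pass
    print(f"task {t}: output shape {y_t.shape}")
```

The key property is that each `lora_delta(e)` has rank at most `rank`, so the hypernet only has to produce `2 * rank * d_model` numbers per layer instead of a full `d_model x d_model` matrix.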

Paper Structure

This paper contains 36 sections, 4 equations, 11 figures, 15 tables.

Figures (11)

  • Figure 1: Overview of HyperCT. Given a set of learnable task embeddings, e.g., $\{\mathbf{e}^1, \mathbf{e}^2, \mathbf{e}^3\}$, a hypernet $h$ produces task-specific weight adjustments $\Delta \mathbf{W}^1, \Delta \mathbf{W}^2, \Delta \mathbf{W}^3$, which modulate the weights of the base model. The base model produces task-specific predictions $\{\hat{\mathbf{y}}^1, \hat{\mathbf{y}}^2, \hat{\mathbf{y}}^3\}$. These outputs are compared with ground-truth task labels $\{{\mathbf{y}}^1, {\mathbf{y}}^2, {\mathbf{y}}^3 \}$ via Binary Cross-Entropy Loss.
  • Figure 2: Decision Curve Analysis on CU prospective cohort for all 7 opportunistic cardiac tasks. HyperCT (blue) shows positive net benefit above "treat all" (orange) and "treat none" (gray) baselines across clinically relevant thresholds (5-80%).
  • Figure 3: Decision Curve Analysis on WCM prospective cohort (external validation). HyperCT demonstrates consistent clinical utility across institutions, with positive net benefit maintained for all 7 opportunistic tasks.
  • Figure 4: Principal Component Analysis (PCA) of the task-specific LoRA updates. Blue points denote opportunistic labels and orange points denote conventional labels; each number is the index of a label.
  • Figure 5: Saliency maps generated using Grad-CAM for different tasks. The first row shows opportunistic screening tasks; the second row shows a subset of the conventional screening tasks.
  • ...and 6 more figures