Photon: Speedup Volume Understanding with Efficient Multimodal Large Language Models

Chengyu Fang, Heng Guo, Zheng Jiang, Chunming He, Xiu Li, Minfeng Xu

Abstract

Multimodal large language models are promising for clinical visual question answering tasks, but scaling to 3D imaging is hindered by high computational costs. Prior methods often rely on 2D slices or fixed-length token compression, disrupting volumetric continuity and obscuring subtle findings. We present Photon, a framework that represents 3D medical volumes with token sequences of variable length. Photon introduces instruction-conditioned token scheduling and surrogate gradient propagation to adaptively reduce tokens during both training and inference, which lowers computational cost while mitigating the attention dilution caused by redundant tokens. It incorporates a custom backpropagation rule with gradient restoration to enable differentiable optimization despite the discrete token-dropping operation. To stabilize token compression and ensure reliable use of visual evidence, Photon further applies regularization objectives that mitigate language-only bias and improve reliability. Experiments on diverse medical visual question answering tasks show that Photon achieves state-of-the-art accuracy while reducing resource usage and accelerating both training and inference.
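To make the core mechanism concrete, below is a minimal PyTorch sketch of one plausible reading of instruction-conditioned token scheduling with a surrogate (straight-through) gradient for the discrete keep/drop decision. The `TokenScheduler` module, its scoring head, the learned threshold, and the 0.5 cutoff are illustrative assumptions for exposition, not Photon's actual implementation or its gradient-restoration rule.

```python
# Hedged sketch of instruction-conditioned token dropping with a
# straight-through surrogate gradient. Names and design choices here
# are assumptions, not Photon's published code.
import torch
import torch.nn as nn


class TokenScheduler(nn.Module):
    """Scores each visual token against the instruction embedding and keeps
    only tokens whose score exceeds a learned threshold. The keep/drop
    decision is discrete, so the forward pass uses the hard mask while
    gradients flow through a soft sigmoid surrogate (straight-through)."""

    def __init__(self, dim: int):
        super().__init__()
        # Relevance of a visual token given the instruction context.
        self.score = nn.Linear(2 * dim, 1)
        # Learned threshold controlling how aggressively tokens are dropped.
        self.threshold = nn.Parameter(torch.zeros(1))

    def forward(self, vis_tokens: torch.Tensor, instr: torch.Tensor):
        # vis_tokens: (B, N, D) visual tokens from the 3D volume encoder.
        # instr:      (B, D) pooled instruction (question) embedding.
        B, N, D = vis_tokens.shape
        ctx = instr.unsqueeze(1).expand(B, N, D)
        logits = self.score(torch.cat([vis_tokens, ctx], dim=-1)).squeeze(-1)  # (B, N)

        soft = torch.sigmoid(logits - self.threshold)  # differentiable surrogate
        hard = (soft > 0.5).float()                    # discrete keep/drop mask
        # Straight-through estimator: `hard` in the forward pass,
        # gradient of `soft` in the backward pass.
        mask = hard + soft - soft.detach()

        # During training, dropped tokens are zeroed; at inference they can
        # be removed outright, yielding a shorter, variable-length sequence
        # and a corresponding cut in attention cost.
        return vis_tokens * mask.unsqueeze(-1), mask
```

Because the mask length varies with the instruction, the same volume can yield a short sequence for a coarse question and a longer one for a fine-grained finding; this per-query adaptivity, rather than a fixed compression ratio, is what the sketch is meant to illustrate.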

Paper Structure

This paper contains 35 sections, 33 equations, 7 figures, and 12 tables.

Figures (7)

  • Figure 1: Photon is a 3D-native framework that adaptively models medical volumes using variable-length tokens, accelerating both training and inference. It enables efficient clinical question answering by removing instruction-irrelevant tokens while achieving better quantitative performance.
  • Figure 2: Photon's pipeline: Phase 1 aligns the visual embedding layer, and Phase 2 finetunes all modules for task adaptation, learning the token-reduction threshold through our backpropagation strategy. Modules in the upper right with black contours are not updated during training.
  • Figure 3: Visualization of Photon's results. White regions indicate reduced tokens, while purple boxes highlight retained areas that carry clinically relevant information for answering the questions.
  • Figure 4: Visualization results of the base model and different visual alignment methods. Our method achieves alignment while avoiding mode collapse. Vis. Ful. Ft. = Visual Modules Fully Finetuned.
  • Figure 5: Visualization of retention-band triggers (Eq. \ref{eq:band}) and the number of kept tokens. Left: training without robust regularization; middle: training without flip regularization; right: training with both regularizations. A lower activation frequency and a smaller magnitude of the retention band indicate more stable pruning during training.
  • ...and 2 more figures