
Internalized Reasoning for Long-Context Visual Document Understanding

Austin Veselka

Abstract

Visual long-document understanding is critical for enterprise, legal, and scientific applications, yet the best-performing open recipes have not explored reasoning, a capability that has driven leaps in math and code performance. We introduce a synthetic data pipeline for reasoning in long-document understanding that generates thinking traces by scoring each page for question relevance, extracting textual evidence, and ordering it from most to least relevant. We apply SFT to the resulting traces within \texttt{<think>} tags, gated by a \texttt{<cot>} control token, and the resulting reasoning capability is internalized via low-strength model merging. We study Qwen3 VL 32B and Mistral Small 3.1 24B. With Qwen3 VL, we achieve 58.3 on MMLongBenchDoc, surpassing the 7$\times$ larger Qwen3 VL 235B A22B (57.0). With Mistral, we show that synthetic reasoning outperforms distillation from the Thinking version's traces by 3.8 points on MMLBD-C, and that internalized reasoning produces 12.4$\times$ fewer mean output tokens than explicit reasoning. We release our pipeline for reproducibility and further exploration.
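To make the pipeline concrete, here is a minimal sketch in Python. The helper callables `score_and_extract` (the per-page VLM that returns a relevance score and textual evidence) and `answer_fn` (the strong VLM or LLM that produces the final answer), along with `k` and the exact tag formatting, are hypothetical stand-ins, not the released implementation.

```python
# Minimal sketch of the synthetic reasoning pipeline (hypothetical helper
# names and formatting; not the released implementation).

def build_training_example(pages, question, score_and_extract, answer_fn,
                           k=5, use_cot=True):
    # Score every page for question relevance and extract textual evidence.
    # score_and_extract(page, question) -> (relevance: float, evidence: str)
    scored = [(*score_and_extract(page, question), page) for page in pages]

    # Keep the top-K pages/evidence sections, ordered most to least relevant.
    scored.sort(key=lambda item: item[0], reverse=True)
    top = scored[:k]

    # The thinking trace is the ordered evidence, later wrapped in <think> tags.
    trace = "\n".join(evidence for _, evidence, _ in top)

    # Final answer from a strong model over the selected pages (or evidence).
    answer = answer_fn([page for _, _, page in top], question)

    # Gate the presence of thinking with the <cot> control token.
    prompt = ("<cot>\n" if use_cot else "") + question
    target = (f"<think>\n{trace}\n</think>\n" if use_cot else "") + answer
    return prompt, target
```

Passing the model calls in as functions keeps the sketch independent of any particular VLM API; in practice each call would be a prompted inference against the scoring and answering models.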

Figures (4)

  • Figure 1: Our proposed synthetic reasoning pipeline. For a given document and question, we extract evidence and a relevance score from each page using a VLM, select the top-$K$ pages and evidence sections, sort them, and pass them to a strong VLM and LLM, respectively, to generate an example's final answer. Full examples are constructed from a token controlling the presence of thinking, the document, the question, the synthetic reasoning trace, and the final answer from either branch.
  • Figure 2: Output length distributions for Qwen and Mistral under different evaluation and training settings on MMLBD. CoT-on generally produces longer-tailed distributions, though only minimally compared to the $\alpha = 0.5$ model, which reasons explicitly (a merging sketch follows this list). Additionally, the control token determines the output length distribution more than think vs. no-think training does.
  • Figure 3: An example from the v1 dataset.
  • Figure 4: An example from the v2 dataset.
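The $\alpha$ in Figure 2 is the merge strength for the low-strength model merging named in the abstract. As a hedged illustration of the general technique, merging can be read as a linear interpolation of the base and reasoning-SFT weights; the PyTorch sketch below is an assumed form, not the authors' exact recipe.

```python
import torch

@torch.no_grad()
def merge_state_dicts(base_sd, tuned_sd, alpha=0.1):
    """Linear weight interpolation: merged = (1 - alpha) * base + alpha * tuned.

    Assumed reading of "low-strength model merging": a small alpha keeps the
    merged model close to the base while internalizing some of the tuned
    model's reasoning behavior. Illustrative sketch only.
    """
    merged = {}
    for name, base_param in base_sd.items():
        merged[name] = (1.0 - alpha) * base_param + alpha * tuned_sd[name]
    return merged
```

Under this reading, $\alpha = 0.5$ corresponds to the explicitly reasoning model in Figure 2, while lower merge strengths internalize the capability, consistent with the 12.4$\times$ reduction in mean output tokens reported in the abstract.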