
Q-Mask: Query-driven Causal Masks for Text Anchoring in OCR-Oriented Vision-Language Models

Longwei Xu, Feng Feng, Shaojie Zhang, Xin Chen, Hang Li, Anan Du, Hailong Yu, Pei Fu, Zhenbo Luo, Jian Luan

Abstract

Optical Character Recognition (OCR) is increasingly regarded as a foundational capability for modern vision-language models (VLMs), enabling them not only to read text in images but also to support downstream reasoning in real-world visual question answering (VQA). However, practical applications further require reliable text anchors, i.e., accurately grounding queried text to its corresponding spatial region. To systematically evaluate this capability, we introduce TextAnchor-Bench (TABench), a benchmark for fine-grained text-region grounding, which reveals that both general-purpose and OCR-specific VLMs still struggle to establish accurate and stable text anchors. To address this limitation, we propose Q-Mask, a precise OCR framework built upon a causal query-driven mask decoder (CQMD). Inspired by chain-of-thought reasoning, Q-Mask performs causal visual decoding that sequentially generates query-conditioned visual masks before producing the final OCR output. This visual CoT paradigm disentangles "where the text is" from "what the text is", enforcing grounded evidence acquisition prior to recognition and enabling explicit text anchor construction during inference. To train CQMD, we construct TextAnchor-26M, a large-scale dataset of image-text pairs annotated with fine-grained masks corresponding to specific textual elements, encouraging stable text-region correspondences and injecting strong spatial priors into VLM training. Extensive experiments demonstrate that Q-Mask substantially improves text anchoring and understanding across diverse visual scenes.
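To make the decoding order concrete, the following is a minimal sketch of the inference flow described above, assuming a PyTorch-style interface; `llm`, `cqmd`, and the `visual_prior` argument are hypothetical names for illustration, not the released implementation.

```python
import torch

def qmask_infer(llm, cqmd, image_feats: torch.Tensor, query_ids: torch.Tensor):
    # 1) The LLM processes the concatenated visual and textual embeddings.
    hidden = llm(image_feats, query_ids)    # (B, L_img + L_q, d)
    n_img = image_feats.shape[1]
    h_img = hidden[:, :n_img]               # hidden states of visual tokens
    h_q = hidden[:, n_img:]                 # hidden states of query tokens

    # 2) "Where": CQMD grounds the query in a spatial mask before recognition.
    mask = cqmd(h_q, h_img)                 # query-conditioned visual mask

    # 3) "What": the OCR answer is decoded only after the grounded evidence exists.
    answer_ids = llm.generate(image_feats, query_ids, visual_prior=mask)
    return mask, answer_ids
```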



Figures (15)

  • Figure 1: Performance comparison of mainstream general-purpose VLMs [bai2025qwen3vl, team2026kimi, gemini3pro2025, openai2025gpt5.2] and OCR-specific VLMs [wei2026deepseek] on the proposed TABench.
  • Figure 2: Comparison of the training paradigm of OCR-specific VLMs. (a) Standard VLMs [achiam2023gpt, Monkey, bai2023qwenvl, Instructdoc, wang2024qwen2vl, bai2025qwen3vl] can recognize text but lack explicit mechanisms to establish reliable text anchors. (b) Existing mask-based methods [Martern, TokenFD] enhance spatial perception but fail to explicitly model text–region correspondence during inference. (c) Q-Mask introduces a causal query-driven mask decoder (CQMD) that explicitly grounds the queried text token within its spatial region prior to recognition.
  • Figure 3: Overview of the proposed architecture. After an LLM processes the concatenated visual and textual embeddings, the CQMD module extracts only the hidden states of visual tokens ($\mathbf{H}_{img}$) and query tokens ($\mathbf{H}_{q}$). Using cross-attention, Q-Mask predicts spatial masks prior to autoregressive answer generation (a minimal sketch of this cross-attention mask head follows the figure list). The model is trained with the next-token prediction (NTP) loss and a segmentation loss.
  • Figure 4: Overview of the TextAnchor-26M construction pipeline. We aggregate four data sources: (1) unconstrained scene text mined from large-scale web corpora; (2) academic documents (e.g., arXiv pages); (3) synthetic text rendered by SynthDog with multilingual fonts; and (4) VQA-with-causal-mask samples generated by prompting a VLM on precisely annotated regions. We first obtain transcripts and bounding boxes using expert models or rendering annotations, and then apply stochastic prior injection (SPI) and de-stylized mask rendering to produce unified supervision for Q-Mask training.
  • Figure 5: Empirical error profile of PPOCR-V5 on scene text. We decouple failures into box-level localization errors and character-level recognition errors. The character error rate is further decomposed into normalized proportions of insertion (Ins), deletion (Del), and substitution (Sub); an illustrative breakdown appears in the second code sketch after this list.
  • ...and 10 more figures
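The CQMD design summarized in the Figure 3 caption (query hidden states $\mathbf{H}_{q}$ cross-attending to visual hidden states $\mathbf{H}_{img}$ to predict a spatial mask) can be read as the following minimal PyTorch sketch. The layer names, dimensions, and square-grid assumption are ours for illustration; this is not the authors' implementation.

```python
import torch
import torch.nn as nn

class CausalQueryMaskDecoder(nn.Module):
    """Minimal sketch of a query-driven mask decoder: query-token hidden
    states attend over visual-token hidden states, and the attended query
    is scored against each visual token to yield a spatial logit map."""

    def __init__(self, d_model: int = 1024, n_heads: int = 8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.q_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)

    def forward(self, h_q: torch.Tensor, h_img: torch.Tensor) -> torch.Tensor:
        # h_q:   (B, Lq, d) query-token hidden states from the LLM
        # h_img: (B, N, d)  visual-token hidden states, N = grid_h * grid_w
        q, _ = self.cross_attn(query=h_q, key=h_img, value=h_img)  # (B, Lq, d)
        q = self.q_proj(q).mean(dim=1, keepdim=True)               # pooled query (B, 1, d)
        v = self.v_proj(h_img)                                     # (B, N, d)
        logits = torch.einsum("bqd,bnd->bqn", q, v).squeeze(1)     # (B, N) mask logits
        side = int(logits.shape[-1] ** 0.5)                        # assume a square token grid
        return logits.view(-1, side, side)   # low-res mask; upsample before the loss
```

The segmentation loss in Figure 3 would then be computed between this (upsampled) logit map and the ground-truth text mask, e.g., with binary cross-entropy plus Dice, alongside the NTP loss.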
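For the character-level decomposition in Figure 5, a common recipe is to align reference and hypothesis strings with Levenshtein dynamic programming and count each edit type; the snippet below illustrates that bookkeeping and is generic, not the paper's evaluation code.

```python
def cer_breakdown(ref: str, hyp: str):
    """Levenshtein alignment counting insertions, deletions, substitutions;
    returns CER and the normalized share of each error type."""
    m, n = len(ref), len(hyp)
    # dp[i][j] = (edits, ins, dele, sub) to turn ref[:i] into hyp[:j]
    dp = [[(0, 0, 0, 0)] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        dp[i][0] = (i, 0, i, 0)   # delete all of ref[:i]
    for j in range(1, n + 1):
        dp[0][j] = (j, j, 0, 0)   # insert all of hyp[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if ref[i - 1] == hyp[j - 1]:
                dp[i][j] = dp[i - 1][j - 1]
                continue
            e_del, e_ins, e_sub = dp[i - 1][j], dp[i][j - 1], dp[i - 1][j - 1]
            best = min((e_del[0], 2), (e_ins[0], 1), (e_sub[0], 3))  # count, then tie-break tag
            if best[1] == 2:
                dp[i][j] = (e_del[0] + 1, e_del[1], e_del[2] + 1, e_del[3])
            elif best[1] == 1:
                dp[i][j] = (e_ins[0] + 1, e_ins[1] + 1, e_ins[2], e_ins[3])
            else:
                dp[i][j] = (e_sub[0] + 1, e_sub[1], e_sub[2], e_sub[3] + 1)
    edits, ins, dele, sub = dp[m][n]
    cer = edits / max(m, 1)
    total = max(edits, 1)
    return cer, ins / total, dele / total, sub / total
```

For example, `cer_breakdown("STOP", "ST0P")` returns a CER of 0.25 with all error mass on substitution.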