MixTex: Unambiguous Recognition Should Not Rely Solely on Real Data

Renqing Luo, Yuhan Xu

TL;DR

This work addresses bias in LaTeX OCR with MixTex, an end-to-end model that pairs a Swin Transformer encoder with a RoBERTa decoder, together with a novel data augmentation strategy that injects pseudo-text and pseudo-formulas to form a mixed multilingual dataset of approximately $120\mathrm{M}$ tokens. The approach preserves image-grounded recognition while reducing reliance on contextual priors, and is evaluated in both typo-perturbed and typo-free scenarios, where it mitigates bias and improves accuracy relative to baselines. A key finding is that the mixed dataset strikes a better balance between encoder feature utilization and contextual decoding, enabling robust recognition of both clear and ambiguous content. The method also suggests broader applicability to other disambiguation tasks, such as handwriting, music notation, and educational settings, where unambiguous recognition is crucial.

Abstract

This paper introduces MixTex, an end-to-end LaTeX OCR model designed for low-bias multilingual recognition, along with its novel data collection method. In applying Transformer architectures to LaTeX text recognition, we identified specific bias issues, such as the frequent misinterpretation of $e-t$ as $e^{-t}$. We attribute this bias to the characteristics of the arXiv dataset commonly used for training. To mitigate this bias, we propose an innovative data augmentation method. This approach introduces controlled noise into the recognition targets by blending genuine text with pseudo-text and incorporating a small proportion of disruptive characters. We further suggest that this method has broader applicability to various disambiguation recognition tasks, including the accurate identification of erroneous notes in musical performances. MixTex's architecture leverages the Swin Transformer as its encoder and RoBERTa as its decoder. Our experimental results demonstrate that this approach significantly reduces bias in recognition tasks. Notably, when processing clear and unambiguous images, the model adheres strictly to the image rather than over-relying on contextual cues for token prediction.
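The abstract does not include reference code, but the augmentation it describes, blending genuine text with pseudo-text, scrambled words, and pseudo inline formulas (cf. Figure 1 below), is straightforward to sketch. The following Python snippet is a minimal illustration; the word and formula pools and the injection rates are assumptions for demonstration, not the authors' released pipeline or values.

```python
import random

# Illustrative pools; the paper's actual pseudo-text and pseudo-formula
# sources are not specified here, so these are placeholders.
PSEUDO_WORDS = ["lorem", "ipsum", "dolor", "sit", "amet"]
PSEUDO_FORMULAS = [r"$e^{-t}$", r"$\alpha_i$", r"$x_{n+1}$"]

def scramble(word: str) -> str:
    """Create a misspelled variant by shuffling a word's letters."""
    letters = list(word)
    random.shuffle(letters)
    return "".join(letters)

def mix_sample(words, p_insert=0.05, p_scramble=0.05, p_formula=0.03):
    """Inject controlled noise into a genuine token sequence
    (rates are assumptions, not the paper's)."""
    out = []
    for w in words:
        if random.random() < p_insert:      # random pseudo-word insertion
            out.append(random.choice(PSEUDO_WORDS))
        if random.random() < p_scramble:    # letter-scrambled misspelling
            w = scramble(w)
        out.append(w)
        if random.random() < p_formula:     # pseudo inline formula
            out.append(random.choice(PSEUDO_FORMULAS))
    return " ".join(out)

print(mix_sample("the solution decays exponentially with time".split()))
```

The intent of such noise is that the decoder cannot trust contextual priors alone: a scrambled or inserted token is only recoverable from the image, which pushes the model toward image-grounded prediction.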

Paper Structure

This paper contains 7 sections, 3 figures, and 3 tables.

Figures (3)

  • Figure 1: The Training Data Sample. Non-highlighted portions are an excerpt from authentic text. Randomly inserted words are highlighted in red; misspelled words, produced by scrambling the letters of original words, are highlighted in pink; randomly inserted inline mathematical formulas appear in light blue. Red boxes contain pseudo-formulas, and blue boxes enclose genuine mathematical expressions.
  • Figure 2: The MixTex Pipeline. An input LaTeX document image is fed into a Swin Transformer encoder, which maps it into embeddings that capture both local and global features. A RoBERTa decoder then translates these embeddings into LaTeX-formatted text (a minimal encoder-decoder sketch follows this list).
  • Figure 3: Examples of data augmentation applied to original Mixed training samples.
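As a concrete illustration of the Figure 2 pipeline, a Swin-encoder/RoBERTa-decoder pairing can be assembled with Hugging Face's VisionEncoderDecoderModel. This is a sketch under assumed checkpoints: "microsoft/swin-base-patch4-window7-224" and "roberta-base" are generic stand-ins, not the MixTex weights.

```python
from transformers import AutoImageProcessor, AutoTokenizer, VisionEncoderDecoderModel

# Pair a Swin Transformer image encoder with a RoBERTa text decoder;
# cross-attention layers are added to the decoder automatically.
# Checkpoint names below are stand-ins, not the MixTex weights.
model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained(
    "microsoft/swin-base-patch4-window7-224",
    "roberta-base",
)
processor = AutoImageProcessor.from_pretrained("microsoft/swin-base-patch4-window7-224")
tokenizer = AutoTokenizer.from_pretrained("roberta-base")

# Tell the decoder how to start, pad, and stop generation.
model.config.decoder_start_token_id = tokenizer.bos_token_id
model.config.pad_token_id = tokenizer.pad_token_id
model.config.eos_token_id = tokenizer.eos_token_id

# Illustrative inference: image in, LaTeX-formatted tokens out.
# pixel_values = processor(images=page_image, return_tensors="pt").pixel_values
# output_ids = model.generate(pixel_values, max_new_tokens=256)
# print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

In practice the tokenizer and vocabulary would need LaTeX coverage; roberta-base is used here only to keep the sketch self-contained.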