MarkushGrapher-2: End-to-end Multimodal Recognition of Chemical Structures

Tim Strohmeyer, Lucas Morin, Gerhard Ingmar Meijer, Valéry Weber, Ahmed Nassar, Peter Staar

Abstract

Automatically extracting chemical structures from documents is essential for the large-scale analysis of the literature in chemistry. Automatic pipelines have been developed to recognize molecules represented either in figures or in text independently. However, methods for recognizing chemical structures from multimodal descriptions (Markush structures) lag behind in precision and cannot be used for automatic large-scale processing. In this work, we present MarkushGrapher-2, an end-to-end approach for the multimodal recognition of chemical structures in documents. First, our method employs a dedicated OCR model to extract text from chemical images. Second, the text, image, and layout information are jointly encoded through a Vision-Text-Layout encoder and an Optical Chemical Structure Recognition vision encoder. Finally, the resulting encodings are effectively fused through a two-stage training strategy and used to auto-regressively generate a representation of the Markush structure. To address the lack of training data, we introduce an automatic pipeline for constructing a large-scale dataset of real-world Markush structures. In addition, we present IP5-M, a large manually-annotated benchmark of real-world Markush structures, designed to advance research on this challenging task. Extensive experiments show that our approach substantially outperforms state-of-the-art models in multimodal Markush structure recognition, while maintaining strong performance in molecule structure recognition. Code, models, and datasets are released publicly.


Paper Structure

This paper contains 23 sections, 12 figures, and 12 tables.

Figures (12)

  • Figure 1: Model Use Case: MarkushGrapher-2 parses Markush backbones and variable regions from document image crops via joint multimodal encoding of vision, text, and layout.
  • Figure 2: Model Architecture: MarkushGrapher-2 employs two complementary encoding pipelines. In the first pipeline, the input image is processed by a vision encoder (blue) followed by an MLP projector (yellow). In the second pipeline, the image is passed through an OCR model to extract textual content and bounding boxes, which are then fed into a Vision–Text–Layout (VTL) encoder together with the original image. The output of the MLP projector (e1) is concatenated with the resulting VTL embedding (e2). The combined representation is passed to a text decoder to generate a sequential description of the Markush structure and its substituents in tabular form.
  • Figure 3: Two-Phase Training: In Phase 1 (Adaptation), the OCSR encoder is frozen while the MLP projector and text decoder are trained for SMILES prediction to align with pretrained OCSR features. In Phase 2 (Fusion), the adapted modules are initialized, the VTL encoder is introduced, and the full model is trained end-to-end for CXSMILES prediction.
  • Figure 4: OCR - Qualitative Comparison: Comparison of OCR predictions from three models, PaddleOCR v5, EasyOCR, and ChemicalOCR (Ours), on exemplary chemical structures from the benchmarks M2S, USPTO-M, and IP5-M. Red labels indicate incorrect OCR, green labels indicate correct OCR, and blue indicates predicted bounding boxes.
  • Figure 5: Markush Structure Recognition - Qualitative Comparison: Comparison of Markush structure predictions from five models, MarkushGrapher-2 (Ours), MarkushGrapher-1, MolParser, DeepSeek OCR, and GPT-5, on exemplary Markush structures from the benchmarks M2S, USPTO-M, WildMol-M, and IP5-M. Red labels indicate incorrect predictions; green labels indicate correct predictions.
  • ...and 7 more figures
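The fusion step described in the Figure 2 caption (projected OCSR vision features e1 concatenated with the VTL embedding e2, then passed to the text decoder) can be sketched as follows. This is a minimal illustrative sketch: the dimensions, the single linear layer standing in for the MLP projector, and the random stand-in features are assumptions, not values or code from the paper.

```python
import numpy as np

# Hypothetical dimensions -- illustrative assumptions, not from the paper.
D = 768          # shared embedding width expected by the text decoder
N_PATCH = 196    # token count from the OCSR vision encoder
N_VTL = 256      # token count from the Vision-Text-Layout (VTL) encoder

rng = np.random.default_rng(0)

# Pipeline 1: OCSR vision encoder output, mapped into the decoder's
# embedding space; a single linear map stands in for the MLP projector (e1).
vision_feats = rng.standard_normal((N_PATCH, 1024))
w_proj = rng.standard_normal((1024, D)) * 0.02
e1 = vision_feats @ w_proj

# Pipeline 2: VTL encoder output over the image plus OCR text and boxes (e2).
e2 = rng.standard_normal((N_VTL, D))

# Fusion: concatenate along the sequence axis; the text decoder then attends
# over this combined representation to generate the CXSMILES description.
fused = np.concatenate([e1, e2], axis=0)
print(fused.shape)  # (452, 768)
```

The key design point the caption implies is that both pipelines must project into a common embedding width before concatenation, so the decoder sees one unified token sequence rather than two separate modalities.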