AnyDoc: Enhancing Document Generation via Large-Scale HTML/CSS Data Synthesis and Height-Aware Reinforcement Optimization

Jiawei Lin, Wanrong Zhu, Vlad I Morariu, Christopher Tensmeyer

Abstract

Document generation has gained growing attention in the field of AI-driven content creation. In this work, we push its boundaries by introducing AnyDoc, a framework capable of handling multiple generation tasks across a wide spectrum of document categories, all represented in a unified HTML/CSS format. To overcome the limited coverage and scale of existing human-crafted document datasets, AnyDoc first establishes a scalable data synthesis pipeline that automatically generates documents in HTML/CSS form. This pipeline yields DocHTML, a large-scale dataset of 265,206 document samples spanning 111 categories and 32 distinct styles. All documents are equipped with comprehensive metadata, including design intentions, HTML/CSS source code, visual assets, and rendered screenshots. Building on the curated dataset, AnyDoc fine-tunes multi-modal large language models (MLLMs) for three practical document generation tasks: intention-to-document, document derendering, and element-to-document. To address the content overflow issue observed during fine-tuning, AnyDoc further incorporates a height-aware reinforcement learning (HARL) post-training procedure. By defining a reward function based on the difference between predicted and target document heights, overflow is penalized and gradually mitigated during HARL, thereby enhancing overall performance. Qualitative and quantitative experiments demonstrate that AnyDoc outperforms both general-purpose MLLMs and task-specific baselines across all three tasks.
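The abstract does not give the exact form of the HARL reward, but the idea of penalizing the gap between predicted and target document heights can be sketched as follows. This is an illustrative assumption, not the paper's actual formulation; the function name, the linear decay, and the tolerance parameter are all hypothetical:

```python
def height_reward(pred_height: float, target_height: float,
                  tolerance: float = 0.05) -> float:
    """Hypothetical height-aware reward in [0, 1].

    Returns 1.0 when the rendered height of the predicted document matches
    the target within a relative tolerance, and decays linearly as the
    relative height difference grows (penalizing overflow and underflow).
    """
    if target_height <= 0:
        raise ValueError("target_height must be positive")
    rel_diff = abs(pred_height - target_height) / target_height
    # Excess beyond the tolerance band reduces the reward toward 0.
    return max(0.0, 1.0 - max(0.0, rel_diff - tolerance))
```

In an RL post-training loop, such a term would typically be combined with other rewards (e.g., code validity or visual fidelity) rather than used alone.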

Paper Structure

This paper contains 21 sections, 1 equation, 12 figures, 4 tables.

Figures (12)

  • Figure 1: (a) AnyDoc excels at generating layered documents from diverse input modalities, including natural language design intentions, screenshots to be derendered, or collections of text and image elements. (b) Our curated DocHTML covers a broad range of document categories and styles.
  • Figure 2: Existing approaches generate documents as either (a) raster images or (b) flat element coordinate sequences. (c) In AnyDoc, we generate documents with a hierarchical and multi-layered HTML/CSS representation.
  • Figure 3: Overview of the HTML/CSS document synthesis pipeline. Starting from a human-crafted document and its accompanying metadata, we employ MLLMs to generate semantic annotations, i.e., the document’s intention and description. Based on the conditions, a code generation model and an image generation model are sequentially used to synthesize corresponding HTML/CSS code and image assets, which are finally rendered into the complete document screenshot.
  • Figure 4: The general multi-modal large language models (MLLMs) produce low-quality documents. Through supervised fine-tuning (SFT) on DocHTML, the model shows powerful document generation capabilities but still suffers from content overflow. Height-Aware Reinforcement Learning (HARL) is then adopted to overcome this issue while maintaining high visual quality.
  • Figure 5: Qualitative results on the intention-to-document task. The corresponding input conditions (i.e., intention, category, and style) are displayed below each example. For reference, the ground truth (GT) documents are also visualized.
  • ...and 7 more figures