De Jure: Iterative LLM Self-Refinement for Structured Extraction of Regulatory Rules

Keerat Guliani, Deepkamal Gill, David Landsman, Nima Eshraghi, Krishna Kumar, Lovedeep Gondara

Abstract

Regulatory documents encode legally binding obligations that LLM-based systems must respect. Yet converting dense, hierarchically structured legal text into machine-readable rules remains a costly, expert-intensive process. We present De Jure, a fully automated, domain-agnostic pipeline for extracting structured regulatory rules from raw documents, requiring no human annotation, domain-specific prompting, or gold-standard data. De Jure operates through four sequential stages: normalization of source documents into structured Markdown; LLM-driven semantic decomposition into structured rule units; multi-criteria LLM-as-a-judge evaluation across 19 dimensions spanning metadata, definitions, and rule semantics; and iterative repair of low-scoring extractions within a bounded regeneration budget, where upstream components are repaired before rule units are evaluated. We evaluate De Jure with four models on three regulatory corpora spanning finance, healthcare, and AI governance. On the finance corpus, De Jure yields consistent, monotonic improvement in extraction quality, reaching peak performance within three judge-guided iterations. De Jure generalizes effectively to healthcare and AI governance, maintaining high performance across both open- and closed-source models. In a downstream compliance question-answering evaluation via retrieval-augmented generation (RAG), responses grounded in De Jure-extracted rules are preferred over those of prior work in 73.8% of cases at single-rule retrieval depth, rising to 84.0% under broader retrieval, confirming that extraction fidelity translates directly into downstream utility. These results demonstrate that explicit, interpretable evaluation criteria can substitute for human annotation in complex regulatory domains, offering a scalable and auditable path toward regulation-grounded LLM alignment.
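
To make the evaluate-and-repair control flow concrete, here is a minimal Python sketch of Stages 3–4. The `judge` and `repair` helpers, the criterion grouping, and the data shapes are hypothetical stand-ins for the paper's actual LLM prompts; the 90% pass threshold and three-retry budget are taken from the Figure 1 caption, and the upstream-first ordering from the abstract.

```python
# Sketch of De Jure's judge-guided repair loop (Stages 3-4).
# `judge` and `repair` are hypothetical stubs for the paper's
# LLM-as-a-judge and regeneration prompts, not its actual API.

PASS_THRESHOLD = 0.90  # per-stage average score required to pass (Fig. 1)
MAX_RETRIES = 3        # bounded regeneration budget (Fig. 1)

# The 19 criteria are grouped into three judge stages; upstream
# components are repaired before rule units are evaluated.
STAGES = ["metadata", "definitions", "rule_semantics"]

def judge(extraction: dict, stage: str) -> dict[str, float]:
    """Score one stage's criteria in [0, 1] with an LLM judge (stub)."""
    raise NotImplementedError  # LLM call with per-criterion rubric

def repair(extraction: dict, stage: str, scores: dict[str, float]) -> dict:
    """Regenerate only the deficient fields, preserving the rest (stub)."""
    raise NotImplementedError  # LLM call conditioned on judge critiques

def refine(extraction: dict) -> dict:
    """Iteratively repair an extraction until each stage passes or
    the retry budget is exhausted, upstream stages first."""
    for stage in STAGES:
        for attempt in range(MAX_RETRIES + 1):
            scores = judge(extraction, stage)
            if sum(scores.values()) / len(scores) >= PASS_THRESHOLD:
                break  # stage passes; move to the next (downstream) stage
            if attempt < MAX_RETRIES:
                extraction = repair(extraction, stage, scores)
    return extraction
```

Under these assumptions, each stage is judged at most once per attempt and repaired at most three times, matching the bounded budget and the field-preserving repair behavior illustrated in Figure 3.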

Paper Structure

This paper contains 50 sections, 3 figures, 15 tables, and 1 algorithm.

Figures (3)

  • Figure 1: De Jure pipeline overview. Input documents are pre-processed into structured Markdown (Stage 1), parsed into JSON rule units via a domain-agnostic LLM prompt (Stage 2), scored by an LLM judge across 19 criteria in three stages (Stage 3), and iteratively repaired until the per-stage average score reaches 90% or the retry budget (max 3) is exhausted (Stage 4).
  • Figure 2: Average quality score per pipeline step as a function of retry budget $r$ (scale 1--5; higher is better). Steps 1 and 3 remain largely flat throughout, while Step 2 exhibits a sharp threshold effect: negligible gain from $r{=}0$ to $r{=}1$, followed by a 1.25-point (25%) recovery at $r{=}2$. The shaded region marks the negligible-gain zone. All steps saturate beyond $r{=}2$.
  • Figure 3: De Jure applied to HIPAA § 164.306. Panels (c)→(d)→(e) read left-to-right across the bottom row. Panels (a)--(b) show the raw PDF source and its pre-processed Markdown. Panel (c) shows the initial extraction with two field-level defects: an incomplete label and a misclassified rule type. Panel (d) shows the judge evaluation (avg. 0.55, fail) with per-criterion scores and targeted critiques. Panel (e) shows the corrected extraction after a single repair iteration (avg. 0.90, pass), with only deficient fields revised and all others preserved.