
FVRuleLearner: Operator-Level Reasoning Tree (OP-Tree)-Based Rules Learning for Formal Verification

Lily Jiaxin Wan, Chia-Tung Ho, Yunsheng Bai, Cunxi Yu, Deming Chen, Haoxing Ren

Abstract

The remarkable reasoning and code generation capabilities of large language models (LLMs) have recently motivated increasing interest in automating formal verification (FV), a process that ensures hardware correctness through mathematically precise assertions but remains highly labor-intensive, particularly through the translation of natural language into SystemVerilog Assertions (NL-to-SVA). However, LLMs still struggle with SVA generation due to limited training data and the intrinsic complexity of FV operators. Consequently, a more efficient and robust methodology for ensuring correct SVA operator selection is essential for producing functionally correct assertions. To address these challenges, we introduce FVRuleLearner, an Operator-Level Rule (Op-Rule) learning framework built on a novel Operator Reasoning Tree (OP-Tree), which models SVA generation as structured, interpretable reasoning. FVRuleLearner operates in two complementary phases: (1) Training: it constructs an OP-Tree that decomposes NL-to-SVA alignment into fine-grained, operator-aware questions, consolidating reasoning paths that lead to correct assertions; and (2) Testing: it performs operator-aligned retrieval to fetch relevant reasoning traces from the learned OP-Tree and generate new rules for unseen specifications. In comprehensive studies, the proposed FVRuleLearner outperforms the state-of-the-art baseline by 3.95% in syntax correctness and by 31.17% in functional correctness on average. Moreover, a functional taxonomy analysis shows that FVRuleLearner reduces SVA functional failures by an average of 70.33% across diverse operator categories, demonstrating the effectiveness of applying the learned OP-Tree to Op-Rule generation for unseen NL-to-SVA tasks. These results establish FVRuleLearner as a new paradigm for domain-specific reasoning and rule learning in formal verification.

Paper Structure

This paper contains 19 sections, 4 equations, 6 figures, and 4 tables.

Figures (6)

  • Figure 1: Given an NL-to-SVA task, LLM inference often produces functionally incorrect SVAs, whereas our FVRuleLearner framework leverages the learned Op-Tree to generate functionally correct SVAs.
  • Figure 2: Operator-level functional mismatch analysis between generated and golden SVAs across three datasets. Temporal operators dominate the errors, highlighting the need for operator-aware reasoning.
  • Figure 3: Comparison between General Rules and Op-Rules.
  • Figure 4: The flow overview of FVRuleLearner and an illustration of the Op-Tree in training. (a) Training phase: The framework leverages the Op-Tree to consolidate operator-level reasoning traces, which are stored in $\mathcal{T}^*$. (b) Testing phase: FVRuleLearner first generates an initial SVA ($\hat{y}_{init}$) and retrieves the relevant Op-Tree from $\mathcal{T}^*$ according to the similarity score between the natural language specification of the testing data and the background nodes' natural language content. Then, candidate reasoning traces are selected based on the proposed operator alignment and semantic applicability scores. Lastly, the rule adaptation step generates Op-Rules for the final SVA ($\hat{y}$). (c) An illustrative example of the Op-Tree reasoning process, showing how hierarchical decomposition links design context to theoretical definitions to produce a verified correction rule.
  • Figure 5: Comparison of fixing ratio trends over training iterations for FVRuleLearner (circle) versus General Rule Learning (square) methods. Across (a) NL2SVA-Human, (b) NL2SVA-Machine, and (c) NL2SVA-OpenCore, the proposed Op-Tree rules learning methodology consistently accelerates convergence and achieves higher final accuracy, improving the fixing rate by an average of 23.33%.
  • ...and 1 more figure
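The operator-aligned retrieval described in the Figure 4 caption can be illustrated with a toy sketch. The operator table below is a small, illustrative subset of SVA operators, and the Jaccard-style alignment score is our assumption for exposition; the paper's actual operator-alignment and semantic-applicability scores are not specified on this page.

```python
# Illustrative sketch of operator-aware comparison between a generated SVA
# and a golden SVA. The operator subset and the Jaccard scoring function are
# assumptions for exposition, not the paper's actual definitions.

# Hypothetical subset of SVA operators, grouped by category.
SVA_OPERATORS = {
    "implication": ["|->", "|=>"],
    "delay": ["##"],
    "sampled_value": ["$rose", "$fell", "$stable", "$past"],
    "repetition": ["[*", "[=", "[->"],
}

def extract_operators(sva: str) -> set:
    """Return the set of known SVA operators appearing in an assertion string."""
    found = set()
    for ops in SVA_OPERATORS.values():
        for op in ops:
            if op in sva:
                found.add(op)
    return found

def operator_alignment(candidate_ops: set, target_ops: set) -> float:
    """Jaccard similarity between two operator sets, used here as a stand-in
    for the paper's operator-alignment score."""
    if not candidate_ops and not target_ops:
        return 1.0
    return len(candidate_ops & target_ops) / len(candidate_ops | target_ops)

golden = "assert property (@(posedge clk) req |-> ##[1:3] $rose(ack));"
generated = "assert property (@(posedge clk) req |=> ack);"

print(sorted(extract_operators(golden)))    # ['##', '$rose', '|->']
print(operator_alignment(extract_operators(generated),
                         extract_operators(golden)))  # 0.0 (no shared operators)
```

Under this toy scoring, the generated assertion's non-overlapping implication operator (`|=>` instead of `|->`) and missing delay window yield an alignment of zero, which is the kind of operator-level mismatch (Figure 2) that motivates retrieving reasoning traces whose operators match the target specification.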