
LiquiLM: Bridging the Semantic Gap in Liquidity Flaw Audit via DCN and LLMs

Zekai Liu, Xiaoqi Li, Wenkai Li, Zongwei Li

Abstract

Traditional consensus mechanisms, such as Proof of Stake (PoS), increasingly reveal an excessive dependency on large liquidity providers. Although the Proof of Liquidity (PoL) mechanism serves as a critical paradigm for incentivizing sustained liquidity provision and ensuring market stability, its transition from asset staking to active liquidity management significantly increases the complexity of underlying smart contract economic models and interaction logic. This renders hidden liquidity logic flaws difficult to detect via traditional methods, seriously threatening the system stability and user asset security of mainstream DeFi and emerging PoL ecosystems. To address this, we propose the LiquiLM framework, which integrates Large Language Models (LLMs) with a Dynamic Co-Attention Network (DCN). By establishing a dynamic interaction between liquidity-critical contracts and flaw descriptions, the framework effectively bridges the semantic gap between underlying code implementations and high-level liquidity intents. We evaluate LiquiLM on 1,490 validation contracts, measuring precision, recall, specificity, and F1-score. The results show that it is highly effective at auditing and explaining liquidity flaws: with either Gemini 3 Pro or GPT-4o as the backbone model, the F1-score exceeds 90%. Furthermore, through an in-depth audit of 1,380 real-world PoL and Ethereum economic contracts, LiquiLM identifies 238 high-risk contracts and assists in discovering 10 vulnerabilities that have since been assigned CVE identifiers.

Paper Structure

This paper contains 13 sections, 7 equations, 5 figures, 6 tables, and 2 algorithms.

Figures (5)

  • Figure 1: The Overall Architecture of LiquiLM. Note: The Semantic Feature Representation module slices and normalizes the target liquidity-critical contract source code to generate embedding vectors, while simultaneously constructing a liquidity defect semantic corpus. The Bidirectional Semantic Alignment module employs a DCN model to align contract slice vectors with corpus entries; following max pooling and average pooling, it generates the Audit-Informed Manifest ($\mathbb{AIM}$). Finally, the $\mathbb{AIM}$-Guided Heuristic Audit module adopts a Four-Phase Collaborative Prompt System to guide LLMs in performing an in-depth analysis of critical slices within the $\mathbb{AIM}$ and generating the final audit report.
  • Figure 2: Four-Phase Collaborative Prompt System of LiquiLM.
  • Figure 3: Performance dynamics of the DCN model during the $\mathbb{AIM}$ generation phase. Note: Shaded regions indicate the standard deviation across 5-fold cross-validation. In (a), the gradient spike near epoch 95 is a cross-validation artifact caused by a delayed fold triggering learning rate decay just prior to early stopping. In (b), the non-zero initial recall ($\approx 0.1$) stems from the positive sample weighting (pos_weight=6) strategy, which forces the model to prioritize minority flaws from the onset.
  • Figure 4: Fine-grained reliability evaluation across five liquidity flaw types. Note: Subfigures (a) and (e) display the distribution box plots of the three metrics; Subfigures (b)-(d) and (f)-(h) detail the specific performance of Precision, Recall, and F1 Score across the flaw categories, respectively.
  • Figure 5: Example of LiquiLM Audit Results. Note: For clarity, we condense the audit report content, retaining only the "reason" and "suggestion" fields from the original report.
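The max pooling and average pooling step described in the Figure 1 caption, which turns DCN alignment scores into the ranked $\mathbb{AIM}$, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the score matrix, the equal-weight combination of the two pooled values, and the `top_k` cutoff are all assumptions, and in the real pipeline the scores would come from the trained co-attention model.

```python
def build_aim(align_scores, top_k=3):
    """Rank contract slices into a hypothetical Audit-Informed Manifest.

    align_scores[i][j] is an assumed co-attention score between contract
    slice i and liquidity-flaw corpus entry j. Each slice is scored by
    combining max pooling (strongest single match) with average pooling
    (overall alignment), then the top_k slice indices are returned.
    """
    scored = []
    for i, row in enumerate(align_scores):
        pooled = 0.5 * max(row) + 0.5 * (sum(row) / len(row))
        scored.append((pooled, i))
    scored.sort(reverse=True)           # highest combined score first
    return [i for _, i in scored[:top_k]]

# Slice 2 aligns consistently, slice 0 has one strong match, slice 1 neither.
aim = build_aim([[0.9, 0.1], [0.2, 0.2], [0.8, 0.8]], top_k=2)
```

Combining both pooling operators, as the caption indicates, lets a slice surface either through one sharply matching flaw description or through broad alignment with several of them.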
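The positive sample weighting (pos_weight=6) noted in the Figure 3 caption corresponds to a weighted binary cross-entropy loss. The sketch below shows the effect in plain Python under assumed inputs; frameworks such as PyTorch expose the same idea via the `pos_weight` argument of `BCEWithLogitsLoss`, which the paper's training setup may or may not use.

```python
import math

def weighted_bce(probs, labels, pos_weight=6.0):
    """Binary cross-entropy with up-weighted positives (illustrative).

    pos_weight scales the loss contribution of positive (flawed) samples,
    so missing a minority flaw case costs more than a false alarm on a
    clean contract. Inputs are predicted probabilities and 0/1 labels.
    """
    total = 0.0
    for p, y in zip(probs, labels):
        p = min(max(p, 1e-7), 1.0 - 1e-7)  # clamp for numerical stability
        total += -(pos_weight * y * math.log(p)
                   + (1 - y) * math.log(1.0 - p))
    return total / len(probs)

# With pos_weight=6, a badly missed positive (p=0.1, y=1) is penalized
# exactly 6x harder than under the unweighted loss.
penalized = weighted_bce([0.1], [1], pos_weight=6.0)
baseline = weighted_bce([0.1], [1], pos_weight=1.0)
```

This also explains the non-zero initial recall the caption reports: from the first epochs, predicting "no flaw" everywhere is already expensive, so the model starts flagging positives immediately.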