Distributed Multi-Layer Editing for Rule-Level Knowledge in Large Language Models

Yating Wang, Wenting Zhao, Yaqi Zhao, Yongshun Gong, Yilong Yin, Haoliang Sun

Abstract

Large language models store not only isolated facts but also rules that support reasoning across symbolic expressions, natural language explanations, and concrete instances. Yet most model editing methods are built for fact-level knowledge, assuming that a target edit can be achieved through a localized intervention. This assumption does not hold for rule-level knowledge, where a single rule must remain consistent across multiple interdependent forms. We investigate this problem through a mechanistic study of rule-level knowledge editing. To support this study, we extend the RuleEdit benchmark from 80 to 200 manually verified rules spanning mathematics and physics. Fine-grained causal tracing reveals a form-specific organization of rule knowledge in transformer layers: formulas and descriptions are concentrated in earlier layers, while instances are more associated with middle layers. These results suggest that rule knowledge is not uniformly localized, and therefore cannot be reliably edited by a single-layer or contiguous-block intervention. Based on this insight, we propose Distributed Multi-Layer Editing (DMLE), which applies a shared early-layer update to formulas and descriptions and a separate middle-layer update to instances. While remaining competitive on standard editing metrics, DMLE achieves substantially stronger rule-level editing performance. On average, it improves instance portability and rule understanding by 13.91 and 50.19 percentage points, respectively, over the strongest baseline across GPT-J-6B, Qwen2.5-7B, Qwen2-7B, and LLaMA-3-8B. The code is available at https://github.com/Pepper66/DMLE.
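For readers unfamiliar with the tracing procedure, the following is a minimal sketch of MLP-level causal tracing for a single rule prompt, in the spirit of the average indirect effect (AIE) heatmaps reported in the paper. It assumes a GPT-J-style module layout (model.transformer.h[l].mlp, model.transformer.wte); the prompt, subject span, and noise scale are illustrative placeholders, not the paper's exact setup.

```python
# Sketch of MLP-level causal tracing (corrupt subject tokens, then restore
# one layer's clean MLP output at one position and see how much of the
# answer probability comes back). Illustrative only; not the paper's code.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/gpt-j-6b"          # move to GPU / fp16 as needed; 6B weights are large
model = AutoModelForCausalLM.from_pretrained(model_name).eval()
tok = AutoTokenizer.from_pretrained(model_name)

prompt = "The area of a circle of radius r equals"   # hypothetical formula prompt
answer = " pi"                                        # hypothetical target continuation
subject_span = (4, 6)                                 # assumed token positions of the subject

inputs = tok(prompt, return_tensors="pt")
n_tokens = inputs.input_ids.shape[1]
answer_id = tok(answer, add_special_tokens=False).input_ids[0]
layers = model.transformer.h                          # GPT-J block list

def answer_prob():
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits[0, -1].float(), dim=-1)[answer_id].item()

# 1) Clean run: cache every layer's MLP output at every position.
clean = {}
hooks = [blk.mlp.register_forward_hook(
            lambda m, i, out, l=l: clean.__setitem__(l, out.detach().clone()))
         for l, blk in enumerate(layers)]
p_clean = answer_prob()
for h in hooks:
    h.remove()

# 2) Corrupted baseline: add fixed Gaussian noise to the subject-token embeddings.
s, e = subject_span
noise = 0.1 * torch.randn(1, e - s, model.config.n_embd)   # noise scale is an assumption
def corrupt(m, i, out):
    out = out.clone()
    out[:, s:e] += noise.to(out.dtype)
    return out
h = model.transformer.wte.register_forward_hook(corrupt)
p_corrupt = answer_prob()
h.remove()

# 3) Restoration: rerun the corrupted model while patching one layer's clean
#    MLP output back in at one position. Indirect effect = p_restored - p_corrupt;
#    averaging over many rules and noise samples gives an AIE heatmap.
aie = torch.zeros(len(layers), n_tokens)
for l in range(len(layers)):
    for pos in range(n_tokens):
        def restore(m, i, out, l=l, pos=pos):
            out = out.clone()
            out[:, pos] = clean[l][:, pos]
            return out
        h1 = model.transformer.wte.register_forward_hook(corrupt)
        h2 = layers[l].mlp.register_forward_hook(restore)
        aie[l, pos] = answer_prob() - p_corrupt
        h1.remove(); h2.remove()
```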

Figures (3)

  • Figure 1: Causal tracing heatmaps on GPT-J-6B. The heatmap shows the average indirect effect (AIE) across MLP layers and token positions for formula, description, and instance. 1st sub and Last sub denote the first and last subject tokens, and Last the final prompt token. Formula and description peak in earlier layers, whereas instance peaks in middle layers.
  • Figure 2: Causal tracing heatmaps on Qwen2-7B, Qwen2.5-7B, and LLaMA-3-8B. Across models, formula and description peak in earlier layers, whereas instance peaks in middle layers, supporting a form-specific layer-wise organization of rule knowledge.
  • Figure 3: Overview of DMLE. Given a rule with three aligned forms, formula and description share an update $\Delta W_{fd}$, while instance uses a separate update $\Delta W_i$. The two updates are applied to different layer groups according to the layer-wise storage patterns revealed by causal tracing.
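As a rough illustration of how the two updates in Figure 3 might be applied, the sketch below adds a shared delta $\Delta W_{fd}$ to a group of early MLP layers and a separate delta $\Delta W_i$ to a group of middle layers. It is a minimal sketch under placeholder assumptions: the layer indices, the rank-1 form of each delta, and the key/residual vectors are illustrative, and how they are actually computed from the editing objective is specified in the paper, not in this snippet.

```python
# Sketch of a DMLE-style distributed update (illustrative only).
# Assumes GPT-J-style MLP down-projections (block.mlp.fc_out) and rank-1 deltas;
# the layer groups and the construction of keys/residuals are placeholders.
import torch

EARLY_LAYERS = [3, 4, 5]      # assumed storage sites for formula + description
MIDDLE_LAYERS = [13, 14, 15]  # assumed storage sites for instances

def rank_one_delta(key: torch.Tensor, residual: torch.Tensor) -> torch.Tensor:
    """Rank-1 weight delta that shifts the output for `key` by `residual`."""
    return torch.outer(residual, key) / (key @ key)

def apply_dmle_update(model, key_fd, resid_fd, key_i, resid_i):
    """Shared formula/description update on early layers, instance update on
    middle layers; each residual is split evenly across its layer group."""
    blocks = model.transformer.h
    for layer in EARLY_LAYERS:
        W = blocks[layer].mlp.fc_out.weight            # [hidden, intermediate]
        W.data += rank_one_delta(key_fd, resid_fd / len(EARLY_LAYERS))
    for layer in MIDDLE_LAYERS:
        W = blocks[layer].mlp.fc_out.weight
        W.data += rank_one_delta(key_i, resid_i / len(MIDDLE_LAYERS))
```

Splitting each residual across its layer group follows the common multi-layer editing pattern (e.g., MEMIT); the distinctive part sketched here is routing the shared formula/description update and the instance update to different layer groups, in line with the layer-wise storage patterns described above.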