
CaKE: Circuit-aware Editing Enables Generalizable Knowledge Learners


By Yunzhi Yao and 6 other authors

Abstract: Knowledge Editing (KE) enables the modification of outdated or incorrect information in large language models (LLMs). While existing KE methods can update isolated facts, they often fail to generalize these updates to multi-hop reasoning tasks that rely on the modified knowledge. Through an analysis of reasoning circuits (the neural pathways LLMs use for knowledge-based inference), we find that current layer-localized KE approaches (e.g., MEMIT, WISE), which edit only a single layer or a few model layers, inadequately integrate updated knowledge into these reasoning pathways. To address this limitation, we present CaKE (Circuit-aware Knowledge Editing), a novel method that enhances the effective integration of updated knowledge in LLMs. By leveraging only a few curated data samples guided by our circuit-based analysis, CaKE stimulates the model to develop appropriate reasoning circuits for newly incorporated knowledge. Experiments show that CaKE enables more accurate and consistent use of edited knowledge across related reasoning tasks, achieving an average improvement of 20% in multi-hop reasoning accuracy on the MQuAKE dataset while requiring less memory than existing KE methods. We release the code and data at this https URL.

Submission history

From: Ningyu Zhang
[v1]
Thu, 20 Mar 2025 17:14:34 UTC (1,174 KB)
[v2]
Tue, 23 Sep 2025 17:10:14 UTC (1,161 KB)
[v3]
Thu, 20 Nov 2025 01:21:10 UTC (1,163 KB)

