Model Editing Harms General Abilities of Large Language Models: Regularization to the Rescue

Accepted at EMNLP 2024 (Main Track) 🎉

Jia-Chen Gu¹ Hao-Xiang Xu² Jun-Yu Ma² Pan Lu¹ Zhen-Hua Ling² Kai-Wei Chang¹ Nanyun Peng¹
¹University of California, Los Angeles   ²University of Science and Technology of China

Abstract

Model editing is a technique that updates large language models (LLMs) with new knowledge to alleviate hallucinations without resource-intensive retraining. While current model editing methods can effectively modify a model's behavior within a specific area of interest, they often overlook potential unintended side effects on the general abilities of LLMs, such as reasoning, natural language inference, and question answering. In this paper, we raise concerns that model editing's improvements in factuality may come at the cost of a significant degradation of the model's general abilities. We systematically analyze these side effects by evaluating four popular editing methods on three LLMs across eight representative tasks. Our extensive empirical experiments show that it is challenging for current editing methods to simultaneously improve the factuality of LLMs and maintain their general abilities. Our analysis reveals that the side effects are caused by model editing altering the original model weights excessively, leading to overfitting to the edited facts. To mitigate this, we propose a method named RECT, which regularizes the edit update weights by imposing constraints on their complexity based on the RElative Change in weighT. Evaluation results show that RECT can significantly mitigate the side effects of editing while still maintaining over 94% of editing performance.


Model Editing: Definition and Evaluation

Model editing aims to efficiently change a model's behavior on targeted knowledge without affecting other inputs. Its effectiveness is evaluated by:

Reliability: recall the edited fact under the original edit prompt. E.g., Who is the President of the United States?

Generalization: recall the edited fact under in-scope paraphrase prompts. E.g., Who currently holds the office of President of the United States?

Locality: remain unchanged for prompts outside the editing scope. E.g., Who is the president of France?
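
For concreteness, generalization and locality can be computed as exact-match rates. Below is a minimal Python sketch, where the answer callable (wrapping greedy decoding of the edited model) and the data layout are our own illustrative assumptions, not part of any editing library:

from typing import Callable

def edit_scores(answer: Callable[[str], str],
                paraphrases: list[str],
                target: str,
                out_of_scope: list[tuple[str, str]]) -> dict[str, float]:
    """Score one edit by exact match over greedy decodings."""
    # Generalization: the edited answer should hold under paraphrases.
    generalization = sum(
        answer(p).strip() == target for p in paraphrases) / len(paraphrases)
    # Locality: out-of-scope prompts should keep their pre-edit answers.
    locality = sum(
        answer(p).strip() == old for p, old in out_of_scope) / len(out_of_scope)
    return {"generalization": generalization, "locality": locality}

# Hypothetical usage for the example edit above:
# edit_scores(answer,
#             ["Who currently holds the office of President of the United States?"],
#             "<edited answer>",
#             [("Who is the president of France?", "<pre-edit answer>")])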

Side Effects of Model Editing

In this work, we ask:

Do model editing's improvements in factuality come at the cost of a significant degradation of the model's general abilities?

Our research shows that LLMs are not robust to weight perturbations.

Evaluation Paradigm

Single- vs. Sequential-editing: make a single editing operation, or apply multiple editing operations successively.

Instance- vs. Batch-editing: use only one instance or multiple instances per editing operation.

We measure the zero-shot performance of unedited and edited models on a variety of downstream tasks.
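
This paradigm amounts to a single evaluation loop. A hypothetical harness is sketched below, where apply_edits and eval_tasks are assumed stand-ins for an editing method (e.g., MEMIT) and the zero-shot task suite, not functions from any specific library:

def sequential_editing_eval(model, requests, batch_size, apply_edits, eval_tasks):
    """Apply `requests` in successive batches and probe downstream tasks
    after every editing operation; returns the edited model and a history
    of (number of edits applied, task scores) pairs."""
    history = []
    for i in range(0, len(requests), batch_size):
        model = apply_edits(model, requests[i:i + batch_size])  # one operation
        history.append((min(i + batch_size, len(requests)), eval_tasks(model)))
    return model, history

# batch_size=1 over many requests -> instance- and sequential-editing;
# batch_size=len(requests)        -> a single batch-editing operation.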

Evaluation Setup

Editing Methods: KN, MEND, ROME and MEMIT (only MEND and MEMIT support batch-editing).

Editing Dataset: Zero-Shot Relation Extraction (ZsRE).

Selected LLMs: GPT-2 XL (1.5B), LLaMA-1 (7B), and LLaMA-2 (7B).

Downstream Tasks, Datasets and Metrics:

  • Reasoning: GSM8K (solve rate)
  • NLI: RTE (accuracy)
  • Open-domain QA: Natural Questions (EM)
  • Closed-domain QA: BoolQ (EM)
  • Dialogue: MuTual (R4@1)
  • Summarization: SAMSum (ROUGE)
  • NER: CoNLL03 (entity-level F1)
  • Sentiment analysis: SST-2 (accuracy)

Evaluation Results

Impact of Sequential-editing

  • Although there is only one instance per editing operation, the performance of edited models on various tasks fluctuates significantly and shows a downward trend as the number of edits increases. Strikingly, KN degrades performance to nearly zero on all selected tasks after just a single edit.
  • The selected LLMs are not robust to weight perturbations even when less than 1% of their parameters are edited; slight perturbations can significantly affect their general abilities.
  • The difficulty of effectively coupling current editing algorithms with LLMs lies in the dual objective of improving model factuality while simultaneously maintaining general abilities.

Impact of Batch-editing

  • Even with a single editing operation, edited models exhibit performance degradation as the batch size increases in most cases.
  • These results underline the sensitivity of the models to increases in batch size, calling for more research on scalable editing methods that can efficiently edit multiple instances at once.

Impact of Batch- and Sequential-editing

  • We also evaluate a joint setting to understand how these two factors collaboratively influence general abilities; the results echo our previous observations.

Analysis of Causes of Side Effects

The side effects of editing stem from changing the original model weights excessively, resulting in overfitting to the edited facts.

We define δ, the absolute value of the relative change in weight, to characterize the degree of change of each element in ∆W: elementwise, δ = |∆W| / |W|, where W is the original weight matrix.

  • The update ∆W is effectively sparse: most of its elements change only slightly, with only 20% (10%) of elements having δ greater than 0.077 (0.171).
  • Accumulated overfitting across multiple edits amplifies changes to the original weights: the proportion of elements whose δ exceeds a given threshold increases significantly as the number of edits grows.
  • Visualizing the weight change |∆W|, i.e., the difference between the final edited weights and the original unedited weights, yields consistent findings: ∆W is quite sparse, and accumulation across multiple edits amplifies the changes.
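
As a minimal sketch of this analysis, δ can be computed directly from the weight matrices, assuming the elementwise definition δ = |∆W| / |W| above (the eps guard is our own numerical choice):

import numpy as np

def relative_change(W: np.ndarray, W_edited: np.ndarray,
                    eps: float = 1e-8) -> np.ndarray:
    """Elementwise delta = |W_edited - W| / |W|, i.e., |dW| / |W|.
    eps avoids division by zero for near-zero original weights."""
    return np.abs(W_edited - W) / (np.abs(W) + eps)

# Fraction of elements whose relative change exceeds a threshold:
# delta = relative_change(W, W_edited)
# (delta > 0.077).mean()   # ~0.20 after a single edit, per the analysis above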

RECT: RElative Change in weighT

We propose a regularization method named RECT to discourage overly complex editing updates that are more likely to overfit. δ indicates the importance of each element in ∆W when inserting new editing facts, splitting the update into two parts (see the sketch after this list):

  • Principal editing: the top-k% of elements in ∆W that change the most according to δ retain their update values unchanged.
  • Minor editing: the remaining elements in ∆W are treated as minor contributions to editing and are set to zero for regularization.
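
A minimal sketch of the RECT update follows, again assuming δ = |∆W| / |W| elementwise; the helper below is illustrative rather than the paper's exact implementation:

import numpy as np

def rect_update(W: np.ndarray, dW: np.ndarray,
                top_percent: float = 40.0, eps: float = 1e-8) -> np.ndarray:
    """Principal editing: keep the top-k% elements of dW ranked by
    delta = |dW| / |W|; minor editing: zero out the rest."""
    delta = np.abs(dW) / (np.abs(W) + eps)
    cutoff = np.percentile(delta, 100.0 - top_percent)  # threshold for top-k%
    return np.where(delta >= cutoff, dW, 0.0)

# The regularized edit is then applied in place of the raw update:
# W_edited = W + rect_update(W, dW, top_percent=40.0)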

Regularization Results

Editing Performance

  • RECT, keeping an appropriate top-40% of elements in ∆W, maintains over 94% of reliability and generalization, and even improves locality.
  • Setting too many elements of ∆W to zero (e.g., keeping only the top-20%) can hurt editing, as some important editing information is accidentally removed.
  • Compared with Random 40% and PCA 40%, RECT top-40% achieves the best results, indicating its effectiveness in selecting the principal editing information.
  • RECT also exhibits advantages in efficiency, since it eliminates the complex calculations required by PCA.

General Downstream Task Performance

  • As the proportion of elements in ∆W set to zero increases, overfitting is more strongly regularized and the change to the original weights shrinks, so general abilities are better preserved than with unregularized editing on most tasks.
  • Challenges remain for some tasks such as sentiment analysis, and it is unclear whether RECT works for larger numbers of edits; we leave this to future work.

Takeaway

  • Existing editing algorithms can cause perturbations to LLMs.
  • How can the side effects of editing be mitigated to prevent harm to the general abilities of LLMs? Regularization to the rescue.
  • Model editing is a good solution for upgrading models in terms of specificity and cost, but it is currently unstable and uncontrollable, calling for more investigation.

BibTeX

@inproceedings{gu-etal-2024-model,
 title = "Model Editing Harms General Abilities of Large Language Models: Regularization to the Rescue",
 author = "Gu, Jia-Chen  and
           Xu, Hao-Xiang  and
           Ma, Jun-Yu  and
           Lu, Pan  and
           Ling, Zhen-Hua  and
           Chang, Kai-Wei  and
           Peng, Nanyun",
 booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
 year = "2024",
 publisher = "Association for Computational Linguistics"
}