Model editing is a technique that updates large language models (LLMs) with new knowledge to alleviate hallucinations without resource-intensive retraining. While current model editing methods can effectively modify a model’s behavior within a specific area of interest, they often overlook potential unintended side effects on the general abilities of LLMs, such as reasoning, natural language inference, and question answering. In this paper, we raise concerns that model editing’s improvements in factuality may come at the cost of a significant degradation of the model’s general abilities. We systematically analyze these side effects by evaluating four popular editing methods on three LLMs across eight representative tasks. Our extensive empirical experiments show that current editing methods struggle to simultaneously improve the factuality of LLMs and maintain their general abilities. Our analysis reveals that the side effects arise because model editing alters the original model weights excessively, leading to overfitting to the edited facts. To mitigate this, we propose RECT, a method that regularizes the edit update weights by imposing constraints on their complexity based on the RElative Change in weighT. Evaluation results show that RECT significantly mitigates the side effects of editing while still maintaining over 94% editing performance.
Model editing aims to efficiently change model behavior without affecting other inputs. Its effectiveness is evaluated along two dimensions, illustrated in the sketch after this list:
Generalization: the edited model should recall the fact under in-scope paraphrase prompts, e.g., "Who currently holds the office of President of the United States?"
Locality: the model's output should remain unchanged for prompts outside the editing scope, e.g., "Who is the president of France?"
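A minimal sketch of how these two metrics might be checked; the fact, the probe prompts, and the `generate`/`pre_edit_generate` callables are illustrative assumptions, not the paper's evaluation code:

```python
# Hypothetical edited fact and probes (illustrative values, not from the paper).
edit = {
    "paraphrase": "Who currently holds the office of President of the United States?",
    "new_answer": "Joe Biden",
}
locality_probe = "Who is the president of France?"

def evaluate_edit(generate, pre_edit_generate):
    """generate / pre_edit_generate: callables mapping a prompt to the model's
    answer after / before editing (an assumed interface)."""
    # Generalization: the paraphrase should elicit the newly edited answer.
    generalization = generate(edit["paraphrase"]) == edit["new_answer"]
    # Locality: out-of-scope prompts should be answered exactly as before.
    locality = generate(locality_probe) == pre_edit_generate(locality_probe)
    return {"generalization": generalization, "locality": locality}
```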
In this work, we ask: does improving the factuality of LLMs through model editing come at the cost of their general abilities?
Our research shows that LLMs are not robust to weight perturbations.
Single- vs. Sequential-editing: perform a single editing operation, or multiple editing operations in succession.
Instance- vs. Batch-editing: use only one instance, or multiple instances, per editing operation (see the sketch below).
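A sketch of the difference between these settings, assuming a generic `edit_fn(model, requests)` interface (an assumption for illustration; the actual editing methods differ in how they compute and apply updates):

```python
def sequential_editing(model, edit_fn, requests):
    """Apply edits one at a time; each edit operates on the previously edited model."""
    for request in requests:
        model = edit_fn(model, [request])  # instance-editing: one fact per operation
    return model

def batch_editing(model, edit_fn, requests, batch_size):
    """Apply multiple facts per editing operation (supported by MEND and MEMIT)."""
    for i in range(0, len(requests), batch_size):
        model = edit_fn(model, requests[i : i + batch_size])
    return model
```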
Evaluation Protocol: zero-shot performance of unedited and edited models on a variety of downstream tasks.
Editing Methods: KN, MEND, ROME, and MEMIT (only MEND and MEMIT support batch-editing).
Editing Dataset: Zero-Shot Relation Extraction (ZsRE).
Selected LLMs: GPT-2 XL (1.5B), LLaMA-1 (7B), and LLaMA-2 (7B).
Downstream Tasks, Datasets, and Metrics: eight representative tasks covering general abilities such as reasoning, natural language inference, and question answering.
The side effects of editing come from changing the original model weights too much, which results in overfitting to the edited facts.
We define δ, the absolute value of the relative change in weight, to characterize the degree of change of each element of the edit update ∆W relative to the original weight W, i.e., δ = |∆W / W| element-wise.
RECT is a regularization method that discourages overly complex editing updates, which are more likely to overfit. δ serves to indicate the importance of each element of ∆W when inserting the new editing facts, as sketched below.
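A minimal sketch of this regularization, assuming that the entries of ∆W with the largest relative change δ are kept while the rest are zeroed out; the function name, keep ratio, and epsilon are illustrative assumptions:

```python
import torch

def rect_regularize(W: torch.Tensor, delta_W: torch.Tensor,
                    keep_ratio: float = 0.02) -> torch.Tensor:
    """Regularize an edit update delta_W by its relative change w.r.t. W."""
    # Element-wise relative change in weight; eps guards against division by zero.
    delta = (delta_W / (W.abs() + 1e-12)).abs()

    # Threshold at the keep_ratio quantile: entries above it are deemed
    # important for the edit, the rest are considered overfitting-prone.
    num_keep = max(1, int(keep_ratio * delta.numel()))
    threshold = delta.flatten().topk(num_keep).values.min()

    # Keep only the most important entries of the update.
    return delta_W * (delta >= threshold).to(delta_W.dtype)
```

Under this sketch, the edited layer weight becomes `W + rect_regularize(W, delta_W)` instead of `W + delta_W`.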
@inproceedings{gu-etal-2024-model,
title = "Model Editing Harms General Abilities of Large Language Models: Regularization to the Rescue",
author = "Gu, Jia-Chen and
Xu, Hao-Xiang and
Ma, Jun-Yu and
Lu, Pan and
Ling, Zhen-Hua and
Chang, Kai-Wei and
Peng, Nanyun",
booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
year = "2024",
publisher = "Association for Computational Linguistics"
}