---
license: cc-by-nc-sa-4.0
---
<div align="center"> | |
**Editing Conceptual Knowledge for Large Language Models** | |
--- | |
<p align="center"> | |
<a href="#-conceptual-knowledge-editing">Overview</a> • | |
<a href="#-usage">How To Use</a> • | |
<a href="#-citation">Citation</a> • | |
<a href="https://arxiv.org/abs/2403.06259">Paper</a> • | |
<a href="https://zjunlp.github.io/project/ConceptEdit">Website</a> | |
</p> | |
</div> | |
## 💡 Conceptual Knowledge Editing

<div align="center">
<img src="./flow1.gif" width="70%" height="70%" />
</div>
### Task Definition

A **concept** is a generalization formed in the process of cognition; it captures the shared and essential characteristics of a class of entities.
Concept editing therefore aims to modify the definition of a concept, thereby altering the behavior of LLMs when they process that concept.
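For intuition, a concept edit replaces the definition associated with a concept rather than a single fact about one instance. The sketch below is purely illustrative; the field names are hypothetical and do not necessarily match the dataset's actual schema.

```python
# Hypothetical concept-editing request (field names are illustrative,
# not the dataset's exact schema): the edit target is the concept itself,
# and instances of the concept may change behavior as a side effect.
concept_edit_request = {
    "concept": "mammal",
    "prompt": "A mammal is",
    "original_definition": "a vertebrate animal that nourishes its young with milk.",
    "target_definition": "a vertebrate animal that lays eggs.",  # counterfactual edit
    "instances": ["dog", "whale", "bat"],  # instance-level behavior may shift after the edit
}
```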
### Evaluation

To analyze conceptual knowledge modification, we adopt the metrics used for factual editing (the target is the concept $C$ rather than a factual instance $t$). A simplified scoring sketch follows the lists below.

- `Reliability`: the success rate of editing given the editing description
- `Generalization`: the success rate of editing **within** the editing scope
- `Locality`: whether the model's output for unrelated inputs remains unchanged after editing

Concept-specific evaluation metrics:

- `Instance Change`: capturing the intricacies of instance-level changes induced by the concept edit
- `Concept Consistency`: the semantic similarity between the generated concept definition and the target definition
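The sketch below gives a rough, simplified view of how these metrics can be scored; exact match and the injected similarity function are stand-ins for intuition only, not the paper's actual implementation.

```python
# Simplified metric sketch; exact match and the injected similarity function
# are stand-ins for the paper's actual scoring, shown only for intuition.
from typing import Callable, List

def success_rate(post_edit_outputs: List[str], targets: List[str]) -> float:
    """Reliability / Generalization: fraction of (in-scope) prompts whose
    post-edit output matches the target (exact match as a proxy)."""
    hits = sum(o.strip() == t.strip() for o, t in zip(post_edit_outputs, targets))
    return hits / len(targets)

def locality(pre_edit_outputs: List[str], post_edit_outputs: List[str]) -> float:
    """Locality: fraction of unrelated prompts whose output is unchanged after editing."""
    same = sum(a.strip() == b.strip() for a, b in zip(pre_edit_outputs, post_edit_outputs))
    return same / len(pre_edit_outputs)

def instance_change(pre_labels: List[bool], post_labels: List[bool]) -> float:
    """Instance Change: fraction of instances whose membership prediction
    flips after the concept edit (a simplified proxy)."""
    return sum(a != b for a, b in zip(pre_labels, post_labels)) / len(pre_labels)

def concept_consistency(generated_definition: str, target_definition: str,
                        similarity: Callable[[str, str], float]) -> float:
    """Concept Consistency: semantic similarity between the model's generated
    definition and the target definition; `similarity` (e.g. cosine similarity
    of sentence embeddings) is injected so any encoder can be used."""
    return similarity(generated_definition, target_definition)
```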
## 🌟 Usage

### 🎍 Current Implementation

As in the main table of our paper, four editing methods are supported for conceptual knowledge editing.
| **Method** | GPT-2 | GPT-J | LLaMA2-13B-Chat | Mistral-7B-v0.1 |
| :--------: | :---: | :---: | :-------------: | :-------------: |
| FT         | ✅    | ✅    | ✅              | ✅              |
| ROME       | ✅    | ✅    | ✅              | ✅              |
| MEMIT      | ✅    | ✅    | ✅              | ✅              |
| PROMPT     | ✅    | ✅    | ✅              | ✅              |
### 💻 Run

You can follow [EasyEdit](https://github.com/zjunlp/EasyEdit/blob/main/examples/ConceptEdit.md) to run the experiments.
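As a rough orientation, the snippet below follows EasyEdit's general editing interface (a `BaseEditor` built from method-specific hyperparameters). The ConceptEdit example linked above is the authoritative reference; the exact hyperparameter files and arguments used for concept editing may differ.

```python
# Rough sketch following EasyEdit's general editing interface; paths and
# arguments are illustrative, and the ConceptEdit example in the EasyEdit
# repository is the authoritative reference.
from easyeditor import BaseEditor, ROMEHyperParams

# Load method-specific hyperparameters (path is illustrative).
hparams = ROMEHyperParams.from_hparams('./hparams/ROME/gpt2-xl')
editor = BaseEditor.from_hparams(hparams)

# Edit the definition associated with a concept rather than a single fact.
metrics, edited_model, _ = editor.edit(
    prompts=['A mammal is'],
    target_new=['a vertebrate animal that lays eggs.'],  # counterfactual target
    subject=['mammal'],
)
print(metrics)
```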
## 📖 Citation

Please cite our paper if you use **ConceptEdit** in your work.
```bibtex
@misc{wang2024editing,
      title={Editing Conceptual Knowledge for Large Language Models},
      author={Xiaohan Wang and Shengyu Mao and Ningyu Zhang and Shumin Deng and Yunzhi Yao and Yue Shen and Lei Liang and Jinjie Gu and Huajun Chen},
      year={2024},
      eprint={2403.06259},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
## 🎉 Acknowledgement

We would like to express our sincere gratitude to [DBpedia](https://www.dbpedia.org/resources/ontology/), [Wikidata](https://www.wikidata.org/wiki/Wikidata:Introduction), [OntoProbe-PLMs](https://github.com/vickywu1022/OntoProbe-PLMs), and [ROME](https://github.com/kmeng01/rome).
Their contributions are invaluable to the advancement of our work.