---
license: cc-by-nc-sa-4.0
---


<div align="center">

**Editing Conceptual Knowledge for Large Language Models**

---

<p align="center">
  <a href="#-conceptual-knowledge-editing">Overview</a> •
    <a href="#-usage">How To Use</a> •
    <a href="#-citation">Citation</a> •
    <a href="https://arxiv.org/abs/2403.06259">Paper</a> •
    <a href="https://zjunlp.github.io/project/ConceptEdit">Website</a> 
</p>
</div>


## 💡 Conceptual Knowledge Editing

<div align=center>
<img src="./flow1.gif" width="70%" height="70%" />
</div>

### Task Definition

A **concept** is a generalization formed in the process of cognition: it represents the shared features and essential characteristics of a class of entities.
Concept editing therefore aims to modify the definition of a concept, thereby altering the behavior of LLMs when they process that concept.


### Evaluation

To analyze conceptual knowledge modification, we adopt the metrics used for factual editing, with the target being the concept $C$ rather than a factual instance $t$.

- `Reliability`: the success rate of editing on the given editing description
- `Generalization`: the success rate of editing on inputs **within** the editing scope
- `Locality`: whether the model's output for unrelated inputs remains unchanged after editing
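
These three metrics can all be phrased as success rates over prompt sets. The snippet below is a minimal sketch, not the paper's evaluation code: the pre- and post-edit models are replaced by lookup tables, and the edited concept, prompts, and answers are all hypothetical.

```python
# Minimal sketch of Reliability / Generalization / Locality for one concept edit.
# `pre_edit` and `post_edit` stand in for model predictions before/after editing.

def success_rate(predict, cases):
    """Fraction of (prompt, target) pairs answered with the edited target."""
    return sum(predict(p) == t for p, t in cases) / len(cases)

# Toy edit: redefine the concept "mammal" so that dolphins are excluded.
post_answers = {
    "Is a dolphin a mammal?": "no",             # the editing description itself
    "Do dolphins belong to mammals?": "no",     # paraphrase within the editing scope
    "Is Paris the capital of France?": "yes",   # unrelated input
}
pre_answers = dict(post_answers)
pre_answers["Is a dolphin a mammal?"] = "yes"
pre_answers["Do dolphins belong to mammals?"] = "yes"

post_edit, pre_edit = post_answers.get, pre_answers.get

reliability = success_rate(post_edit, [("Is a dolphin a mammal?", "no")])
generalization = success_rate(post_edit, [("Do dolphins belong to mammals?", "no")])

unrelated = ["Is Paris the capital of France?"]
locality = sum(post_edit(p) == pre_edit(p) for p in unrelated) / len(unrelated)
```

For this toy edit all three scores are 1.0; with a real model, each is averaged over the corresponding prompt set in the dataset.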


**Concept-Specific Evaluation Metrics**

- `Instance Change`: capturing the intricacies of the instance-level changes induced by the edit
- `Concept Consistency`: the semantic similarity between the generated concept definition and the target definition
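
As a self-contained stand-in for `Concept Consistency`, the sketch below scores the similarity between a generated definition and a reference one. The paper's metric relies on semantic (embedding-based) similarity; the bag-of-words cosine here, along with the two example definitions, is purely illustrative.

```python
# Illustrative Concept Consistency: cosine similarity between a generated
# concept definition and the reference definition. A real evaluation would
# use sentence embeddings instead of raw word counts.
from collections import Counter
import math

def cosine_bow(a, b):
    """Cosine similarity of two texts under a bag-of-words representation."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = math.sqrt(sum(c * c for c in va.values())) * \
           math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

reference = "a large aquatic mammal that breathes air"
generated = "a large aquatic animal that breathes air"
consistency = cosine_bow(reference, generated)  # high, but below 1.0
```

A score near 1.0 means the model's generated definition closely matches the target definition; divergent definitions pull the score toward 0.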


## 🌟 Usage

### 🎍 Current Implementation
As shown in the main table of our paper, four editing methods are supported for conceptual knowledge editing.
| **Method** | GPT-2 | GPT-J | LLaMA2-13B-Chat | Mistral-7B-v0.1 |
| :--------: | :---: | :---: | :-------------: | :-------------: |
| FT | ✅ | ✅ | ✅ | ✅ |
| ROME | ✅ | ✅ | ✅ | ✅ |
| MEMIT | ✅ | ✅ | ✅ | ✅ |
| PROMPT | ✅ | ✅ | ✅ | ✅ |


### 💻 Run
You can follow [EasyEdit](https://github.com/zjunlp/EasyEdit/blob/main/examples/ConceptEdit.md) to run the experiments.


## 📖 Citation

Please cite our paper if you use **ConceptEdit** in your work.

```bibtex
@misc{wang2024editing,
      title={Editing Conceptual Knowledge for Large Language Models}, 
      author={Xiaohan Wang and Shengyu Mao and Ningyu Zhang and Shumin Deng and Yunzhi Yao and Yue Shen and Lei Liang and Jinjie Gu and Huajun Chen},
      year={2024},
      eprint={2403.06259},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

## 🎉 Acknowledgement

We would like to express our sincere gratitude to [DBpedia](https://www.dbpedia.org/resources/ontology/), [Wikidata](https://www.wikidata.org/wiki/Wikidata:Introduction), [OntoProbe-PLMs](https://github.com/vickywu1022/OntoProbe-PLMs), and [ROME](https://github.com/kmeng01/rome).

Their contributions are invaluable to the advancement of our work.