model update
README.md CHANGED
@@ -159,20 +159,20 @@ RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on
 [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity).
 Fine-tuning is done via the [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more details).
 It achieves the following results on the relation understanding tasks:
-- Analogy Question ([full result](https://huggingface.co/relbert/relbert-roberta-large-semeval2012-average-prompt-a-nce/raw/main/analogy.json)):
+- Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/relbert-roberta-large-semeval2012-average-prompt-a-nce/raw/main/analogy.json)):
     - Accuracy on SAT (full): 0.6925133689839572
     - Accuracy on SAT: 0.6913946587537092
     - Accuracy on BATS: 0.7776542523624236
     - Accuracy on U2: 0.6535087719298246
     - Accuracy on U4: 0.6666666666666666
     - Accuracy on Google: 0.936
-- Lexical Relation Classification ([full result](https://huggingface.co/relbert/relbert-roberta-large-semeval2012-average-prompt-a-nce/raw/main/classification.json)):
+- Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/relbert-roberta-large-semeval2012-average-prompt-a-nce/raw/main/classification.json)):
     - Micro F1 score on BLESS: 0.9153231881874341
     - Micro F1 score on CogALexV: 0.8509389671361502
     - Micro F1 score on EVALution: 0.6771397616468039
     - Micro F1 score on K&H+N: 0.9575015649996522
     - Micro F1 score on ROOT09: 0.9025383892196804
-- Relation Mapping ([full result](https://huggingface.co/relbert/relbert-roberta-large-semeval2012-average-prompt-a-nce/raw/main/relation_mapping.json)):
+- Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/relbert-roberta-large-semeval2012-average-prompt-a-nce/raw/main/relation_mapping.json)):
     - Accuracy on Relation Mapping: 92.14285714285714

@@ -214,7 +214,7 @@ The following hyperparameters were used during training:
 The full configuration can be found at the [fine-tuning parameter file](https://huggingface.co/relbert/relbert-roberta-large-semeval2012-average-prompt-a-nce/raw/main/trainer_config.json).

 ### Reference
-If you use any resource from
+If you use any resource from RelBERT, please consider citing our [paper](https://aclanthology.org/2021.eacl-demos.7/).

 ```
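The analogy-question accuracies above are computed by comparing relation embeddings: each word pair is mapped to a vector, and the candidate pair whose vector is most cosine-similar to the query pair's is selected as the answer. A minimal sketch of that scoring step, using toy 3-dimensional vectors in place of real RelBERT embeddings (which are 1024-dimensional); `answer_analogy` and the vectors here are illustrative, not the relbert library API:

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two vectors (0.0 for a zero vector)."""
    num = sum(a * b for a, b in zip(u, v))
    den = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def answer_analogy(query_vec, candidate_vecs):
    """Return the index of the candidate pair whose relation embedding
    is most similar to the query pair's embedding."""
    sims = [cosine(query_vec, c) for c in candidate_vecs]
    return max(range(len(sims)), key=sims.__getitem__)

# Toy example: candidate 1 points in nearly the same direction as the query.
query = [1.0, 0.0, 0.5]
candidates = [[0.0, 1.0, 0.0], [0.9, 0.1, 0.4], [-1.0, 0.0, 0.0]]
assert answer_analogy(query, candidates) == 1
```

Accuracy on a benchmark such as SAT is then simply the fraction of questions for which the selected index matches the gold answer.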
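For the lexical relation classification scores, micro F1 pools true positives, false positives, and false negatives across all relation labels before computing a single F1 value (for single-label classification this coincides with accuracy). A self-contained sketch of the metric; the function name is illustrative, and in practice a library such as scikit-learn would be used:

```python
def micro_f1(y_true, y_pred, labels):
    """Micro-averaged F1: aggregate counts over all labels, then compute F1."""
    tp = fp = fn = 0
    for lab in labels:
        tp += sum(1 for t, p in zip(y_true, y_pred) if p == lab and t == lab)
        fp += sum(1 for t, p in zip(y_true, y_pred) if p == lab and t != lab)
        fn += sum(1 for t, p in zip(y_true, y_pred) if p != lab and t == lab)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Small worked example: 3 of 4 predictions correct.
gold = ["hyper", "mero", "hyper", "random"]
pred = ["hyper", "mero", "mero", "random"]
assert abs(micro_f1(gold, pred, ["hyper", "mero", "random"]) - 0.75) < 1e-9
```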