---
license: apache-2.0
tags:
- token-classification
datasets:
- conll2003
- conllpp
language:
- en
metrics:
- f1: 92.85
- f1(valid): 96.71
- f1(CoNLLpp(2023)): 92.35
- f1(CoNLLpp(CrossWeigh)): 94.26
---
# Roberta-Base-CoNLL2003
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the conll2003 dataset.
## Model Usage
We trained this model with a custom tokenizer that applies [BPE-Dropout](https://aclanthology.org/2020.acl-main.170/).
As a result, the tokenizer cannot be loaded with `AutoTokenizer`; however, if subword regularization is not needed (as at inference time), the standard `RobertaTokenizer` can be substituted.
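For context, BPE-Dropout randomly skips merge rules during tokenization, so the same word can segment differently across passes. A minimal sketch using the Hugging Face `tokenizers` library (file names are placeholders; this is not the repository's actual tokenizer code):
```python
from tokenizers import Tokenizer
from tokenizers.models import BPE

# Placeholder vocab/merges files; `dropout` is the probability of
# skipping a merge rule, which yields varied segmentations
bpe = BPE.from_file("vocab.json", "merges.txt", dropout=0.1)
tokenizer = Tokenizer(bpe)

for _ in range(3):
    # With dropout > 0, repeated encodings of the same word can differ
    print(tokenizer.encode("unbelievable").tokens)
```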
Example and Tokenizer Repository: [github](https://github.com/4ldk/CoNLL2003_Choices)
```python
from transformers import RobertaTokenizer, AutoModelForTokenClassification
from transformers import pipeline

# The custom tokenizer cannot be loaded with AutoTokenizer; RobertaTokenizer
# works here because the released model was trained without subword regularization.
tokenizer = RobertaTokenizer.from_pretrained("4ldk/Roberta-Base-CoNLL2003")
model = AutoModelForTokenClassification.from_pretrained("4ldk/Roberta-Base-CoNLL2003")

# Group subword tokens into whole-entity spans in the pipeline output
nlp = pipeline("ner", model=model, tokenizer=tokenizer, grouped_entities=True)

example = "My name is Philipp and I live in Germany"
nlp(example)
```
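With `grouped_entities=True`, the pipeline returns one dictionary per detected entity. For the example above, the output looks roughly like this (scores and offsets are illustrative, not actual model output):
```python
[{'entity_group': 'PER', 'score': 0.99, 'word': ' Philipp', 'start': 11, 'end': 18},
 {'entity_group': 'LOC', 'score': 0.99, 'word': ' Germany', 'start': 33, 'end': 40}]
```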
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-5
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW with betas=(0.9,0.999), epsilon=1e-08, and weight decay=0.01
- lr_scheduler_type: linear with warmup rate = 0.1
- num_epochs: 20
- subword regularization: p = 0.0 (i.e., trained without subword regularization)
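Under these settings, the optimizer and scheduler can be constructed roughly as follows (a sketch, not the authors' training script; the step count is illustrative):
```python
from torch.optim import AdamW
from transformers import AutoModelForTokenClassification, get_linear_schedule_with_warmup

# 9 labels = O plus B-/I- tags for PER, ORG, LOC, MISC
model = AutoModelForTokenClassification.from_pretrained("roberta-base", num_labels=9)

# Illustrative step count; the actual value depends on the augmented dataset size
num_epochs, steps_per_epoch = 20, 439
total_steps = num_epochs * steps_per_epoch

optimizer = AdamW(model.parameters(), lr=5e-5, betas=(0.9, 0.999),
                  eps=1e-8, weight_decay=0.01)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=int(0.1 * total_steps),  # warmup rate = 0.1
    num_training_steps=total_steps,
)
```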
In addition, each training sentence was extended with the sentences that follow it in the original corpus, so training cannot be reproduced directly from the dataset published on Hugging Face. A rough sketch of this context construction follows.
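This helper is hypothetical and reflects our reading of the description above; the actual preprocessing lives in the linked repository:
```python
def with_following_sentences(sentences, idx, tokenizer, max_tokens=512):
    """Extend sentences[idx] with the sentences following it, up to a token budget."""
    text = sentences[idx]
    for nxt in sentences[idx + 1:]:
        candidate = text + " " + nxt
        # Stop before the concatenation exceeds the model's input budget
        if len(tokenizer.tokenize(candidate)) > max_tokens:
            break
        text = candidate
    return text
```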
### Training results
#### CoNLL2003
It achieves the following results on the evaluation set:
- Precision: 0.9707
- Recall: 0.9636
- F1: 0.9671
It achieves the following results on the test set:
- Precision: 0.9352
- Recall: 0.9218
- F1: 0.9285
#### CoNLLpp(2023)
[Do CoNLL-2003 Named Entity Taggers Still Work Well in 2023?](https://aclanthology.org/2023.acl-long.459.pdf) ([github](https://github.com/ShuhengL/acl2023_conllpp))
- Precision: 0.9244
- Recall: 0.9225
- F1: 0.9235
#### CoNLLpp(CrossWeigh)
[CrossWeigh: Training Named Entity Tagger from Imperfect Annotations](https://aclanthology.org/D19-1519/)
([github](https://github.com/ZihanWangKi/CrossWeigh))
- Precision: 0.9449
- Recall: 0.9403
- F1: 0.9426
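All scores above are entity-level precision, recall, and F1. For reference, CoNLL-style NER metrics are commonly computed with `seqeval` over BIO-tagged sequences; a generic sketch (not necessarily the evaluation code used for this model):
```python
from seqeval.metrics import precision_score, recall_score, f1_score

# Toy BIO sequences; a real evaluation would use the model's predictions
y_true = [["B-PER", "I-PER", "O", "B-LOC", "O"]]
y_pred = [["B-PER", "I-PER", "O", "B-ORG", "O"]]

print(precision_score(y_true, y_pred))  # 0.5: 1 of 2 predicted entities is correct
print(recall_score(y_true, y_pred))     # 0.5: 1 of 2 gold entities is recovered
print(f1_score(y_true, y_pred))         # 0.5
```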
### Framework versions
- Transformers 4.35.2
- Pytorch 2.0.1+cu117