Roberta-Base-CoNLL2003
This model is a fine-tuned version of roberta-base on the conll2003 dataset.
Model Usage
We built and used our own tokenizer trained with BPE-Dropout, so AutoTokenizer cannot be used. However, if subword regularization is not needed, the stock RobertaTokenizer can be used as a substitute.
Example and Tokenizer Repository: github
from transformers import RobertaTokenizer, AutoModelForTokenClassification
from transformers import pipeline

# Load the fine-tuned model with the stock RobertaTokenizer
tokenizer = RobertaTokenizer.from_pretrained("4ldk/Roberta-Base-CoNLL2003")
model = AutoModelForTokenClassification.from_pretrained("4ldk/Roberta-Base-CoNLL2003")

# Build an NER pipeline that merges subword predictions into whole entities
nlp = pipeline("ner", model=model, tokenizer=tokenizer, grouped_entities=True)

example = "My name is Philipp and I live in Germany"
nlp(example)
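With grouped_entities=True, the pipeline returns whole entities rather than individual subword tokens, e.g. Philipp as a person and Germany as a location for the example above. Note that on recent transformers releases this flag is deprecated in favor of aggregation_strategy="simple", which behaves the same way here.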
Training procedure
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-5
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW with betas=(0.9,0.999), epsilon=1e-08, and weight decay=0.01
- lr_scheduler_type: linear with warmup rate = 0.1
- num_epochs: 20
- subword regularization p = 0.0 (i.e., trained without subword regularization)
In addition, each training sentence is concatenated with the sentences that follow it in the original dataset, so the training data cannot be reproduced from the CoNLL-2003 dataset published on Hugging Face.
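The training code itself lives in the linked GitHub repository. As a rough, unofficial sketch, the hyperparameters above would map onto standard Hugging Face TrainingArguments as follows (the output_dir name is an arbitrary placeholder, and the original script may configure things differently, especially the custom tokenizer and the following-sentence context):

from transformers import TrainingArguments

# Hedged sketch only: standard TrainingArguments fields chosen to mirror the
# hyperparameters listed above; the authors' actual training script may differ.
training_args = TrainingArguments(
    output_dir="roberta-base-conll2003",  # arbitrary placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    weight_decay=0.01,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=20,
)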
Training results
CoNLL-2003
It achieves the following results on the evaluation set:
- Precision: 0.9707
- Recall: 0.9636
- F1: 0.9671
It achieves the following results on the test set:
- Precision: 0.9352
- Recall: 0.9218
- F1: 0.9285
CoNLLpp (2023)
Do CoNLL-2003 Named Entity Taggers Still Work Well in 2023? (github)
- Precision: 0.9244
- Recall: 0.9225
- F1: 0.9235
CoNLLpp (CrossWeigh)
CrossWeigh: Training Named Entity Tagger from Imperfect Annotations (github)
- Precision: 0.9449
- Recall: 0.9403
- F1: 0.9426
Framework versions
- Transformers 4.35.2
- Pytorch 2.0.1+cu117