***** Test results *****
Thu Sep 22 06:41:21 2022

Task: ner
Model path: bert-base-uncased
Data path: ./data/ud/
Tokenizer: bert-base-uncased
Batch size: 32
Epoch: 6
Learning rate: 2e-05
LR Decay End Factor: 0.3
LR Decay End Epoch: 5
Sequence length: 96
Training: True
Num Threads: 24
Num Sentences: 0
Max Grad Norm: 0.0
Use GNN: False
Syntax graph style: dep
Use label weights: False
Clip value: 50

              precision    recall  f1-score   support

    CARDINAL     0.7133    0.6503    0.6803       612
        DATE     0.6922    0.7254    0.7084      1045
       EVENT     0.4429    0.3875    0.4133        80
         FAC     0.3390    0.3974    0.3659       151
         GPE     0.8456    0.8714    0.8583      1936
    LANGUAGE     0.5135    0.2468    0.3333        77
         LAW     0.4130    0.3333    0.3689        57
         LOC     0.5934    0.4977    0.5414       217
       MONEY     0.5370    0.4754    0.5043        61
        NORP     0.6211    0.7536    0.6809       422
     ORDINAL     0.8208    0.8304    0.8256       171
         ORG     0.5289    0.5869    0.5564       857
     PERCENT     0.3333    0.4722    0.3908        36
      PERSON     0.7192    0.7885    0.7523      1371
     PRODUCT     0.2705    0.3367    0.3000        98
    QUANTITY     0.3485    0.4340    0.3866        53
        SEP]     0.0000    0.0000    0.0000         0
        TIME     0.6071    0.6355    0.6210       214
 WORK_OF_ART     0.3000    0.2538    0.2750       130

   micro avg     0.6487    0.7110    0.6784      7588
   macro avg     0.5073    0.5093    0.5033      7588
weighted avg     0.6821    0.7110    0.6946      7588

Special token predictions: 0
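
Note: the label set above includes a stray "SEP]" row with zero support, which looks
like the [SEP] special-token label leaking into the evaluated tag set rather than a
real entity type. The report layout (prefix-stripped entity labels, 4-decimal scores,
micro/macro/weighted avg rows) matches what seqeval's classification_report prints;
whether this run actually used seqeval is an assumption. A minimal sketch of producing
such a report, with invented tag sequences:

    from seqeval.metrics import classification_report

    # Invented BIO tag sequences, one inner list per sentence; the real run's
    # predictions are not reproduced here.
    y_true = [["B-PERSON", "I-PERSON", "O", "B-GPE", "O"]]
    y_pred = [["B-PERSON", "I-PERSON", "O", "B-ORG", "O"]]

    # digits=4 gives the 4-decimal formatting seen above; seqeval strips the
    # B-/I- prefixes and reports one row per entity type.
    print(classification_report(y_true, y_pred, digits=4))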
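
On the three average rows: macro avg is the unweighted mean over all 19 label rows
(including the zero-support SEP] row, which drags it down), weighted avg weights each
label's score by its support, and micro avg is computed from the pooled true/false
positives and negatives, so it cannot be reproduced from the table alone. A sketch
that recomputes the two reproducible averages from the f1 column:

    # F1 and support copied from the per-label rows above.
    rows = [
        ("CARDINAL", 0.6803, 612),  ("DATE", 0.7084, 1045),   ("EVENT", 0.4133, 80),
        ("FAC", 0.3659, 151),       ("GPE", 0.8583, 1936),    ("LANGUAGE", 0.3333, 77),
        ("LAW", 0.3689, 57),        ("LOC", 0.5414, 217),     ("MONEY", 0.5043, 61),
        ("NORP", 0.6809, 422),      ("ORDINAL", 0.8256, 171), ("ORG", 0.5564, 857),
        ("PERCENT", 0.3908, 36),    ("PERSON", 0.7523, 1371), ("PRODUCT", 0.3000, 98),
        ("QUANTITY", 0.3866, 53),   ("SEP]", 0.0000, 0),      ("TIME", 0.6210, 214),
        ("WORK_OF_ART", 0.2750, 130),
    ]

    macro_f1 = sum(f for _, f, _ in rows) / len(rows)
    weighted_f1 = sum(f * s for _, f, s in rows) / sum(s for _, _, s in rows)

    # Both match the table up to rounding of the per-label values.
    print(f"macro avg f1:    {macro_f1:.4f}")     # 0.5033
    print(f"weighted avg f1: {weighted_f1:.4f}")  # 0.6946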