---
library_name: transformers
license: cc-by-nc-sa-4.0
base_model: NLPmonster/layoutlmv3-for-receipt-understanding
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: layoutlmv3-for-complete-receipt-understanding
results: []
---
# layoutlmv3-for-complete-receipt-understanding
This model is a fine-tuned version of [NLPmonster/layoutlmv3-for-receipt-understanding](https://huggingface.co/NLPmonster/layoutlmv3-for-receipt-understanding) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4673
- Precision: 0.8401
- Recall: 0.8399
- F1: 0.8400
- Accuracy: 0.8784
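As a quick consistency check, the reported F1 is the harmonic mean of the reported precision and recall (values copied from the list above):

```python
precision, recall = 0.8401, 0.8399

# F1 is the harmonic mean of precision and recall
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # → 0.84
```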
## Model description
This model is a further fine-tuned stage of [NLPmonster/layoutlmv3-for-receipt-understanding](https://huggingface.co/NLPmonster/layoutlmv3-for-receipt-understanding), which is itself based on LayoutLMv3, a multimodal Transformer that jointly encodes OCR text, word bounding boxes, and the document image. The token-level precision, recall, and F1 reported above suggest a token-classification head for extracting receipt fields, although the label set is not documented.
## Intended uses & limitations
The model is intended for information extraction from scanned receipts, given OCR words and their bounding boxes as input. Note the cc-by-nc-sa-4.0 license, which restricts use to non-commercial settings. Because the training data is undocumented, performance on receipts that differ in language, layout, or OCR quality from the (unknown) training distribution is not characterized.
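LayoutLMv3 processors expect word bounding boxes normalized to a 0–1000 coordinate grid, so pixel-space OCR output must be rescaled before it is passed to the model. A minimal sketch of that normalization (the helper name is illustrative, not from this repository):

```python
def normalize_bbox(bbox, width, height):
    """Scale a pixel-space box (x0, y0, x1, y1) to LayoutLMv3's 0-1000 grid."""
    x0, y0, x1, y1 = bbox
    return [
        int(1000 * x0 / width),
        int(1000 * y0 / height),
        int(1000 * x1 / width),
        int(1000 * y1 / height),
    ]

# e.g. a word box at (10, 20, 30, 40) on a 100x200 px receipt scan
print(normalize_bbox((10, 20, 30, 40), 100, 200))  # → [100, 100, 300, 200]
```

The normalized boxes are then passed alongside the OCR words (and the page image) to the model's processor.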
## Training and evaluation data
The training and evaluation datasets are not documented. From the training log below (50 optimizer steps ≈ 0.44 epochs at a train batch size of 5), the training set appears to contain roughly 565 examples.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 5
- eval_batch_size: 5
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 1.0756 | 0.4425 | 50 | 0.5379 | 0.7401 | 0.7577 | 0.7488 | 0.8092 |
| 0.5502 | 0.8850 | 100 | 0.4509 | 0.7628 | 0.8035 | 0.7827 | 0.8354 |
| 0.4459 | 1.3274 | 150 | 0.4267 | 0.7667 | 0.8307 | 0.7974 | 0.8461 |
| 0.4209 | 1.7699 | 200 | 0.4030 | 0.7837 | 0.8130 | 0.7981 | 0.8476 |
| 0.3973 | 2.2124 | 250 | 0.3828 | 0.7930 | 0.8222 | 0.8073 | 0.8545 |
| 0.3421 | 2.6549 | 300 | 0.3754 | 0.8199 | 0.8060 | 0.8129 | 0.8618 |
| 0.3529 | 3.0973 | 350 | 0.3780 | 0.7888 | 0.8464 | 0.8166 | 0.8585 |
| 0.2961 | 3.5398 | 400 | 0.4031 | 0.7724 | 0.8512 | 0.8099 | 0.8493 |
| 0.3119 | 3.9823 | 450 | 0.3564 | 0.8111 | 0.8424 | 0.8265 | 0.8676 |
| 0.2629 | 4.4248 | 500 | 0.3746 | 0.7991 | 0.8427 | 0.8203 | 0.8649 |
| 0.2684 | 4.8673 | 550 | 0.3764 | 0.8198 | 0.8028 | 0.8112 | 0.8611 |
| 0.2433 | 5.3097 | 600 | 0.3752 | 0.8225 | 0.8330 | 0.8277 | 0.8684 |
| 0.2289 | 5.7522 | 650 | 0.3966 | 0.7908 | 0.8377 | 0.8136 | 0.8561 |
| 0.2141 | 6.1947 | 700 | 0.3870 | 0.8251 | 0.8175 | 0.8213 | 0.8645 |
| 0.2072 | 6.6372 | 750 | 0.3782 | 0.8129 | 0.8427 | 0.8275 | 0.8694 |
| 0.2101 | 7.0796 | 800 | 0.3758 | 0.8311 | 0.8379 | 0.8345 | 0.8743 |
| 0.1848 | 7.5221 | 850 | 0.3959 | 0.8063 | 0.8342 | 0.8200 | 0.8638 |
| 0.1787 | 7.9646 | 900 | 0.4088 | 0.8127 | 0.8360 | 0.8241 | 0.8634 |
| 0.1563 | 8.4071 | 950 | 0.4146 | 0.8068 | 0.8222 | 0.8144 | 0.8598 |
| 0.1617 | 8.8496 | 1000 | 0.3919 | 0.8220 | 0.8360 | 0.8289 | 0.8714 |
| 0.1498 | 9.2920 | 1050 | 0.4222 | 0.8149 | 0.8222 | 0.8186 | 0.8625 |
| 0.1422 | 9.7345 | 1100 | 0.4104 | 0.8188 | 0.8402 | 0.8293 | 0.8699 |
| 0.1341 | 10.1770 | 1150 | 0.4207 | 0.8370 | 0.8115 | 0.8241 | 0.8701 |
| 0.1311 | 10.6195 | 1200 | 0.4277 | 0.8401 | 0.8135 | 0.8266 | 0.8710 |
| 0.1239 | 11.0619 | 1250 | 0.4153 | 0.8368 | 0.8222 | 0.8295 | 0.8729 |
| 0.1139 | 11.5044 | 1300 | 0.4330 | 0.8272 | 0.8379 | 0.8325 | 0.8721 |
| 0.1126 | 11.9469 | 1350 | 0.4389 | 0.8393 | 0.8295 | 0.8344 | 0.8739 |
| 0.0983 | 12.3894 | 1400 | 0.4601 | 0.8362 | 0.8148 | 0.8254 | 0.8679 |
| 0.1027 | 12.8319 | 1450 | 0.4431 | 0.8369 | 0.8280 | 0.8324 | 0.8732 |
| 0.0944 | 13.2743 | 1500 | 0.4557 | 0.8253 | 0.8422 | 0.8337 | 0.8717 |
| 0.0866 | 13.7168 | 1550 | 0.4566 | 0.8333 | 0.8312 | 0.8323 | 0.8734 |
| 0.0872 | 14.1593 | 1600 | 0.4609 | 0.8390 | 0.8312 | 0.8351 | 0.8760 |
| 0.079 | 14.6018 | 1650 | 0.4522 | 0.8349 | 0.8357 | 0.8353 | 0.8765 |
| 0.0793 | 15.0442 | 1700 | 0.4590 | 0.8263 | 0.8447 | 0.8354 | 0.8740 |
| 0.0738 | 15.4867 | 1750 | 0.4606 | 0.8373 | 0.8275 | 0.8324 | 0.8751 |
| 0.0704 | 15.9292 | 1800 | 0.4553 | 0.8454 | 0.8369 | 0.8411 | 0.8812 |
| 0.0642 | 16.3717 | 1850 | 0.4724 | 0.8339 | 0.8424 | 0.8381 | 0.8766 |
| 0.0647 | 16.8142 | 1900 | 0.4670 | 0.8429 | 0.8417 | 0.8423 | 0.8812 |
| 0.0624 | 17.2566 | 1950 | 0.4647 | 0.8410 | 0.8402 | 0.8406 | 0.8792 |
| 0.0593 | 17.6991 | 2000 | 0.4673 | 0.8401 | 0.8399 | 0.8400 | 0.8784 |
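Although the dataset itself is undocumented, its approximate size can be recovered from the log above, since the epoch column advances by ~0.4425 every 50 optimizer steps at a train batch size of 5:

```python
batch_size = 5
steps = 50
epochs = 0.4425  # epoch count reached after the first 50 steps (from the table)

# samples seen per epoch = batch_size * steps / epochs
samples_per_epoch = batch_size * steps / epochs
print(round(samples_per_epoch))  # → 565
```

So the 2000 training steps correspond to roughly 17.7 passes over a ~565-example training set, consistent with the final epoch value in the table.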
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1