NLPmonster committed
Commit e4f85c9 (verified) · 1 Parent(s): 1d00af8

layoutlmv3-for-complete-receipt-understanding

README.md CHANGED
@@ -21,11 +21,11 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [NLPmonster/layoutlmv3-for-receipt-understanding](https://huggingface.co/NLPmonster/layoutlmv3-for-receipt-understanding) on an unknown dataset.
 It achieves the following results on the evaluation set:
- - Loss: 0.5246
- - Precision: 0.7795
- - Recall: 0.7867
- - F1: 0.7831
- - Accuracy: 0.8572
+ - Loss: 0.4673
+ - Precision: 0.8401
+ - Recall: 0.8399
+ - F1: 0.8400
+ - Accuracy: 0.8784
 
 ## Model description
 
@@ -50,32 +50,52 @@ The following hyperparameters were used during training:
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
- - training_steps: 1000
+ - training_steps: 2000
 
 ### Training results
 
- | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
- |:-------------:|:------:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
- | 0.2792 | 0.4425 | 50 | 0.5236 | 0.7608 | 0.7535 | 0.7571 | 0.8398 |
- | 0.3022 | 0.8850 | 100 | 0.5262 | 0.7709 | 0.7541 | 0.7624 | 0.8420 |
- | 0.2821 | 1.3274 | 150 | 0.5263 | 0.7704 | 0.7616 | 0.7660 | 0.8403 |
- | 0.2801 | 1.7699 | 200 | 0.5310 | 0.7645 | 0.7659 | 0.7652 | 0.8412 |
- | 0.2545 | 2.2124 | 250 | 0.5425 | 0.7606 | 0.7685 | 0.7645 | 0.8416 |
- | 0.2453 | 2.6549 | 300 | 0.5237 | 0.7624 | 0.7602 | 0.7613 | 0.8417 |
- | 0.2464 | 3.0973 | 350 | 0.5169 | 0.7699 | 0.7721 | 0.7710 | 0.8459 |
- | 0.2248 | 3.5398 | 400 | 0.5266 | 0.7666 | 0.7701 | 0.7683 | 0.8447 |
- | 0.2117 | 3.9823 | 450 | 0.5041 | 0.7754 | 0.7751 | 0.7753 | 0.8496 |
- | 0.1986 | 4.4248 | 500 | 0.5327 | 0.7673 | 0.7729 | 0.7701 | 0.8453 |
- | 0.1832 | 4.8673 | 550 | 0.5462 | 0.7658 | 0.7606 | 0.7632 | 0.8423 |
- | 0.1752 | 5.3097 | 600 | 0.5207 | 0.7738 | 0.7830 | 0.7783 | 0.8519 |
- | 0.1698 | 5.7522 | 650 | 0.5247 | 0.7737 | 0.7763 | 0.7750 | 0.8514 |
- | 0.1495 | 6.1947 | 700 | 0.5433 | 0.7702 | 0.7754 | 0.7727 | 0.8495 |
- | 0.1487 | 6.6372 | 750 | 0.5363 | 0.7731 | 0.7784 | 0.7757 | 0.8505 |
- | 0.1431 | 7.0796 | 800 | 0.5276 | 0.7792 | 0.7754 | 0.7773 | 0.8544 |
- | 0.1283 | 7.5221 | 850 | 0.5344 | 0.7752 | 0.7816 | 0.7784 | 0.8536 |
- | 0.1253 | 7.9646 | 900 | 0.5166 | 0.7887 | 0.7834 | 0.7861 | 0.8594 |
- | 0.1176 | 8.4071 | 950 | 0.5261 | 0.7794 | 0.7834 | 0.7814 | 0.8571 |
- | 0.1124 | 8.8496 | 1000 | 0.5246 | 0.7795 | 0.7867 | 0.7831 | 0.8572 |
+ | Training Loss | Epoch   | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
+ |:-------------:|:-------:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
+ | 1.0756 | 0.4425 | 50 | 0.5379 | 0.7401 | 0.7577 | 0.7488 | 0.8092 |
+ | 0.5502 | 0.8850 | 100 | 0.4509 | 0.7628 | 0.8035 | 0.7827 | 0.8354 |
+ | 0.4459 | 1.3274 | 150 | 0.4267 | 0.7667 | 0.8307 | 0.7974 | 0.8461 |
+ | 0.4209 | 1.7699 | 200 | 0.4030 | 0.7837 | 0.8130 | 0.7981 | 0.8476 |
+ | 0.3973 | 2.2124 | 250 | 0.3828 | 0.7930 | 0.8222 | 0.8073 | 0.8545 |
+ | 0.3421 | 2.6549 | 300 | 0.3754 | 0.8199 | 0.8060 | 0.8129 | 0.8618 |
+ | 0.3529 | 3.0973 | 350 | 0.3780 | 0.7888 | 0.8464 | 0.8166 | 0.8585 |
+ | 0.2961 | 3.5398 | 400 | 0.4031 | 0.7724 | 0.8512 | 0.8099 | 0.8493 |
+ | 0.3119 | 3.9823 | 450 | 0.3564 | 0.8111 | 0.8424 | 0.8265 | 0.8676 |
+ | 0.2629 | 4.4248 | 500 | 0.3746 | 0.7991 | 0.8427 | 0.8203 | 0.8649 |
+ | 0.2684 | 4.8673 | 550 | 0.3764 | 0.8198 | 0.8028 | 0.8112 | 0.8611 |
+ | 0.2433 | 5.3097 | 600 | 0.3752 | 0.8225 | 0.8330 | 0.8277 | 0.8684 |
+ | 0.2289 | 5.7522 | 650 | 0.3966 | 0.7908 | 0.8377 | 0.8136 | 0.8561 |
+ | 0.2141 | 6.1947 | 700 | 0.3870 | 0.8251 | 0.8175 | 0.8213 | 0.8645 |
+ | 0.2072 | 6.6372 | 750 | 0.3782 | 0.8129 | 0.8427 | 0.8275 | 0.8694 |
+ | 0.2101 | 7.0796 | 800 | 0.3758 | 0.8311 | 0.8379 | 0.8345 | 0.8743 |
+ | 0.1848 | 7.5221 | 850 | 0.3959 | 0.8063 | 0.8342 | 0.8200 | 0.8638 |
+ | 0.1787 | 7.9646 | 900 | 0.4088 | 0.8127 | 0.8360 | 0.8241 | 0.8634 |
+ | 0.1563 | 8.4071 | 950 | 0.4146 | 0.8068 | 0.8222 | 0.8144 | 0.8598 |
+ | 0.1617 | 8.8496 | 1000 | 0.3919 | 0.8220 | 0.8360 | 0.8289 | 0.8714 |
+ | 0.1498 | 9.2920 | 1050 | 0.4222 | 0.8149 | 0.8222 | 0.8186 | 0.8625 |
+ | 0.1422 | 9.7345 | 1100 | 0.4104 | 0.8188 | 0.8402 | 0.8293 | 0.8699 |
+ | 0.1341 | 10.1770 | 1150 | 0.4207 | 0.8370 | 0.8115 | 0.8241 | 0.8701 |
+ | 0.1311 | 10.6195 | 1200 | 0.4277 | 0.8401 | 0.8135 | 0.8266 | 0.8710 |
+ | 0.1239 | 11.0619 | 1250 | 0.4153 | 0.8368 | 0.8222 | 0.8295 | 0.8729 |
+ | 0.1139 | 11.5044 | 1300 | 0.4330 | 0.8272 | 0.8379 | 0.8325 | 0.8721 |
+ | 0.1126 | 11.9469 | 1350 | 0.4389 | 0.8393 | 0.8295 | 0.8344 | 0.8739 |
+ | 0.0983 | 12.3894 | 1400 | 0.4601 | 0.8362 | 0.8148 | 0.8254 | 0.8679 |
+ | 0.1027 | 12.8319 | 1450 | 0.4431 | 0.8369 | 0.8280 | 0.8324 | 0.8732 |
+ | 0.0944 | 13.2743 | 1500 | 0.4557 | 0.8253 | 0.8422 | 0.8337 | 0.8717 |
+ | 0.0866 | 13.7168 | 1550 | 0.4566 | 0.8333 | 0.8312 | 0.8323 | 0.8734 |
+ | 0.0872 | 14.1593 | 1600 | 0.4609 | 0.8390 | 0.8312 | 0.8351 | 0.8760 |
+ | 0.079 | 14.6018 | 1650 | 0.4522 | 0.8349 | 0.8357 | 0.8353 | 0.8765 |
+ | 0.0793 | 15.0442 | 1700 | 0.4590 | 0.8263 | 0.8447 | 0.8354 | 0.8740 |
+ | 0.0738 | 15.4867 | 1750 | 0.4606 | 0.8373 | 0.8275 | 0.8324 | 0.8751 |
+ | 0.0704 | 15.9292 | 1800 | 0.4553 | 0.8454 | 0.8369 | 0.8411 | 0.8812 |
+ | 0.0642 | 16.3717 | 1850 | 0.4724 | 0.8339 | 0.8424 | 0.8381 | 0.8766 |
+ | 0.0647 | 16.8142 | 1900 | 0.4670 | 0.8429 | 0.8417 | 0.8423 | 0.8812 |
+ | 0.0624 | 17.2566 | 1950 | 0.4647 | 0.8410 | 0.8402 | 0.8406 | 0.8792 |
+ | 0.0593 | 17.6991 | 2000 | 0.4673 | 0.8401 | 0.8399 | 0.8400 | 0.8784 |
 
 
 ### Framework versions
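
The updated card still has no usage snippet, so here is a minimal inference sketch for the checkpoint this commit updates. It assumes the repo id matches the commit title (NLPmonster/layoutlmv3-for-complete-receipt-understanding), that the standard LayoutLMv3 processor is used, and that Tesseract/pytesseract is installed so `apply_ocr=True` works; none of these details are stated in the diff.

```python
# Sketch only: repo id and OCR setup are assumptions, not taken from the card.
import torch
from PIL import Image
from transformers import AutoProcessor, LayoutLMv3ForTokenClassification

model_id = "NLPmonster/layoutlmv3-for-complete-receipt-understanding"  # assumed repo id

processor = AutoProcessor.from_pretrained(model_id, apply_ocr=True)  # needs pytesseract
model = LayoutLMv3ForTokenClassification.from_pretrained(model_id)

image = Image.open("receipt.png").convert("RGB")  # placeholder path
encoding = processor(image, return_tensors="pt")  # OCR extracts words + boxes

with torch.no_grad():
    outputs = model(**encoding)

pred_ids = outputs.logits.argmax(-1).squeeze().tolist()
tokens = processor.tokenizer.convert_ids_to_tokens(encoding["input_ids"][0].tolist())
labels = [model.config.id2label[i] for i in pred_ids]
print(list(zip(tokens, labels)))
```

The predicted ids map through `model.config.id2label`, which is presumably the tag set behind the precision/recall/F1 numbers reported above.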
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:cc4d57250d3535fbf00c0ae58d5c7ed6ff610e3cce0ecb0996eb74ba6fb7050a
+ oid sha256:77dfcc81c3157c0fb2d3bc4ca32a0b3fed970527df1c9aec9afda6186d5edf64
 size 503825792
runs/Oct17_15-47-04_863b3737c4c8/events.out.tfevents.1729180030.863b3737c4c8.381.0 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6e632d0fc23f9c09592003f1831657d16f34d9d240f71e0ee935f1e3f8d088eb
+ size 35524
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:c8a5fcf7412d05277926bd3f4c07ac9f9739bcaebcfb2d0f99309a28363779e2
+ oid sha256:6dff5a8ed90335d1acaa3f349c6b57da7c9d32efabbfb67780a319f7dc23db8f
 size 5240
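
For reference, `training_args.bin` is the usual Trainer artifact: a pickled `TrainingArguments` object saved with `torch.save`. Only the seed, optimizer, scheduler type, and `training_steps` are visible in the README hunk, so a quick way to recover the rest is to load the file directly; the sketch below assumes a recent PyTorch where `weights_only=False` must be passed to unpickle arbitrary objects.

```python
# Sketch: inspect the pickled TrainingArguments shipped in this commit.
import torch

args = torch.load("training_args.bin", weights_only=False)
print(args.max_steps)                       # 2000 per the updated README
print(args.learning_rate, args.per_device_train_batch_size, args.seed)
```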