---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-in21k-finetuned-lf-invalidation
  results:
  - task:
      name: Image Classification
      type: image-classification
    dataset:
      name: imagefolder
      type: imagefolder
      config: default
      split: test
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.951063829787234
---

# vit-base-patch16-224-in21k-finetuned-lf-invalidation
This model is a fine-tuned version of google/vit-base-patch16-224-in21k on the imagefolder dataset. It achieves the following results on the evaluation set:
- Loss: 0.1798
- Accuracy: 0.9511
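A minimal inference sketch using the `transformers` image-classification pipeline. The `model_id` is an assumption based on the model name above — point it at wherever this checkpoint is actually hosted (or a local directory with the weights); the `top_prediction` helper simply picks the highest-scoring label from the pipeline output.

```python
from typing import Dict, List


def top_prediction(scores: List[Dict]) -> Dict:
    """Return the highest-scoring entry from a pipeline output list."""
    return max(scores, key=lambda d: d["score"])


def classify(image_path: str, model_id: str) -> Dict:
    """Classify one image with the fine-tuned checkpoint.

    `model_id` is hypothetical here; replace it with the actual hub repo id
    or a local path containing this model's weights.
    """
    from transformers import pipeline

    clf = pipeline("image-classification", model=model_id)
    # The pipeline accepts a local path, a URL, or a PIL.Image.
    return top_prediction(clf(image_path))
```

For example, `classify("example.jpg", "your-org/vit-base-patch16-224-in21k-finetuned-lf-invalidation")` returns a dict like `{"label": ..., "score": ...}` for the top class.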
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
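The settings above can be sanity-checked with a little arithmetic: one optimizer update consumes `train_batch_size * gradient_accumulation_steps` examples, which is where the listed `total_train_batch_size` of 64 comes from, and with `warmup_ratio: 0.1` the linear scheduler warms up over the first 10% of optimizer steps.

```python
# Arithmetic check of the hyperparameters listed above.
train_batch_size = 16
gradient_accumulation_steps = 4

# One optimizer update consumes batch_size * accumulation_steps examples.
effective_batch_size = train_batch_size * gradient_accumulation_steps
assert effective_batch_size == 64  # matches "total_train_batch_size: 64"

# With warmup_ratio 0.1, warmup covers the first 10% of optimizer steps;
# for the 600 steps logged in the results table below, that is 60 steps.
warmup_steps = int(0.1 * 600)
print(effective_batch_size, warmup_steps)  # 64 60
```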
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------|:------|:-----|:----------------|:---------|
0.6773 | 0.9796 | 12 | 0.6550 | 0.5681 |
0.5982 | 1.9592 | 24 | 0.5839 | 0.6362 |
0.479 | 2.9388 | 36 | 0.4356 | 0.8894 |
0.3862 | 4.0 | 49 | 0.2807 | 0.9362 |
0.2498 | 4.9796 | 61 | 0.2599 | 0.9128 |
0.2836 | 5.9592 | 73 | 0.5015 | 0.7745 |
0.2641 | 6.9388 | 85 | 0.5500 | 0.7340 |
0.2716 | 8.0 | 98 | 0.3083 | 0.8787 |
0.2382 | 8.9796 | 110 | 0.2885 | 0.8936 |
0.1985 | 9.9592 | 122 | 0.1798 | 0.9511 |
0.2174 | 10.9388 | 134 | 0.3060 | 0.8766 |
0.2372 | 12.0 | 147 | 0.3084 | 0.8702 |
0.2164 | 12.9796 | 159 | 0.2667 | 0.9021 |
0.2106 | 13.9592 | 171 | 0.3747 | 0.8447 |
0.1956 | 14.9388 | 183 | 0.5105 | 0.7851 |
0.2154 | 16.0 | 196 | 0.5683 | 0.7787 |
0.179 | 16.9796 | 208 | 0.4279 | 0.8340 |
0.2548 | 17.9592 | 220 | 0.6493 | 0.7404 |
0.236 | 18.9388 | 232 | 0.3860 | 0.8340 |
0.2121 | 20.0 | 245 | 0.5826 | 0.7766 |
0.1691 | 20.9796 | 257 | 0.3195 | 0.8638 |
0.1824 | 21.9592 | 269 | 0.3772 | 0.8404 |
0.1733 | 22.9388 | 281 | 0.5182 | 0.7936 |
0.1837 | 24.0 | 294 | 0.4924 | 0.8149 |
0.1274 | 24.9796 | 306 | 0.3895 | 0.8447 |
0.1415 | 25.9592 | 318 | 0.3662 | 0.8532 |
0.186 | 26.9388 | 330 | 0.4347 | 0.8447 |
0.1403 | 28.0 | 343 | 0.4490 | 0.8383 |
0.1635 | 28.9796 | 355 | 0.7771 | 0.7085 |
0.2135 | 29.9592 | 367 | 0.3503 | 0.8702 |
0.1456 | 30.9388 | 379 | 0.3815 | 0.8617 |
0.1634 | 32.0 | 392 | 0.2810 | 0.9 |
0.1308 | 32.9796 | 404 | 0.4643 | 0.8383 |
0.163 | 33.9592 | 416 | 0.3337 | 0.8787 |
0.1736 | 34.9388 | 428 | 0.4070 | 0.8553 |
0.1638 | 36.0 | 441 | 0.4142 | 0.8574 |
0.1488 | 36.9796 | 453 | 0.5039 | 0.8170 |
0.148 | 37.9592 | 465 | 0.5767 | 0.7745 |
0.1741 | 38.9388 | 477 | 0.4842 | 0.8255 |
0.1338 | 40.0 | 490 | 0.7236 | 0.7234 |
0.1302 | 40.9796 | 502 | 0.5295 | 0.8043 |
0.141 | 41.9592 | 514 | 0.5294 | 0.8085 |
0.1461 | 42.9388 | 526 | 0.5485 | 0.7979 |
0.1006 | 44.0 | 539 | 0.5453 | 0.7915 |
0.1317 | 44.9796 | 551 | 0.5930 | 0.7681 |
0.1069 | 45.9592 | 563 | 0.4976 | 0.8170 |
0.1531 | 46.9388 | 575 | 0.5105 | 0.8064 |
0.155 | 48.0 | 588 | 0.6128 | 0.7638 |
0.1237 | 48.9796 | 600 | 0.6180 | 0.7617 |
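The headline numbers (Loss 0.1798 / Accuracy 0.9511) correspond to the row with the lowest validation loss, at step 122 — validation loss generally climbs after that point, so the best checkpoint lands early in the run. A quick scan over a few rows transcribed from the table confirms this:

```python
# A few (epoch, step, val_loss, accuracy) rows transcribed from the table above.
log = [
    (4.9796, 61, 0.2599, 0.9128),
    (9.9592, 122, 0.1798, 0.9511),
    (12.9796, 159, 0.2667, 0.9021),
    (48.9796, 600, 0.6180, 0.7617),
]

# Select the row with the lowest validation loss.
best = min(log, key=lambda row: row[2])
print(best)  # (9.9592, 122, 0.1798, 0.9511)
```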
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.2.0
- Tokenizers 0.19.1