# vit-base-patch16-224-in21k-lora
This model is a fine-tuned version of google/vit-base-patch16-224-in21k on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 0.3377
- Accuracy: 0.9013
- Pca Loss: 0.7490
- Pca Accuracy: 0.8261
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.002
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
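The reported `total_train_batch_size` of 512 follows from the per-device batch size and gradient accumulation. A minimal sketch of that arithmetic (single-device training is an assumption; the card does not state the device count):

```python
# Effective (total) train batch size = per-device batch size
# * gradient accumulation steps * number of devices.
train_batch_size = 128
gradient_accumulation_steps = 4
num_devices = 1  # assumption: single GPU

total_train_batch_size = train_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)  # 512, matching the value reported above
```

This also explains why the optimizer takes roughly 97-98 steps per epoch in the table below: each step consumes 512 examples.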
### Training results
| Training Loss | Epoch  | Step | Validation Loss | Accuracy | Pca Loss | Pca Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|:--------:|:------------:|
| 1.0586        | 0.9923 | 97   | 0.5664          | 0.8731   | 1.1687   | 0.8079       |
| 0.8991        | 1.9949 | 195  | 0.4300          | 0.8861   | 0.9371   | 0.8158       |
| 0.8897        | 2.9974 | 293  | 0.3968          | 0.8924   | 0.8579   | 0.8205       |
| 0.8447        | 4.0    | 391  | 0.3713          | 0.8974   | 0.8137   | 0.823        |
| 0.6856        | 4.9923 | 488  | 0.3585          | 0.8964   | 0.7872   | 0.8236       |
| 0.8925        | 5.9949 | 586  | 0.3513          | 0.8998   | 0.7704   | 0.8252       |
| 0.8224        | 6.9974 | 684  | 0.3455          | 0.9009   | 0.7607   | 0.8242       |
| 0.8563        | 8.0    | 782  | 0.3417          | 0.9019   | 0.7535   | 0.8257       |
| 0.8198        | 8.9923 | 879  | 0.3388          | 0.9023   | 0.7501   | 0.827        |
| 0.7705        | 9.9233 | 970  | 0.3377          | 0.9013   | 0.7490   | 0.8261       |
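With `lr_scheduler_type: linear`, the learning rate decays from 0.002 toward zero over the roughly 970 optimizer steps shown above. A sketch of that schedule (warmup steps are not listed in the card, so zero warmup is assumed):

```python
def linear_lr(step, base_lr=0.002, total_steps=970, warmup_steps=0):
    """Linear schedule in the style of transformers'
    get_linear_schedule_with_warmup: ramp up over warmup_steps,
    then decay linearly to zero at total_steps."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

print(linear_lr(0))    # 0.002 at the start (no warmup)
print(linear_lr(485))  # 0.001 at the midpoint
print(linear_lr(970))  # 0.0 at the final step
```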
### Framework versions
- PEFT 0.13.0
- Transformers 4.45.1
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.0
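To reproduce this environment, the versions above can be pinned in a requirements file. This is a sketch: the package names are the standard PyPI ones, and the `+cu121` CUDA build tag on PyTorch is dropped here because it requires the matching PyTorch wheel index.

```text
peft==0.13.0
transformers==4.45.1
torch==2.4.1
datasets==3.0.1
tokenizers==0.20.0
```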