sajjadi committed (verified)
Commit 330fc6a · Parent(s): dacc4a4

Model save

Files changed (2):
1. README.md (+2 -25)
2. adapter_model.safetensors (+1 -1)
README.md CHANGED
@@ -2,8 +2,6 @@
 base_model: google/vit-base-patch16-224-in21k
 library_name: peft
 license: apache-2.0
-metrics:
-- accuracy
 tags:
 - generated_from_trainer
 model-index:
@@ -14,16 +12,11 @@ model-index:
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/sajjadi/Fast-PEFT/runs/odh4f4wu)
-[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/sajjadi/Fast-PEFT/runs/odh4f4wu)
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/sajjadi/Fast-PEFT/runs/tgqidpek)
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/sajjadi/Fast-PEFT/runs/tgqidpek)
 # vit-base-patch16-224-in21k-lora
 
 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
-It achieves the following results on the evaluation set:
-- Loss: 0.3331
-- Accuracy: 0.9056
-- Pca Pca Loss: 0.7451
-- Pca Pca Accuracy: 0.8259
 
 ## Model description
 
@@ -53,22 +46,6 @@ The following hyperparameters were used during training:
 - num_epochs: 10
 - mixed_precision_training: Native AMP
 
-### Training results
-
-| Training Loss | Epoch | Step | Validation Loss | Accuracy | Pca Loss | Pca Accuracy |
-|:-------------:|:------:|:----:|:---------------:|:--------:|:--------:|:------------:|
-| 1.0483 | 0.9923 | 97 | 0.5685 | 0.873 | 1.1522 | 0.8107 |
-| 0.9044 | 1.9949 | 195 | 0.4295 | 0.8893 | 0.9273 | 0.818 |
-| 0.8868 | 2.9974 | 293 | 0.3884 | 0.8957 | 0.8502 | 0.8201 |
-| 0.8951 | 4.0 | 391 | 0.3675 | 0.8973 | 0.8084 | 0.8234 |
-| 0.6836 | 4.9923 | 488 | 0.3538 | 0.8987 | 0.7831 | 0.824 |
-| 0.9149 | 5.9949 | 586 | 0.3447 | 0.9013 | 0.7664 | 0.8239 |
-| 0.8324 | 6.9974 | 684 | 0.3408 | 0.9025 | 0.7561 | 0.8254 |
-| 0.8634 | 8.0 | 782 | 0.3368 | 0.9041 | 0.7499 | 0.825 |
-| 0.8308 | 8.9923 | 879 | 0.3339 | 0.9053 | 0.7463 | 0.8261 |
-| 0.7785 | 9.9233 | 970 | 0.3331 | 0.9056 | 0.7451 | 0.8259 |
-
-
 ### Framework versions
 
 - PEFT 0.13.0
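The card describes a LoRA adapter (PEFT 0.13.0) for the ViT base checkpoint. A minimal sketch of loading such an adapter follows; the hub repo id and label count are assumptions for illustration, not stated in this commit:

```python
# Sketch: loading a LoRA adapter on top of google/vit-base-patch16-224-in21k.
# Assumed, not taken from the diff: the repo id
# "sajjadi/vit-base-patch16-224-in21k-lora" and num_labels.
from transformers import AutoModelForImageClassification
from peft import PeftModel

base = AutoModelForImageClassification.from_pretrained(
    "google/vit-base-patch16-224-in21k",
    num_labels=100,  # assumption; must match the head used during fine-tuning
)
# Applies the weights stored in adapter_model.safetensors to the base model.
model = PeftModel.from_pretrained(base, "sajjadi/vit-base-patch16-224-in21k-lora")
model.eval()
```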
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:e42d86d759a7759af6465b42c461c5fa15cea4e08e935b496395e7906ebbfb29
+oid sha256:6781ed1d21f611c841560e5440dc6aee5d87c62125cac9c3c0287d00db87c346
 size 384920
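Only the Git LFS pointer changes here: the adapter weights themselves live in LFS storage, addressed by their SHA-256. A downloaded file can be checked against the pointer as sketched below; the local path is an assumption:

```python
# Sketch: verify a downloaded adapter_model.safetensors against the
# LFS pointer shown above (oid sha256:... and size).
import hashlib
from pathlib import Path

data = Path("adapter_model.safetensors").read_bytes()  # assumed local path

expected_oid = "6781ed1d21f611c841560e5440dc6aee5d87c62125cac9c3c0287d00db87c346"
expected_size = 384920

assert len(data) == expected_size, "size does not match the LFS pointer"
assert hashlib.sha256(data).hexdigest() == expected_oid, "sha256 mismatch"
print("file matches the LFS pointer")
```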