griffio committed · Commit 86f8f62 (verified) · 1 Parent(s): ea5a78e

Model save

README.md CHANGED
@@ -3,7 +3,6 @@ library_name: transformers
 license: apache-2.0
 base_model: google/vit-large-patch16-224
 tags:
-- image-classification
 - generated_from_trainer
 datasets:
 - imagefolder
@@ -16,7 +15,7 @@ model-index:
       name: Image Classification
       type: image-classification
     dataset:
-      name: dungeon-geo-morphs
+      name: imagefolder
       type: imagefolder
       config: default
       split: validation
@@ -32,9 +31,9 @@ should probably proofread and complete it, then remove this comment. -->
 
 # vit-large-patch16-224-dungeon-geo-morphs-004
 
-This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on the dungeon-geo-morphs dataset.
+This model is a fine-tuned version of [google/vit-large-patch16-224](https://huggingface.co/google/vit-large-patch16-224) on the imagefolder dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.0950
+- Loss: 0.0895
 - Accuracy: 0.9444
 
 ## Model description
@@ -63,16 +62,16 @@ The following hyperparameters were used during training:
 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_ratio: 0.1
-- num_epochs: 30
+- num_epochs: 35
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch   | Step | Validation Loss | Accuracy |
 |:-------------:|:-------:|:----:|:---------------:|:--------:|
-| 0.7202        | 6.5714  | 10   | 0.2302          | 0.9444   |
-| 0.1296        | 13.2857 | 20   | 0.1131          | 0.9444   |
-| 0.0385        | 19.8571 | 30   | 0.0950          | 0.9444   |
+| 0.8954        | 5.7143  | 10   | 0.3421          | 0.9444   |
+| 0.2087        | 11.4286 | 20   | 0.1405          | 0.9444   |
+| 0.062         | 17.1429 | 30   | 0.0895          | 0.9444   |
 
 
 ### Framework versions
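
For context, the hyperparameters listed in the card map onto the `transformers` Trainer API roughly as sketched below. This is a minimal sketch under assumptions, not the author's training script: the data path, batch size, and learning rate are placeholders (they do not appear in this diff), and only the values visible in the card (adamw_torch, linear schedule, warmup ratio 0.1, 35 epochs, native AMP) are taken from it.

```python
# Sketch of a fine-tuning setup matching the hyperparameters in the card.
# Anything marked "placeholder" or "assumed" is not taken from the diff.
import torch
from datasets import load_dataset
from transformers import (
    AutoImageProcessor,
    AutoModelForImageClassification,
    Trainer,
    TrainingArguments,
)

# Assumes an imagefolder layout with train/ and validation/ subfolders.
dataset = load_dataset("imagefolder", data_dir="path/to/dungeon-geo-morphs")  # placeholder path
processor = AutoImageProcessor.from_pretrained("google/vit-large-patch16-224")

labels = dataset["train"].features["label"].names
model = AutoModelForImageClassification.from_pretrained(
    "google/vit-large-patch16-224",
    num_labels=len(labels),
    ignore_mismatched_sizes=True,  # re-initialise the classifier head for the new label set
)

def transform(examples):
    # Turn PIL images into the pixel_values tensors ViT expects.
    examples["pixel_values"] = [
        processor(img.convert("RGB"), return_tensors="pt")["pixel_values"][0]
        for img in examples["image"]
    ]
    return examples

dataset = dataset.with_transform(transform)

def collate_fn(examples):
    return {
        "pixel_values": torch.stack([ex["pixel_values"] for ex in examples]),
        "labels": torch.tensor([ex["label"] for ex in examples]),
    }

args = TrainingArguments(
    output_dir="vit-large-patch16-224-dungeon-geo-morphs-004",
    num_train_epochs=35,          # value after this commit (was 30)
    optim="adamw_torch",
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    fp16=True,                    # "Native AMP" mixed precision
    remove_unused_columns=False,  # keep the "image" column so the transform can run
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
    data_collator=collate_fn,
)
trainer.train()
```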
runs/Nov14_17-23-59_427600cacc83/events.out.tfevents.1731605074.427600cacc83.211.7 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:38435a1b47e5ecf6cdb7fbfd28615b9c060e19468959fb6a3c9842caa9f067ec
-size 6759
+oid sha256:4fa41bc055aa32d9780618cea0a06e1ad4e5c47013b27bc418b79a03d7326c00
+size 7107
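
Once pushed, the resulting checkpoint can be exercised with the standard image-classification pipeline. A minimal sketch follows; the repo id is only inferred from the committer name and the card title, and the input path is a placeholder, so adjust both to the actual published model and image.

```python
from transformers import pipeline

# Assumed repo id (committer + card title); replace if the published model id differs.
classifier = pipeline(
    "image-classification",
    model="griffio/vit-large-patch16-224-dungeon-geo-morphs-004",
)
print(classifier("dungeon_tile.png"))  # placeholder image path
```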