---
license: apache-2.0
base_model: facebook/deit-tiny-patch16-224
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - accuracy
model-index:
  - name: hushem_1x_deit_tiny_sgd_lr0001_fold1
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: test
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.24444444444444444
---

# hushem_1x_deit_tiny_sgd_lr0001_fold1

This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:

- Loss: 1.5010
- Accuracy: 0.2444
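
The snippet below is a minimal inference sketch, assuming the checkpoint is published under the repo id `hkivancoral/hushem_1x_deit_tiny_sgd_lr0001_fold1` (inferred from the model name; substitute a local checkpoint path if needed) and using a placeholder image path:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Repo id inferred from the model name above; replace with a local path if needed.
model_id = "hkivancoral/hushem_1x_deit_tiny_sgd_lr0001_fold1"

processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)
model.eval()

image = Image.open("example.jpg").convert("RGB")  # placeholder image path
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

pred = logits.argmax(-1).item()
print(model.config.id2label[pred])
```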

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
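
Dataset specifics are not documented here; the card only records that an `imagefolder`-type dataset was used. As a reference point, a folder-per-class dataset of that type is typically loaded with 🤗 Datasets as sketched below (the directory path is a placeholder):

```python
from datasets import load_dataset

# "path/to/hushem_fold1" is a placeholder; the loader expects one sub-folder per class.
dataset = load_dataset("imagefolder", data_dir="path/to/hushem_fold1")

print(dataset)                             # available splits
print(dataset["train"].features["label"])  # class names inferred from folder names
```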

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the `TrainingArguments` sketch after the list):

- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
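
The listed values map onto `transformers.TrainingArguments` roughly as sketched below; `num_classes`, `train_ds`, `eval_ds`, `collate_fn`, and `compute_metrics` are placeholders for objects prepared elsewhere, and the per-epoch evaluation/logging strategy is an assumption based on the results table that follows:

```python
from transformers import (
    AutoModelForImageClassification,
    Trainer,
    TrainingArguments,
)

model = AutoModelForImageClassification.from_pretrained(
    "facebook/deit-tiny-patch16-224",
    num_labels=num_classes,        # placeholder: class count of the imagefolder dataset
    ignore_mismatched_sizes=True,  # swap out the 1000-class ImageNet head
)

training_args = TrainingArguments(
    output_dir="hushem_1x_deit_tiny_sgd_lr0001_fold1",
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
    evaluation_strategy="epoch",  # assumption: evaluated once per epoch, as in the results table
    logging_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_ds,           # placeholder
    eval_dataset=eval_ds,             # placeholder
    data_collator=collate_fn,         # placeholder
    compute_metrics=compute_metrics,  # placeholder: accuracy
)
trainer.train()
```

The Adam settings listed above match the default `Trainer` optimizer, so the sketch does not pass an explicit `optim` argument.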

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 1.0   | 6    | 1.5317          | 0.2667   |
| 1.5768        | 2.0   | 12   | 1.5295          | 0.2444   |
| 1.5768        | 3.0   | 18   | 1.5277          | 0.2444   |
| 1.5217        | 4.0   | 24   | 1.5259          | 0.2444   |
| 1.5806        | 5.0   | 30   | 1.5243          | 0.2444   |
| 1.5806        | 6.0   | 36   | 1.5226          | 0.2444   |
| 1.5608        | 7.0   | 42   | 1.5211          | 0.2444   |
| 1.5608        | 8.0   | 48   | 1.5198          | 0.2444   |
| 1.5538        | 9.0   | 54   | 1.5184          | 0.2444   |
| 1.5354        | 10.0  | 60   | 1.5172          | 0.2444   |
| 1.5354        | 11.0  | 66   | 1.5159          | 0.2444   |
| 1.5529        | 12.0  | 72   | 1.5148          | 0.2444   |
| 1.5529        | 13.0  | 78   | 1.5137          | 0.2444   |
| 1.5094        | 14.0  | 84   | 1.5127          | 0.2444   |
| 1.5228        | 15.0  | 90   | 1.5118          | 0.2444   |
| 1.5228        | 16.0  | 96   | 1.5108          | 0.2444   |
| 1.5295        | 17.0  | 102  | 1.5100          | 0.2444   |
| 1.5295        | 18.0  | 108  | 1.5092          | 0.2444   |
| 1.5298        | 19.0  | 114  | 1.5084          | 0.2444   |
| 1.5372        | 20.0  | 120  | 1.5077          | 0.2444   |
| 1.5372        | 21.0  | 126  | 1.5071          | 0.2444   |
| 1.5336        | 22.0  | 132  | 1.5065          | 0.2444   |
| 1.5336        | 23.0  | 138  | 1.5059          | 0.2444   |
| 1.5077        | 24.0  | 144  | 1.5053          | 0.2444   |
| 1.5022        | 25.0  | 150  | 1.5049          | 0.2444   |
| 1.5022        | 26.0  | 156  | 1.5044          | 0.2444   |
| 1.5158        | 27.0  | 162  | 1.5040          | 0.2444   |
| 1.5158        | 28.0  | 168  | 1.5036          | 0.2444   |
| 1.4961        | 29.0  | 174  | 1.5032          | 0.2444   |
| 1.5155        | 30.0  | 180  | 1.5029          | 0.2444   |
| 1.5155        | 31.0  | 186  | 1.5025          | 0.2444   |
| 1.5093        | 32.0  | 192  | 1.5022          | 0.2444   |
| 1.5093        | 33.0  | 198  | 1.5020          | 0.2444   |
| 1.4596        | 34.0  | 204  | 1.5017          | 0.2444   |
| 1.4894        | 35.0  | 210  | 1.5015          | 0.2444   |
| 1.4894        | 36.0  | 216  | 1.5014          | 0.2444   |
| 1.5058        | 37.0  | 222  | 1.5012          | 0.2444   |
| 1.5058        | 38.0  | 228  | 1.5011          | 0.2444   |
| 1.4675        | 39.0  | 234  | 1.5010          | 0.2444   |
| 1.4822        | 40.0  | 240  | 1.5010          | 0.2444   |
| 1.4822        | 41.0  | 246  | 1.5010          | 0.2444   |
| 1.5008        | 42.0  | 252  | 1.5010          | 0.2444   |
| 1.5008        | 43.0  | 258  | 1.5010          | 0.2444   |
| 1.5075        | 44.0  | 264  | 1.5010          | 0.2444   |
| 1.5338        | 45.0  | 270  | 1.5010          | 0.2444   |
| 1.5338        | 46.0  | 276  | 1.5010          | 0.2444   |
| 1.5016        | 47.0  | 282  | 1.5010          | 0.2444   |
| 1.5016        | 48.0  | 288  | 1.5010          | 0.2444   |
| 1.4777        | 49.0  | 294  | 1.5010          | 0.2444   |
| 1.4813        | 50.0  | 300  | 1.5010          | 0.2444   |

### Framework versions

- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1