---
license: apache-2.0
base_model: facebook/deit-tiny-patch16-224
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - accuracy
model-index:
  - name: hushem_1x_deit_tiny_sgd_lr0001_fold5
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: test
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.14634146341463414
---

hushem_1x_deit_tiny_sgd_lr0001_fold5

This model is a fine-tuned version of facebook/deit-tiny-patch16-224 on the imagefolder dataset. It achieves the following results on the evaluation set (a brief usage sketch follows the results):

  • Loss: 1.5789
  • Accuracy: 0.1463
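
A minimal usage sketch, assuming the checkpoint is published on the Hub as hkivancoral/hushem_1x_deit_tiny_sgd_lr0001_fold5 (a local checkpoint directory works the same way); image.jpg is a placeholder path:

```python
from transformers import pipeline

# Repo id and image path below are assumptions, not taken from this card.
classifier = pipeline(
    "image-classification",
    model="hkivancoral/hushem_1x_deit_tiny_sgd_lr0001_fold5",
)

predictions = classifier("image.jpg")
for p in predictions:
    print(f"{p['label']}: {p['score']:.4f}")
```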

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch follows the list):

  • learning_rate: 0.0001
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 50
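
For reference, a minimal sketch of how the settings above could be passed to the Transformers TrainingArguments API; output_dir is a placeholder, and the Adam betas/epsilon listed above are the Trainer defaults, so they are not set explicitly:

```python
from transformers import TrainingArguments

# Values mirror the hyperparameter list above; output_dir is an assumption.
training_args = TrainingArguments(
    output_dir="hushem_1x_deit_tiny_sgd_lr0001_fold5",
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
    evaluation_strategy="epoch",  # assumption: the results table reports per-epoch validation
)
```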

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 1.0   | 6    | 1.6455          | 0.1220   |
| 1.6035        | 2.0   | 12   | 1.6420          | 0.1220   |
| 1.6035        | 3.0   | 18   | 1.6386          | 0.1463   |
| 1.6142        | 4.0   | 24   | 1.6353          | 0.1463   |
| 1.5857        | 5.0   | 30   | 1.6321          | 0.1463   |
| 1.5857        | 6.0   | 36   | 1.6289          | 0.1463   |
| 1.5718        | 7.0   | 42   | 1.6259          | 0.1463   |
| 1.5718        | 8.0   | 48   | 1.6232          | 0.1463   |
| 1.5833        | 9.0   | 54   | 1.6206          | 0.1463   |
| 1.5737        | 10.0  | 60   | 1.6178          | 0.1463   |
| 1.5737        | 11.0  | 66   | 1.6153          | 0.1463   |
| 1.5614        | 12.0  | 72   | 1.6128          | 0.1220   |
| 1.5614        | 13.0  | 78   | 1.6104          | 0.1220   |
| 1.5648        | 14.0  | 84   | 1.6081          | 0.1220   |
| 1.5575        | 15.0  | 90   | 1.6060          | 0.1220   |
| 1.5575        | 16.0  | 96   | 1.6040          | 0.1220   |
| 1.5452        | 17.0  | 102  | 1.6020          | 0.1220   |
| 1.5452        | 18.0  | 108  | 1.6002          | 0.1220   |
| 1.5768        | 19.0  | 114  | 1.5984          | 0.1220   |
| 1.5464        | 20.0  | 120  | 1.5966          | 0.1220   |
| 1.5464        | 21.0  | 126  | 1.5950          | 0.1220   |
| 1.5149        | 22.0  | 132  | 1.5934          | 0.1220   |
| 1.5149        | 23.0  | 138  | 1.5920          | 0.1220   |
| 1.6056        | 24.0  | 144  | 1.5905          | 0.1220   |
| 1.5161        | 25.0  | 150  | 1.5892          | 0.1220   |
| 1.5161        | 26.0  | 156  | 1.5879          | 0.1220   |
| 1.519         | 27.0  | 162  | 1.5868          | 0.1220   |
| 1.519         | 28.0  | 168  | 1.5857          | 0.1220   |
| 1.5531        | 29.0  | 174  | 1.5848          | 0.1220   |
| 1.5347        | 30.0  | 180  | 1.5839          | 0.1220   |
| 1.5347        | 31.0  | 186  | 1.5831          | 0.1220   |
| 1.5238        | 32.0  | 192  | 1.5824          | 0.1220   |
| 1.5238        | 33.0  | 198  | 1.5817          | 0.1463   |
| 1.5463        | 34.0  | 204  | 1.5811          | 0.1463   |
| 1.5219        | 35.0  | 210  | 1.5805          | 0.1463   |
| 1.5219        | 36.0  | 216  | 1.5800          | 0.1463   |
| 1.5056        | 37.0  | 222  | 1.5797          | 0.1463   |
| 1.5056        | 38.0  | 228  | 1.5794          | 0.1463   |
| 1.5505        | 39.0  | 234  | 1.5791          | 0.1463   |
| 1.5261        | 40.0  | 240  | 1.5790          | 0.1463   |
| 1.5261        | 41.0  | 246  | 1.5789          | 0.1463   |
| 1.5175        | 42.0  | 252  | 1.5789          | 0.1463   |
| 1.5175        | 43.0  | 258  | 1.5789          | 0.1463   |
| 1.5317        | 44.0  | 264  | 1.5789          | 0.1463   |
| 1.5241        | 45.0  | 270  | 1.5789          | 0.1463   |
| 1.5241        | 46.0  | 276  | 1.5789          | 0.1463   |
| 1.5533        | 47.0  | 282  | 1.5789          | 0.1463   |
| 1.5533        | 48.0  | 288  | 1.5789          | 0.1463   |
| 1.4945        | 49.0  | 294  | 1.5789          | 0.1463   |
| 1.5379        | 50.0  | 300  | 1.5789          | 0.1463   |
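
A hedged sketch of how the evaluation accuracy could be recomputed on an imagefolder-style test split; the Hub repo id and the data_dir path are assumptions, and the evaluate package is an extra dependency:

```python
import torch
import evaluate
from datasets import load_dataset
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Placeholder paths/ids; point data_dir at a folder with one subfolder per class.
repo_id = "hkivancoral/hushem_1x_deit_tiny_sgd_lr0001_fold5"
dataset = load_dataset("imagefolder", data_dir="path/to/test", split="train")
processor = AutoImageProcessor.from_pretrained(repo_id)
model = AutoModelForImageClassification.from_pretrained(repo_id)
model.eval()

accuracy = evaluate.load("accuracy")
predictions, references = [], []
for example in dataset:
    inputs = processor(example["image"], return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    predictions.append(int(logits.argmax(-1)))
    references.append(example["label"])

print(accuracy.compute(predictions=predictions, references=references))
```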

Framework versions

  • Transformers 4.35.0
  • Pytorch 2.1.0+cu118
  • Datasets 2.14.6
  • Tokenizers 0.14.1