---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - accuracy
model-index:
  - name: hushem_1x_deit_small_adamax_001_fold1
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: test
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.4666666666666667
---

# hushem_1x_deit_small_adamax_001_fold1

This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset; a minimal inference sketch is shown after the results below. It achieves the following results on the evaluation set:

- Loss: 2.0215
- Accuracy: 0.4667
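
The checkpoint can be used for inference with the `transformers` pipeline API. The sketch below assumes the model was pushed to the Hub under the repository id shown (inferred from the model name); the input image is a placeholder.

```python
# Minimal inference sketch; the Hub repository id is an assumption inferred
# from the model name, and "example.jpg" is a placeholder input image.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="hkivancoral/hushem_1x_deit_small_adamax_001_fold1",  # assumed repo id
)

predictions = classifier("example.jpg")  # local path, URL, or PIL image
print(predictions)  # list of {"label": ..., "score": ...} dicts, highest score first
```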

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
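
The underlying data is not documented here; the `imagefolder` dataset type in the metadata suggests the images were loaded from a class-labeled local directory, roughly as in the sketch below (the directory path is hypothetical).

```python
# Sketch of loading an image-classification dataset with the generic
# "imagefolder" builder from the datasets library; data_dir is hypothetical.
from datasets import load_dataset

dataset = load_dataset("imagefolder", data_dir="path/to/hushem_fold1")
print(dataset)  # splits and class labels are inferred from the directory layout
```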

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):

- learning_rate: 0.001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
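
A rough reconstruction of how these settings map onto `transformers.TrainingArguments` is sketched below; it is not the exact training script, and the output directory and evaluation strategy are assumptions.

```python
# Approximate TrainingArguments matching the hyperparameters listed above.
# output_dir and evaluation_strategy are assumptions, not documented in the card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="hushem_1x_deit_small_adamax_001_fold1",
    learning_rate=0.001,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
    evaluation_strategy="epoch",  # assumed: the results table shows one eval per epoch
)
```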

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 6 | 2.2870 | 0.2444 |
| 2.1668 | 2.0 | 12 | 1.4669 | 0.2444 |
| 2.1668 | 3.0 | 18 | 1.4980 | 0.2444 |
| 1.4102 | 4.0 | 24 | 1.4751 | 0.2444 |
| 1.4394 | 5.0 | 30 | 1.4286 | 0.2444 |
| 1.4394 | 6.0 | 36 | 1.6019 | 0.2444 |
| 1.3171 | 7.0 | 42 | 1.7291 | 0.2222 |
| 1.3171 | 8.0 | 48 | 1.5314 | 0.3556 |
| 1.2906 | 9.0 | 54 | 1.7281 | 0.2667 |
| 1.2151 | 10.0 | 60 | 1.6012 | 0.2444 |
| 1.2151 | 11.0 | 66 | 1.5621 | 0.4444 |
| 1.1016 | 12.0 | 72 | 1.5069 | 0.2 |
| 1.1016 | 13.0 | 78 | 1.5452 | 0.4222 |
| 1.1085 | 14.0 | 84 | 1.5457 | 0.2889 |
| 0.9838 | 15.0 | 90 | 1.7131 | 0.4 |
| 0.9838 | 16.0 | 96 | 1.9947 | 0.2889 |
| 1.003 | 17.0 | 102 | 1.7538 | 0.4222 |
| 1.003 | 18.0 | 108 | 1.3632 | 0.4444 |
| 0.846 | 19.0 | 114 | 1.7633 | 0.4 |
| 0.7432 | 20.0 | 120 | 1.5259 | 0.4222 |
| 0.7432 | 21.0 | 126 | 1.6982 | 0.4 |
| 0.8111 | 22.0 | 132 | 1.4722 | 0.4 |
| 0.8111 | 23.0 | 138 | 1.5772 | 0.4222 |
| 0.6268 | 24.0 | 144 | 1.6621 | 0.4222 |
| 0.5956 | 25.0 | 150 | 2.2283 | 0.4 |
| 0.5956 | 26.0 | 156 | 1.5965 | 0.4667 |
| 0.863 | 27.0 | 162 | 2.0067 | 0.4 |
| 0.863 | 28.0 | 168 | 2.2609 | 0.3778 |
| 0.575 | 29.0 | 174 | 1.7339 | 0.4222 |
| 0.3505 | 30.0 | 180 | 1.6059 | 0.3778 |
| 0.3505 | 31.0 | 186 | 1.7578 | 0.4444 |
| 0.3884 | 32.0 | 192 | 1.8785 | 0.4444 |
| 0.3884 | 33.0 | 198 | 1.5952 | 0.4222 |
| 0.3742 | 34.0 | 204 | 1.9834 | 0.4444 |
| 0.3113 | 35.0 | 210 | 1.8134 | 0.4222 |
| 0.3113 | 36.0 | 216 | 2.1491 | 0.4 |
| 0.4478 | 37.0 | 222 | 1.9419 | 0.4667 |
| 0.4478 | 38.0 | 228 | 1.8426 | 0.4444 |
| 0.1746 | 39.0 | 234 | 1.9349 | 0.4222 |
| 0.1737 | 40.0 | 240 | 2.0085 | 0.4667 |
| 0.1737 | 41.0 | 246 | 2.0238 | 0.4667 |
| 0.1448 | 42.0 | 252 | 2.0215 | 0.4667 |
| 0.1448 | 43.0 | 258 | 2.0215 | 0.4667 |
| 0.1495 | 44.0 | 264 | 2.0215 | 0.4667 |
| 0.1326 | 45.0 | 270 | 2.0215 | 0.4667 |
| 0.1326 | 46.0 | 276 | 2.0215 | 0.4667 |
| 0.1487 | 47.0 | 282 | 2.0215 | 0.4667 |
| 0.1487 | 48.0 | 288 | 2.0215 | 0.4667 |
| 0.1112 | 49.0 | 294 | 2.0215 | 0.4667 |
| 0.1501 | 50.0 | 300 | 2.0215 | 0.4667 |

### Framework versions

- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
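
A quick way to check that a local environment matches these versions (a convenience sketch, not part of the original card):

```python
# Print the installed versions of the libraries listed above for comparison.
import transformers, torch, datasets, tokenizers

print("Transformers:", transformers.__version__)
print("PyTorch:", torch.__version__)
print("Datasets:", datasets.__version__)
print("Tokenizers:", tokenizers.__version__)
```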