metadata
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - accuracy
model-index:
  - name: smids_1x_deit_small_rms_0001_fold1
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: test
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.6811352253756261

smids_1x_deit_small_rms_0001_fold1

This model is a fine-tuned version of facebook/deit-small-patch16-224 on the imagefolder dataset. It achieves the following results on the evaluation set:

  • Loss: 0.7301
  • Accuracy: 0.6811

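For quick experimentation, the checkpoint can be loaded through the Transformers image-classification classes. The sketch below is illustrative only: the repo id hkivancoral/smids_1x_deit_small_rms_0001_fold1 is inferred from this card rather than verified, and example.png is a placeholder input.

```python
# Minimal usage sketch; the repo id below is inferred from this card and may differ.
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo_id = "hkivancoral/smids_1x_deit_small_rms_0001_fold1"  # assumed hosting location
processor = AutoImageProcessor.from_pretrained(repo_id)
model = AutoModelForImageClassification.from_pretrained(repo_id)
model.eval()

image = Image.open("example.png").convert("RGB")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

pred = logits.argmax(-1).item()
print(model.config.id2label[pred])
```
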
Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the sketch after this list):

  • learning_rate: 0.001
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 50
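
As a minimal, non-authoritative sketch, these values can be expressed with the Trainer's TrainingArguments. The output directory name and the evaluation/logging strategies are assumptions, and the Adam betas and epsilon listed above are simply the Trainer defaults.

```python
# Sketch of the hyperparameters above expressed as TrainingArguments.
# output_dir and the evaluation/logging strategies are assumptions;
# Adam betas=(0.9, 0.999) and epsilon=1e-08 are the Trainer defaults.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="smids_1x_deit_small_rms_0001_fold1",  # assumed output directory
    learning_rate=1e-3,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
    evaluation_strategy="epoch",  # assumption, consistent with the per-epoch table below
    logging_strategy="epoch",
)
```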

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.1511 | 1.0 | 76 | 1.0039 | 0.4290 |
| 0.9396 | 2.0 | 152 | 0.9980 | 0.4658 |
| 0.9392 | 3.0 | 228 | 1.1592 | 0.3239 |
| 0.9832 | 4.0 | 304 | 1.0157 | 0.4791 |
| 0.9342 | 5.0 | 380 | 0.9184 | 0.4725 |
| 0.951 | 6.0 | 456 | 0.9262 | 0.4958 |
| 1.1061 | 7.0 | 532 | 1.1999 | 0.3406 |
| 0.8983 | 8.0 | 608 | 1.5626 | 0.4207 |
| 0.8399 | 9.0 | 684 | 0.8862 | 0.5242 |
| 0.7906 | 10.0 | 760 | 2.9194 | 0.3255 |
| 0.9054 | 11.0 | 836 | 0.8409 | 0.5476 |
| 0.8842 | 12.0 | 912 | 0.8563 | 0.5409 |
| 0.8173 | 13.0 | 988 | 0.9009 | 0.4958 |
| 0.8653 | 14.0 | 1064 | 0.8617 | 0.5476 |
| 0.7859 | 15.0 | 1140 | 0.8470 | 0.5109 |
| 0.7904 | 16.0 | 1216 | 0.8290 | 0.6027 |
| 0.8076 | 17.0 | 1292 | 1.0668 | 0.5326 |
| 0.7582 | 18.0 | 1368 | 0.8092 | 0.5776 |
| 0.8375 | 19.0 | 1444 | 0.8034 | 0.5927 |
| 0.817 | 20.0 | 1520 | 0.8094 | 0.5593 |
| 0.7636 | 21.0 | 1596 | 0.8786 | 0.6060 |
| 0.7574 | 22.0 | 1672 | 0.7805 | 0.6093 |
| 0.7196 | 23.0 | 1748 | 0.8013 | 0.6227 |
| 0.746 | 24.0 | 1824 | 0.9940 | 0.5492 |
| 0.698 | 25.0 | 1900 | 0.7894 | 0.6227 |
| 0.7416 | 26.0 | 1976 | 0.7704 | 0.6177 |
| 0.7441 | 27.0 | 2052 | 0.7868 | 0.6110 |
| 0.7488 | 28.0 | 2128 | 0.7854 | 0.6294 |
| 0.6844 | 29.0 | 2204 | 0.7483 | 0.6394 |
| 0.7046 | 30.0 | 2280 | 0.7522 | 0.6144 |
| 0.7612 | 31.0 | 2356 | 0.7237 | 0.6811 |
| 0.7095 | 32.0 | 2432 | 0.7781 | 0.6060 |
| 0.7219 | 33.0 | 2508 | 0.7248 | 0.6477 |
| 0.7697 | 34.0 | 2584 | 0.7404 | 0.6394 |
| 0.7924 | 35.0 | 2660 | 0.7779 | 0.6077 |
| 0.6939 | 36.0 | 2736 | 0.7018 | 0.6628 |
| 0.7175 | 37.0 | 2812 | 0.7115 | 0.6711 |
| 0.663 | 38.0 | 2888 | 0.7095 | 0.6594 |
| 0.7209 | 39.0 | 2964 | 0.7131 | 0.6761 |
| 0.6707 | 40.0 | 3040 | 0.7148 | 0.6745 |
| 0.6033 | 41.0 | 3116 | 0.7278 | 0.6761 |
| 0.6657 | 42.0 | 3192 | 0.7175 | 0.6745 |
| 0.5768 | 43.0 | 3268 | 0.7542 | 0.6611 |
| 0.608 | 44.0 | 3344 | 0.7272 | 0.6811 |
| 0.5917 | 45.0 | 3420 | 0.7194 | 0.6795 |
| 0.6179 | 46.0 | 3496 | 0.7229 | 0.6828 |
| 0.5513 | 47.0 | 3572 | 0.7301 | 0.6861 |
| 0.5669 | 48.0 | 3648 | 0.7286 | 0.6845 |
| 0.4852 | 49.0 | 3724 | 0.7286 | 0.6811 |
| 0.6153 | 50.0 | 3800 | 0.7301 | 0.6811 |

Framework versions

  • Transformers 4.35.2
  • Pytorch 2.1.0+cu118
  • Datasets 2.15.0
  • Tokenizers 0.15.0