---
license: apache-2.0
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - accuracy
model-index:
  - name: delivery_truck_classification
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: train
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.9466666666666667
---

# delivery_truck_classification

This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset. It achieves the following results on the evaluation set:

- Loss: 0.1463
- Accuracy: 0.9467
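
The checkpoint can be exercised with the `transformers` image-classification pipeline. This is a minimal sketch, assuming the model is published on the Hub as `JEdward7777/delivery_truck_classification`; the example image path is a placeholder.

```python
from transformers import pipeline

# Assumed Hub repo id; substitute a local checkpoint path if needed.
classifier = pipeline(
    "image-classification",
    model="JEdward7777/delivery_truck_classification",
)

# Returns the top predicted labels with scores for a single image.
predictions = classifier("example_truck.jpg")  # placeholder image path
print(predictions)
```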

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
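
While the card does not document the data, the metadata indicates it was loaded with the `datasets` imagefolder builder. A minimal sketch under that assumption; the directory name, class subfolders, and 80/20 hold-out split are illustrative, not taken from the card:

```python
from datasets import load_dataset

# The imagefolder builder infers labels from subdirectory names,
# e.g. delivery_trucks/class_a/*.jpg, delivery_trucks/class_b/*.jpg (hypothetical layout).
dataset = load_dataset("imagefolder", data_dir="./delivery_trucks")

# Hold out a portion for evaluation; the split fraction here is an assumption.
splits = dataset["train"].train_test_split(test_size=0.2, seed=42)
print(splits)
```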

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):

- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 60
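
The values above map naturally onto the standard `transformers` Trainer configuration. A minimal sketch under that assumption; the `output_dir`, evaluation strategy, and `remove_unused_columns` flag are assumptions, not taken from the card:

```python
from transformers import TrainingArguments

# Sketch of a TrainingArguments object matching the hyperparameters above;
# the effective train batch size is 32 * 4 (gradient accumulation) = 128.
training_args = TrainingArguments(
    output_dir="delivery_truck_classification",  # assumed output directory
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=4,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=60,
    evaluation_strategy="epoch",   # assumption, consistent with the per-epoch results below
    remove_unused_columns=False,   # commonly needed for image datasets
)
# The Trainer's default AdamW optimizer uses betas=(0.9, 0.999) and
# epsilon=1e-08, matching the optimizer settings listed above.
```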

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 0.91  | 5    | 2.1248          | 0.0667   |
| No log        | 1.91  | 10   | 1.9221          | 0.24     |
| No log        | 2.91  | 15   | 1.7177          | 0.32     |
| 2.0123        | 3.91  | 20   | 1.5490          | 0.4267   |
| 2.0123        | 4.91  | 25   | 1.3192          | 0.5333   |
| 2.0123        | 5.91  | 30   | 1.0764          | 0.64     |
| 2.0123        | 6.91  | 35   | 0.8421          | 0.76     |
| 1.3539        | 7.91  | 40   | 0.6504          | 0.8267   |
| 1.3539        | 8.91  | 45   | 0.5243          | 0.8667   |
| 1.3539        | 9.91  | 50   | 0.4282          | 0.88     |
| 1.3539        | 10.91 | 55   | 0.3950          | 0.9067   |
| 0.7315        | 11.91 | 60   | 0.3617          | 0.8933   |
| 0.7315        | 12.91 | 65   | 0.3167          | 0.9067   |
| 0.7315        | 13.91 | 70   | 0.3023          | 0.9067   |
| 0.7315        | 14.91 | 75   | 0.2440          | 0.9333   |
| 0.5713        | 15.91 | 80   | 0.2475          | 0.9333   |
| 0.5713        | 16.91 | 85   | 0.2443          | 0.92     |
| 0.5713        | 17.91 | 90   | 0.2093          | 0.96     |
| 0.5713        | 18.91 | 95   | 0.2077          | 0.9467   |
| 0.515         | 19.91 | 100  | 0.2124          | 0.9333   |
| 0.515         | 20.91 | 105  | 0.2166          | 0.96     |
| 0.515         | 21.91 | 110  | 0.1940          | 0.9333   |
| 0.515         | 22.91 | 115  | 0.1984          | 0.9333   |
| 0.4582        | 23.91 | 120  | 0.2395          | 0.9333   |
| 0.4582        | 24.91 | 125  | 0.2480          | 0.92     |
| 0.4582        | 25.91 | 130  | 0.2180          | 0.92     |
| 0.4582        | 26.91 | 135  | 0.2232          | 0.9333   |
| 0.4279        | 27.91 | 140  | 0.1977          | 0.9333   |
| 0.4279        | 28.91 | 145  | 0.1847          | 0.9467   |
| 0.4279        | 29.91 | 150  | 0.1922          | 0.9467   |
| 0.4279        | 30.91 | 155  | 0.1787          | 0.9733   |
| 0.4031        | 31.91 | 160  | 0.1626          | 0.9733   |
| 0.4031        | 32.91 | 165  | 0.1667          | 0.9733   |
| 0.4031        | 33.91 | 170  | 0.1871          | 0.9733   |
| 0.4031        | 34.91 | 175  | 0.2015          | 0.9733   |
| 0.3952        | 35.91 | 180  | 0.1836          | 0.9733   |
| 0.3952        | 36.91 | 185  | 0.1856          | 0.96     |
| 0.3952        | 37.91 | 190  | 0.1952          | 0.9333   |
| 0.3952        | 38.91 | 195  | 0.1721          | 0.96     |
| 0.369         | 39.91 | 200  | 0.1619          | 0.9467   |
| 0.369         | 40.91 | 205  | 0.1659          | 0.96     |
| 0.369         | 41.91 | 210  | 0.1569          | 0.96     |
| 0.369         | 42.91 | 215  | 0.1358          | 0.96     |
| 0.3262        | 43.91 | 220  | 0.1371          | 0.96     |
| 0.3262        | 44.91 | 225  | 0.1337          | 0.9467   |
| 0.3262        | 45.91 | 230  | 0.1374          | 0.9467   |
| 0.3262        | 46.91 | 235  | 0.1789          | 0.96     |
| 0.3616        | 47.91 | 240  | 0.2167          | 0.9467   |
| 0.3616        | 48.91 | 245  | 0.1757          | 0.96     |
| 0.3616        | 49.91 | 250  | 0.1729          | 0.9733   |
| 0.3616        | 50.91 | 255  | 0.1722          | 0.9733   |
| 0.303         | 51.91 | 260  | 0.1601          | 0.9733   |
| 0.303         | 52.91 | 265  | 0.1592          | 0.9733   |
| 0.303         | 53.91 | 270  | 0.1613          | 0.9733   |
| 0.303         | 54.91 | 275  | 0.1575          | 0.9733   |
| 0.305         | 55.91 | 280  | 0.1559          | 0.9733   |
| 0.305         | 56.91 | 285  | 0.1489          | 0.9733   |
| 0.305         | 57.91 | 290  | 0.1464          | 0.96     |
| 0.305         | 58.91 | 295  | 0.1463          | 0.9467   |
| 0.3328        | 59.91 | 300  | 0.1463          | 0.9467   |

### Framework versions

- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2