# distilbert_lda_5_v1_book
This model was fine-tuned on the gokulsrinivasagan/processed_book_corpus-ld-5 dataset. It achieves the following results on the evaluation set:
- Loss: 2.5846
- Accuracy: 0.7334
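The card does not specify the task head, but the name and dataset suggest a DistilBERT-style masked language model. Assuming the checkpoint loads through the standard `transformers` fill-mask pipeline (an assumption, not confirmed by this card), usage would look like:

```python
from transformers import pipeline

# Sketch only: assumes this checkpoint is a standard masked-LM whose
# tokenizer uses the [MASK] token, as DistilBERT tokenizers do.
fill = pipeline("fill-mask", model="gokulsrinivasagan/distilbert_lda_5_v1_book")

for pred in fill("The captain steered the ship toward the [MASK]."):
    print(f"{pred['token_str']!r}: {pred['score']:.3f}")
```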
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 96
- eval_batch_size: 96
- seed: 10
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10000
- num_epochs: 25
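These settings map onto `transformers.TrainingArguments` roughly as follows. This is a sketch of the reported values, not the original training script; model construction, data collation, and `Trainer` wiring are omitted, and the output path is assumed.

```python
from transformers import TrainingArguments

# Reported hyperparameters expressed as TrainingArguments (sketch).
training_args = TrainingArguments(
    output_dir="distilbert_lda_5_v1_book",  # assumed output path
    learning_rate=1e-4,
    per_device_train_batch_size=96,
    per_device_eval_batch_size=96,
    seed=10,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=10_000,
    num_train_epochs=25,
)
```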
### Training results
| Training Loss | Epoch   | Step   | Validation Loss | Accuracy |
|:-------------:|:-------:|:------:|:---------------:|:--------:|
7.1144 | 0.4215 | 10000 | 6.9372 | 0.1643 |
4.7967 | 0.8431 | 20000 | 4.3544 | 0.4792 |
3.8152 | 1.2646 | 30000 | 3.5063 | 0.5921 |
3.5801 | 1.6861 | 40000 | 3.2866 | 0.6241 |
3.4416 | 2.1077 | 50000 | 3.1632 | 0.6409 |
3.3557 | 2.5292 | 60000 | 3.0762 | 0.6544 |
3.2898 | 2.9507 | 70000 | 3.0201 | 0.6625 |
3.2439 | 3.3723 | 80000 | 2.9767 | 0.6697 |
3.2059 | 3.7938 | 90000 | 2.9382 | 0.6752 |
3.1715 | 4.2153 | 100000 | 2.9128 | 0.6798 |
3.1509 | 4.6369 | 110000 | 2.8885 | 0.6834 |
3.1256 | 5.0584 | 120000 | 2.8609 | 0.6874 |
3.1038 | 5.4799 | 130000 | 2.8452 | 0.6902 |
3.0895 | 5.9014 | 140000 | 2.8255 | 0.6932 |
3.0671 | 6.3230 | 150000 | 2.8121 | 0.6954 |
3.0596 | 6.7445 | 160000 | 2.7992 | 0.6978 |
3.0371 | 7.1660 | 170000 | 2.7860 | 0.7002 |
3.0289 | 7.5876 | 180000 | 2.7773 | 0.7014 |
3.0178 | 8.0091 | 190000 | 2.7669 | 0.7029 |
3.0064 | 8.4306 | 200000 | 2.7545 | 0.7053 |
2.9931 | 8.8522 | 210000 | 2.7466 | 0.7063 |
2.9905 | 9.2737 | 220000 | 2.7372 | 0.7076 |
2.9751 | 9.6952 | 230000 | 2.7286 | 0.7091 |
2.9645 | 10.1168 | 240000 | 2.7234 | 0.7108 |
2.9627 | 10.5383 | 250000 | 2.7143 | 0.7116 |
2.9517 | 10.9598 | 260000 | 2.7073 | 0.7128 |
2.9439 | 11.3814 | 270000 | 2.7033 | 0.7135 |
2.944 | 11.8029 | 280000 | 2.6944 | 0.7151 |
2.9295 | 12.2244 | 290000 | 2.6887 | 0.7156 |
2.9263 | 12.6460 | 300000 | 2.6823 | 0.7172 |
2.9172 | 13.0675 | 310000 | 2.6772 | 0.7180 |
2.9126 | 13.4890 | 320000 | 2.6722 | 0.7187 |
2.9094 | 13.9106 | 330000 | 2.6684 | 0.7194 |
2.9054 | 14.3321 | 340000 | 2.6614 | 0.7204 |
2.8972 | 14.7536 | 350000 | 2.6573 | 0.7210 |
2.8931 | 15.1751 | 360000 | 2.6545 | 0.7217 |
2.8894 | 15.5967 | 370000 | 2.6468 | 0.7227 |
2.8841 | 16.0182 | 380000 | 2.6425 | 0.7235 |
2.8799 | 16.4397 | 390000 | 2.6386 | 0.7241 |
2.8742 | 16.8613 | 400000 | 2.6370 | 0.7245 |
2.8716 | 17.2828 | 410000 | 2.6301 | 0.7255 |
2.8658 | 17.7043 | 420000 | 2.6268 | 0.7263 |
2.8605 | 18.1259 | 430000 | 2.6263 | 0.7266 |
2.8549 | 18.5474 | 440000 | 2.6233 | 0.7268 |
2.8554 | 18.9689 | 450000 | 2.6175 | 0.7281 |
2.8499 | 19.3905 | 460000 | 2.6141 | 0.7286 |
2.8483 | 19.8120 | 470000 | 2.6111 | 0.7288 |
2.8417 | 20.2335 | 480000 | 2.6082 | 0.7296 |
2.8365 | 20.6551 | 490000 | 2.6040 | 0.7302 |
2.8332 | 21.0766 | 500000 | 2.6011 | 0.7304 |
2.8313 | 21.4981 | 510000 | 2.5988 | 0.7311 |
2.8267 | 21.9197 | 520000 | 2.5971 | 0.7314 |
2.8285 | 22.3412 | 530000 | 2.5954 | 0.7316 |
2.8236 | 22.7627 | 540000 | 2.5944 | 0.7319 |
2.8246 | 23.1843 | 550000 | 2.5907 | 0.7323 |
2.8196 | 23.6058 | 560000 | 2.5883 | 0.7329 |
2.8161 | 24.0273 | 570000 | 2.5877 | 0.7330 |
2.8153 | 24.4488 | 580000 | 2.5842 | 0.7335 |
2.8117 | 24.8704 | 590000 | 2.5848 | 0.7332 |
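As a rough consistency check on the table: 10,000 steps correspond to 0.4215 epochs, so one epoch is about 23,700 optimizer steps, which at a train batch size of 96 implies on the order of 2.3 million training examples per pass.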
### Framework versions
- Transformers 4.46.3
- Pytorch 2.2.1+cu118
- Datasets 2.17.0
- Tokenizers 0.20.3