bert_tiny_lda_book

This model is fine-tuned on the gokulsrinivasagan/processed_book_corpus-ld dataset. It achieves the following results on the evaluation set:

  • Loss: 3.3915
  • Accuracy: 0.6827
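
If the reported loss is the mean masked-language-modelling cross-entropy, it corresponds to a perplexity of roughly exp(3.3915) ≈ 29.7. Below is a minimal sketch of loading the checkpoint and filling a masked token. It assumes the checkpoint carries a masked-LM head and loads directly with `AutoModelForMaskedLM`; the card does not declare a pipeline type, so this is an assumption rather than a documented usage.

```python
# Minimal sketch: load the checkpoint and fill a masked token.
# Assumption: the checkpoint has a masked-LM head and its tokenizer is
# stored alongside it; this is not confirmed by the model card.
import math

import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_id = "gokulsrinivasagan/bert_tiny_lda_book"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)
model.eval()

text = f"The book was lying on the {tokenizer.mask_token}."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Pick the highest-scoring token at the [MASK] position.
mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_id = logits[0, mask_pos].argmax(dim=-1)
print(tokenizer.decode(predicted_id))

# The evaluation loss of 3.3915, read as mean cross-entropy,
# corresponds to a perplexity of about exp(3.3915) ≈ 29.7.
print(math.exp(3.3915))
```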

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 160
  • eval_batch_size: 160
  • seed: 10
  • optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 10000
  • num_epochs: 25
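
As a rough illustration, the settings listed above expressed as Hugging Face `TrainingArguments` might look like the sketch below. Only the values named on this card are taken from the original run; the output directory is a placeholder, and the per-device batch sizes assume a single device (the card reports the batch sizes without specifying device count).

```python
# Sketch of the listed hyperparameters as TrainingArguments.
# Values not named on the card (e.g. output_dir) are placeholders.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert_tiny_lda_book",  # placeholder, not from the original run
    learning_rate=1e-4,
    per_device_train_batch_size=160,  # assumes a single device
    per_device_eval_batch_size=160,
    seed=10,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=10_000,
    num_train_epochs=25,
)
```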

Training results

| Training Loss | Epoch   | Step   | Validation Loss | Accuracy |
|:-------------:|:-------:|:------:|:---------------:|:--------:|
| 7.9444        | 0.7025  | 10000  | 7.7823          | 0.1644   |
| 5.4537        | 1.4051  | 20000  | 5.0087          | 0.4658   |
| 4.6397        | 2.1076  | 30000  | 4.2650          | 0.5607   |
| 4.3898        | 2.8102  | 40000  | 4.0379          | 0.5916   |
| 4.2383        | 3.5127  | 50000  | 3.8978          | 0.6113   |
| 4.1379        | 4.2153  | 60000  | 3.8117          | 0.6234   |
| 4.0736        | 4.9178  | 70000  | 3.7462          | 0.6324   |
| 4.0187        | 5.6203  | 80000  | 3.6985          | 0.6391   |
| 3.9803        | 6.3229  | 90000  | 3.6644          | 0.6444   |
| 3.9462        | 7.0254  | 100000 | 3.6333          | 0.6485   |
| 3.9217        | 7.7280  | 110000 | 3.6064          | 0.6526   |
| 3.8974        | 8.4305  | 120000 | 3.5810          | 0.6558   |
| 3.8714        | 9.1331  | 130000 | 3.5696          | 0.6581   |
| 3.8565        | 9.8356  | 140000 | 3.5454          | 0.6613   |
| 3.8382        | 10.5381 | 150000 | 3.5310          | 0.6632   |
| 3.8272        | 11.2407 | 160000 | 3.5181          | 0.6647   |
| 3.8059        | 11.9432 | 170000 | 3.5012          | 0.6666   |
| 3.7935        | 12.6458 | 180000 | 3.4849          | 0.6683   |
| 3.7815        | 13.3483 | 190000 | 3.4784          | 0.6695   |
| 3.7719        | 14.0509 | 200000 | 3.4671          | 0.6710   |
| 3.7614        | 14.7534 | 210000 | 3.4574          | 0.6724   |
| 3.7509        | 15.4560 | 220000 | 3.4488          | 0.6740   |
| 3.7456        | 16.1585 | 230000 | 3.4445          | 0.6745   |
| 3.736         | 16.8610 | 240000 | 3.4378          | 0.6753   |
| 3.728         | 17.5636 | 250000 | 3.4330          | 0.6763   |
| 3.7223        | 18.2661 | 260000 | 3.4270          | 0.6772   |
| 3.7195        | 18.9687 | 270000 | 3.4210          | 0.6780   |
| 3.7104        | 19.6712 | 280000 | 3.4156          | 0.6790   |
| 3.7086        | 20.3738 | 290000 | 3.4105          | 0.6797   |
| 3.7002        | 21.0763 | 300000 | 3.4070          | 0.6803   |
| 3.698         | 21.7788 | 310000 | 3.4013          | 0.6812   |
| 3.6915        | 22.4814 | 320000 | 3.3987          | 0.6814   |
| 3.6909        | 23.1839 | 330000 | 3.3962          | 0.6818   |
| 3.6883        | 23.8865 | 340000 | 3.3933          | 0.6825   |
| 3.6867        | 24.5890 | 350000 | 3.3903          | 0.6829   |
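
If the validation-loss column is mean cross-entropy, the trend is easier to read as perplexity; the short sketch below converts a few of the logged checkpoints under that assumption.

```python
# Sketch: convert a few logged validation losses to perplexity,
# assuming the loss column is mean cross-entropy per masked token.
import math

logged = {
    10_000: 7.7823,
    100_000: 3.6333,
    200_000: 3.4671,
    350_000: 3.3903,
}

for step, loss in logged.items():
    print(f"step {step:>7}: loss {loss:.4f} -> perplexity {math.exp(loss):.1f}")
```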

Framework versions

  • Transformers 4.46.1
  • Pytorch 2.2.0+cu121
  • Datasets 3.1.0
  • Tokenizers 0.20.1
