---
library_name: transformers
base_model: gokulsrinivasagan/bert_base_lda_100_v1
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert_base_lda_100_v1_wnli
  results: []
---

# bert_base_lda_100_v1_wnli

This model is a fine-tuned version of [gokulsrinivasagan/bert_base_lda_100_v1](https://huggingface.co/gokulsrinivasagan/bert_base_lda_100_v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6907
- Accuracy: 0.5634

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- optimizer: adamw_torch with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 50

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9555        | 1.0   | 3    | 1.7839          | 0.5634   |
| 1.287         | 2.0   | 6    | 1.9711          | 0.5634   |
| 1.9411        | 3.0   | 9    | 0.7586          | 0.5634   |
| 0.8928        | 4.0   | 12   | 0.9855          | 0.4366   |
| 0.8147        | 5.0   | 15   | 0.7992          | 0.5634   |
| 0.8064        | 6.0   | 18   | 0.6987          | 0.4366   |
| 0.7033        | 7.0   | 21   | 0.6914          | 0.5634   |
| 0.7235        | 8.0   | 24   | 0.6867          | 0.5634   |
| 0.701         | 9.0   | 27   | 0.7205          | 0.4366   |
| 0.6954        | 10.0  | 30   | 0.6856          | 0.5634   |
| 0.6999        | 11.0  | 33   | 0.6916          | 0.5634   |
| 0.7008        | 12.0  | 36   | 0.7042          | 0.4366   |
| 0.6948        | 13.0  | 39   | 0.6856          | 0.5634   |
| 0.6948        | 14.0  | 42   | 0.6864          | 0.5634   |
| 0.6946        | 15.0  | 45   | 0.6907          | 0.5634   |

### Framework versions

- Transformers 4.46.3
- Pytorch 2.2.1+cu118
- Datasets 2.17.0
- Tokenizers 0.20.3
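
As a minimal sketch of how the hyperparameters listed above map onto `TrainingArguments` in Transformers 4.46: the output directory, evaluation strategy, and the rest of the `Trainer`/dataset setup are assumptions and are not taken from this card.

```python
from transformers import TrainingArguments

# Sketch only: reproduces the hyperparameters reported in this card.
# output_dir and eval_strategy are assumptions, not stated above.
training_args = TrainingArguments(
    output_dir="bert_base_lda_100_v1_wnli",  # assumed output directory
    learning_rate=1e-3,
    per_device_train_batch_size=256,
    per_device_eval_batch_size=256,
    seed=10,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=50,
    eval_strategy="epoch",  # the results table logs validation loss per epoch
)
```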
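
If the fine-tuned checkpoint is published under the model name above, a minimal inference sketch for WNLI-style sentence-pair classification could look like the following; the hub id `gokulsrinivasagan/bert_base_lda_100_v1_wnli` and the binary label mapping are assumptions inferred from the card, so check the checkpoint's config before relying on them.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "gokulsrinivasagan/bert_base_lda_100_v1_wnli"  # assumed hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# WNLI examples are premise/hypothesis pairs with a binary label.
inputs = tokenizer(
    "The trophy doesn't fit into the suitcase because it is too large.",
    "The trophy is too large.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # 0 or 1, per the checkpoint's label map
```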