---
library_name: transformers
language:
- en
base_model: gokulsrinivasagan/bert_base_lda_20_v1_book
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- spearmanr
model-index:
- name: bert_base_lda_20_v1_book_stsb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE STSB
type: glue
args: stsb
metrics:
- name: Spearmanr
type: spearmanr
value: 0.8381999392722225
---
# bert_base_lda_20_v1_book_stsb
This model is a fine-tuned version of [gokulsrinivasagan/bert_base_lda_20_v1_book](https://huggingface.co/gokulsrinivasagan/bert_base_lda_20_v1_book) on the GLUE STSB dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6650
- Pearson: 0.8407
- Spearmanr: 0.8382
- Combined Score: 0.8394
## Model description
A fine-tune of [gokulsrinivasagan/bert_base_lda_20_v1_book](https://huggingface.co/gokulsrinivasagan/bert_base_lda_20_v1_book) with a single-logit regression head that scores the semantic similarity of an English sentence pair.
## Intended uses & limitations
Intended for scoring the semantic similarity of English sentence pairs in the style of GLUE STS-B. The model has only been evaluated on the STS-B evaluation set reported above; quality on other domains or languages is untested.
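As a minimal usage sketch (the model id comes from this card; the loading pattern is standard `transformers` usage, not the author's script):

```python
# Minimal inference sketch; assumed usage, not taken from the author's training script.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "gokulsrinivasagan/bert_base_lda_20_v1_book_stsb"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# STS-B is a regression task, so the classification head emits a single similarity logit.
inputs = tokenizer(
    "A man is playing a guitar.",
    "Someone is playing a musical instrument.",
    return_tensors="pt",
)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(f"Predicted similarity (STS-B uses a 0-5 scale): {score:.2f}")
```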
## Training and evaluation data
Fine-tuned and evaluated on the STS-B task of the GLUE benchmark: sentence pairs annotated with human similarity judgments on a 0-5 scale.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 50
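These settings map directly onto `transformers.TrainingArguments`. The sketch below is a hedged reconstruction: the numeric values come from the list above, while the output path and evaluation strategy are assumptions.

```python
from transformers import TrainingArguments

# Hedged reconstruction of the run configuration; only the values above are from the card.
training_args = TrainingArguments(
    output_dir="bert_base_lda_20_v1_book_stsb",  # assumed output path
    learning_rate=5e-05,
    per_device_train_batch_size=256,
    per_device_eval_batch_size=256,
    seed=10,
    optim="adamw_torch",         # AdamW; betas=(0.9, 0.999) and eps=1e-08 are the defaults
    lr_scheduler_type="linear",
    num_train_epochs=50,
    eval_strategy="epoch",       # assumed: the results table reports one evaluation per epoch
)
```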
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:|
| 2.8738 | 1.0 | 23 | 2.4670 | 0.1765 | 0.1748 | 0.1756 |
| 1.4719 | 2.0 | 46 | 1.0280 | 0.7397 | 0.7404 | 0.7401 |
| 0.9801 | 3.0 | 69 | 0.8276 | 0.7956 | 0.7954 | 0.7955 |
| 0.783 | 4.0 | 92 | 0.7431 | 0.8197 | 0.8193 | 0.8195 |
| 0.5677 | 5.0 | 115 | 0.9075 | 0.8135 | 0.8152 | 0.8144 |
| 0.4407 | 6.0 | 138 | 0.7474 | 0.8267 | 0.8272 | 0.8269 |
| 0.3821 | 7.0 | 161 | 0.6753 | 0.8391 | 0.8371 | 0.8381 |
| 0.3036 | 8.0 | 184 | 0.8726 | 0.8246 | 0.8260 | 0.8253 |
| 0.269 | 9.0 | 207 | 0.7331 | 0.8311 | 0.8293 | 0.8302 |
| 0.2191 | 10.0 | 230 | 0.7562 | 0.8383 | 0.8368 | 0.8375 |
| 0.1854 | 11.0 | 253 | 0.7022 | 0.8365 | 0.8343 | 0.8354 |
| 0.1718 | 12.0 | 276 | 0.6650 | 0.8407 | 0.8382 | 0.8394 |
| 0.1685 | 13.0 | 299 | 0.7270 | 0.8350 | 0.8333 | 0.8342 |
| 0.1368 | 14.0 | 322 | 0.7532 | 0.8392 | 0.8376 | 0.8384 |
| 0.1351 | 15.0 | 345 | 0.8710 | 0.8379 | 0.8379 | 0.8379 |
| 0.1459 | 16.0 | 368 | 0.7801 | 0.8416 | 0.8398 | 0.8407 |
| 0.106 | 17.0 | 391 | 0.6833 | 0.8393 | 0.8380 | 0.8387 |
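The Combined Score column is the mean of the Pearson and Spearman correlations, e.g. (0.8407 + 0.8382) / 2 ≈ 0.8394 at the best epoch (12). Below is a short sketch of the standard GLUE STS-B metric computation, an assumption since the card does not include the metric function:

```python
from scipy.stats import pearsonr, spearmanr

def stsb_metrics(predictions, references):
    """Standard GLUE STS-B metrics: Pearson r, Spearman rho, and their mean."""
    pearson = pearsonr(predictions, references)[0]
    spearman = spearmanr(predictions, references)[0]
    return {
        "pearson": pearson,
        "spearmanr": spearman,
        "combined_score": (pearson + spearman) / 2.0,
    }

# Usage: stsb_metrics(predicted_scores, gold_scores)
```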
### Framework versions
- Transformers 4.46.3
- Pytorch 2.2.1+cu118
- Datasets 2.17.0
- Tokenizers 0.20.3