---
library_name: transformers
tags:
- generated_from_trainer
datasets:
- gokulsrinivasagan/processed_book_corpus-ld-20
metrics:
- accuracy
model-index:
- name: bert_tiny_lda_20_v1_book
  results:
  - task:
      name: Masked Language Modeling
      type: fill-mask
    dataset:
      name: gokulsrinivasagan/processed_book_corpus-ld-20
      type: gokulsrinivasagan/processed_book_corpus-ld-20
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.6793461178036027
---

# bert_tiny_lda_20_v1_book

This model was trained with a masked language modeling (fill-mask) objective on the [gokulsrinivasagan/processed_book_corpus-ld-20](https://huggingface.co/datasets/gokulsrinivasagan/processed_book_corpus-ld-20) dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8712
- Accuracy: 0.6793

Assuming the reported loss is the mean cross-entropy over masked tokens, this corresponds to an evaluation perplexity of roughly exp(3.8712) ≈ 48.

## Model description

No architecture details were provided. Going by the name, this appears to be a BERT-tiny-scale encoder trained from scratch, with `lda_20` presumably referring to an LDA-based component with 20 topics (matching the `-ld-20` suffix of the dataset).

## Intended uses & limitations

The model is trained and evaluated for masked language modeling (fill-mask), so its direct use is masked-token prediction; it can also serve as a compact encoder for downstream fine-tuning. Because it was trained only on processed book text, behavior on other domains is untested. A minimal usage sketch follows.
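
The sketch below uses the standard `transformers` fill-mask pipeline. The repo id `gokulsrinivasagan/bert_tiny_lda_20_v1_book` is an assumption inferred from the model name and the dataset owner; substitute the actual checkpoint path if it differs.

```python
from transformers import pipeline

# Repo id assumed from the model name and dataset owner; adjust if needed.
fill_mask = pipeline(
    "fill-mask",
    model="gokulsrinivasagan/bert_tiny_lda_20_v1_book",
)

# The mask token depends on the tokenizer; BERT-style tokenizers use [MASK].
for prediction in fill_mask("The book was too [MASK] to finish in one sitting."):
    print(f"{prediction['token_str']!r}: {prediction['score']:.3f}")
```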

## Training and evaluation data

Both training and evaluation use the gokulsrinivasagan/processed_book_corpus-ld-20 dataset; split sizes and preprocessing details were not documented. A loading sketch follows.
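
A sketch for inspecting the dataset with the `datasets` library; the available split names are not documented in the card, so print the returned `DatasetDict` to see them:

```python
from datasets import load_dataset

# Loads all available splits as a DatasetDict.
dataset = load_dataset("gokulsrinivasagan/processed_book_corpus-ld-20")
print(dataset)  # shows split names, column names, and row counts
```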

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 160
- eval_batch_size: 160
- seed: 10
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10000
- num_epochs: 25
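
The original training script was not published with this card. The following is a minimal sketch of a `TrainingArguments` configuration matching the listed hyperparameters; the output path is a placeholder, and the card reports the overall batch size, which may differ from per-device values under multi-GPU training.

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; everything else is left at defaults.
training_args = TrainingArguments(
    output_dir="bert_tiny_lda_20_v1_book",  # placeholder output path
    learning_rate=1e-4,
    per_device_train_batch_size=160,  # card reports overall batch size 160
    per_device_eval_batch_size=160,
    seed=10,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=10_000,
    num_train_epochs=25,
)
```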

### Training results

| Training Loss | Epoch   | Step   | Validation Loss | Accuracy |
|:-------------:|:-------:|:------:|:---------------:|:--------:|
| 8.6049        | 0.7025  | 10000  | 8.4377          | 0.1653   |
| 5.6593        | 1.4051  | 20000  | 5.2243          | 0.5031   |
| 5.1606        | 2.1076  | 30000  | 4.7778          | 0.5589   |
| 4.9169        | 2.8102  | 40000  | 4.5539          | 0.5885   |
| 4.7607        | 3.5127  | 50000  | 4.4085          | 0.6088   |
| 4.6405        | 4.2153  | 60000  | 4.3027          | 0.6216   |
| 4.5578        | 4.9178  | 70000  | 4.2156          | 0.6307   |
| 4.496         | 5.6203  | 80000  | 4.1619          | 0.6375   |
| 4.457         | 6.3229  | 90000  | 4.1256          | 0.6425   |
| 4.4199        | 7.0254  | 100000 | 4.0918          | 0.6468   |
| 4.3953        | 7.7280  | 110000 | 4.0677          | 0.6504   |
| 4.3703        | 8.4305  | 120000 | 4.0441          | 0.6538   |
| 4.3437        | 9.1331  | 130000 | 4.0295          | 0.6560   |
| 4.3295        | 9.8356  | 140000 | 4.0084          | 0.6594   |
| 4.3125        | 10.5381 | 150000 | 3.9955          | 0.6612   |
| 4.3048        | 11.2407 | 160000 | 3.9842          | 0.6627   |
| 4.2863        | 11.9432 | 170000 | 3.9727          | 0.6645   |
| 4.276         | 12.6458 | 180000 | 3.9592          | 0.6663   |
| 4.2651        | 13.3483 | 190000 | 3.9543          | 0.6669   |
| 4.2573        | 14.0509 | 200000 | 3.9438          | 0.6683   |
| 4.247         | 14.7534 | 210000 | 3.9343          | 0.6699   |
| 4.2387        | 15.4560 | 220000 | 3.9274          | 0.6712   |
| 4.2331        | 16.1585 | 230000 | 3.9226          | 0.6718   |
| 4.2238        | 16.8610 | 240000 | 3.9161          | 0.6727   |
| 4.2171        | 17.5636 | 250000 | 3.9106          | 0.6735   |
| 4.2098        | 18.2661 | 260000 | 3.9046          | 0.6740   |
| 4.2083        | 18.9687 | 270000 | 3.9001          | 0.6749   |
| 4.1991        | 19.6712 | 280000 | 3.8949          | 0.6759   |
| 4.1961        | 20.3738 | 290000 | 3.8903          | 0.6766   |
| 4.1893        | 21.0763 | 300000 | 3.8864          | 0.6772   |
| 4.1866        | 21.7788 | 310000 | 3.8808          | 0.6779   |
| 4.181         | 22.4814 | 320000 | 3.8782          | 0.6784   |
| 4.18          | 23.1839 | 330000 | 3.8763          | 0.6785   |
| 4.1771        | 23.8865 | 340000 | 3.8731          | 0.6790   |
| 4.1765        | 24.5890 | 350000 | 3.8704          | 0.6795   |


### Framework versions

- Transformers 4.46.3
- Pytorch 2.2.1+cu118
- Datasets 2.17.0
- Tokenizers 0.20.3