tanoManzo committed
Commit 1d34259 · verified · 1 Parent(s): 204ff29

End of training

README.md ADDED
@@ -0,0 +1,103 @@
+ ---
+ base_model: AIRI-Institute/gena-lm-bert-base-t2t-multi
+ tags:
+ - generated_from_trainer
+ metrics:
+ - precision
+ - recall
+ - accuracy
+ model-index:
+ - name: gena-lm-bert-base-t2t-multi_ft_BioS2_1kbpHG19_DHSs_H3K27AC
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # gena-lm-bert-base-t2t-multi_ft_BioS2_1kbpHG19_DHSs_H3K27AC
+
+ This model is a fine-tuned version of [AIRI-Institute/gena-lm-bert-base-t2t-multi](https://huggingface.co/AIRI-Institute/gena-lm-bert-base-t2t-multi) on the None dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.4277
+ - F1 Score: 0.8561
+ - Precision: 0.8500
+ - Recall: 0.8623
+ - Accuracy: 0.8489
+ - Auc: 0.9251
+ - Prc: 0.9201
+
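As a practical complement to the metrics above, here is a minimal, hedged loading-and-inference sketch. The repo id `tanoManzo/gena-lm-bert-base-t2t-multi_ft_BioS2_1kbpHG19_DHSs_H3K27AC` is assumed from the commit author and model name, and `trust_remote_code=True` is assumed to be needed because the GENA-LM base model ships custom modeling code (see the `auto_map` entry in `config.json` below); whether the fine-tuned weights load cleanly through the stock `AutoModelForSequenceClassification` path has not been verified here.

```python
# Minimal inference sketch (assumptions: repo id, trust_remote_code, binary labels).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo_id = "tanoManzo/gena-lm-bert-base-t2t-multi_ft_BioS2_1kbpHG19_DHSs_H3K27AC"  # assumed

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(
    repo_id,
    trust_remote_code=True,  # GENA-LM uses custom BERT code (see auto_map in config.json)
)
model.eval()

sequence = "ACGT" * 250  # placeholder ~1 kbp DNA sequence
inputs = tokenizer(sequence, truncation=True, max_length=512, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1)
print(probs)  # class probabilities (problem_type: single_label_classification)
```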
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 1e-05
+ - train_batch_size: 8
+ - eval_batch_size: 8
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 20
+ - mixed_precision_training: Native AMP
+
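For readers who want to reproduce this setup, the list above maps roughly onto the `TrainingArguments` sketch below. This is an illustration under assumptions, not the exact training script: the output directory, the 500-step evaluation cadence (inferred from the results table), and any dataset or metric objects are placeholders.

```python
# Hedged reproduction sketch of the hyperparameters listed above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="gena-lm-bert-base-t2t-multi_ft_BioS2_1kbpHG19_DHSs_H3K27AC",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=20,
    fp16=True,                 # "Native AMP" mixed precision
    eval_strategy="steps",     # evaluation every 500 steps, matching the results table
    eval_steps=500,
    logging_steps=500,
)

# The Adam betas (0.9, 0.999) and epsilon 1e-08 listed above are the Trainer defaults,
# so no extra optimizer arguments are needed. A Trainer would then be built as usual:
# Trainer(model=model, args=training_args, train_dataset=..., eval_dataset=...,
#         compute_metrics=...) followed by trainer.train().
```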
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | F1 Score | Precision | Recall | Accuracy | Auc | Prc |
+ |:-------------:|:------:|:-----:|:---------------:|:--------:|:---------:|:------:|:--------:|:------:|:------:|
+ | 0.6861 | 0.0840 | 500 | 0.6263 | 0.7527 | 0.6970 | 0.8181 | 0.7198 | 0.7933 | 0.7906 |
+ | 0.61 | 0.1681 | 1000 | 0.5437 | 0.7564 | 0.7861 | 0.7288 | 0.7553 | 0.8246 | 0.8187 |
+ | 0.5359 | 0.2521 | 1500 | 0.5051 | 0.8079 | 0.7281 | 0.9074 | 0.7751 | 0.8432 | 0.8117 |
+ | 0.5096 | 0.3361 | 2000 | 0.4892 | 0.8147 | 0.7565 | 0.8826 | 0.7908 | 0.8508 | 0.8303 |
+ | 0.5166 | 0.4202 | 2500 | 0.4910 | 0.8158 | 0.7826 | 0.8520 | 0.7995 | 0.8595 | 0.8390 |
+ | 0.4863 | 0.5042 | 3000 | 0.4927 | 0.8132 | 0.7947 | 0.8326 | 0.8007 | 0.8670 | 0.8477 |
+ | 0.4836 | 0.5882 | 3500 | 0.4798 | 0.8287 | 0.7639 | 0.9055 | 0.8049 | 0.8627 | 0.8281 |
+ | 0.4846 | 0.6723 | 4000 | 0.4813 | 0.8224 | 0.8097 | 0.8355 | 0.8119 | 0.8797 | 0.8650 |
+ | 0.4988 | 0.7563 | 4500 | 0.4682 | 0.8315 | 0.7652 | 0.9104 | 0.8077 | 0.8620 | 0.8114 |
+ | 0.4535 | 0.8403 | 5000 | 0.4601 | 0.8323 | 0.7559 | 0.9258 | 0.8055 | 0.8737 | 0.8357 |
+ | 0.4655 | 0.9244 | 5500 | 0.4682 | 0.8373 | 0.7758 | 0.9094 | 0.8158 | 0.8398 | 0.7843 |
+ | 0.4739 | 1.0084 | 6000 | 0.4595 | 0.8279 | 0.8125 | 0.8439 | 0.8171 | 0.8618 | 0.8175 |
+ | 0.4661 | 1.0924 | 6500 | 0.4579 | 0.8348 | 0.8128 | 0.8581 | 0.8230 | 0.8802 | 0.8433 |
+ | 0.4413 | 1.1765 | 7000 | 0.4459 | 0.8402 | 0.8074 | 0.8758 | 0.8264 | 0.8943 | 0.8728 |
+ | 0.4599 | 1.2605 | 7500 | 0.4549 | 0.8403 | 0.7791 | 0.9120 | 0.8193 | 0.8429 | 0.7711 |
+ | 0.4603 | 1.3445 | 8000 | 0.4376 | 0.8407 | 0.8005 | 0.8852 | 0.8252 | 0.8747 | 0.8255 |
+ | 0.4401 | 1.4286 | 8500 | 0.4470 | 0.8290 | 0.8327 | 0.8252 | 0.8225 | 0.8975 | 0.8731 |
+ | 0.4309 | 1.5126 | 9000 | 0.4513 | 0.8408 | 0.7844 | 0.9058 | 0.8212 | 0.8640 | 0.8115 |
+ | 0.4643 | 1.5966 | 9500 | 0.4325 | 0.8339 | 0.8125 | 0.8565 | 0.8222 | 0.8837 | 0.8522 |
+ | 0.4246 | 1.6807 | 10000 | 0.4652 | 0.8423 | 0.7673 | 0.9336 | 0.8178 | 0.8823 | 0.8426 |
+ | 0.4585 | 1.7647 | 10500 | 0.4088 | 0.8477 | 0.8111 | 0.8878 | 0.8338 | 0.9017 | 0.8758 |
+ | 0.4235 | 1.8487 | 11000 | 0.4284 | 0.8462 | 0.8157 | 0.8791 | 0.8334 | 0.9035 | 0.8788 |
+ | 0.423 | 1.9328 | 11500 | 0.4362 | 0.8460 | 0.8212 | 0.8723 | 0.8345 | 0.8984 | 0.8669 |
+ | 0.4216 | 2.0168 | 12000 | 0.4484 | 0.8514 | 0.7841 | 0.9313 | 0.8306 | 0.8837 | 0.8433 |
+ | 0.4262 | 2.1008 | 12500 | 0.4831 | 0.8479 | 0.8316 | 0.8649 | 0.8383 | 0.9083 | 0.8920 |
+ | 0.4131 | 2.1849 | 13000 | 0.4114 | 0.8521 | 0.8199 | 0.8868 | 0.8395 | 0.9153 | 0.9045 |
+ | 0.4333 | 2.2689 | 13500 | 0.4446 | 0.8535 | 0.7971 | 0.9184 | 0.8356 | 0.9030 | 0.8732 |
+ | 0.4291 | 2.3529 | 14000 | 0.4672 | 0.8505 | 0.8263 | 0.8762 | 0.8395 | 0.8993 | 0.8667 |
+ | 0.4132 | 2.4370 | 14500 | 0.4623 | 0.8535 | 0.8149 | 0.8958 | 0.8397 | 0.9000 | 0.8680 |
+ | 0.4209 | 2.5210 | 15000 | 0.4652 | 0.8506 | 0.8528 | 0.8484 | 0.8447 | 0.9057 | 0.8760 |
+ | 0.4407 | 2.6050 | 15500 | 0.4448 | 0.8540 | 0.8205 | 0.8904 | 0.8413 | 0.8800 | 0.8303 |
+ | 0.4208 | 2.6891 | 16000 | 0.4324 | 0.8565 | 0.8287 | 0.8862 | 0.8452 | 0.9012 | 0.8662 |
+ | 0.4183 | 2.7731 | 16500 | 0.4271 | 0.8542 | 0.8023 | 0.9133 | 0.8375 | 0.9012 | 0.8654 |
+ | 0.39 | 2.8571 | 17000 | 0.4633 | 0.8533 | 0.8278 | 0.8804 | 0.8422 | 0.9133 | 0.8936 |
+ | 0.435 | 2.9412 | 17500 | 0.4188 | 0.8554 | 0.8056 | 0.9116 | 0.8393 | 0.9208 | 0.9114 |
+ | 0.3791 | 3.0252 | 18000 | 0.4910 | 0.8494 | 0.8624 | 0.8368 | 0.8454 | 0.9229 | 0.9190 |
+ | 0.4055 | 3.1092 | 18500 | 0.4277 | 0.8561 | 0.8500 | 0.8623 | 0.8489 | 0.9251 | 0.9201 |
+
+
+ ### Framework versions
+
+ - Transformers 4.42.3
+ - Pytorch 2.3.0+cu121
+ - Datasets 2.18.0
+ - Tokenizers 0.19.0
config.json ADDED
@@ -0,0 +1,32 @@
+ {
+   "_name_or_path": "AIRI-Institute/gena-lm-bert-base-t2t-multi",
+   "architectures": [
+     "BertForSequenceClassification"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "auto_map": {
+     "AutoModel": "AIRI-Institute/gena-lm-bert-base-t2t-multi--modeling_bert.BertForMaskedLM"
+   },
+   "classifier_dropout": null,
+   "gradient_checkpointing": false,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "last_layer_norm": false,
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 3,
+   "position_embedding_type": "absolute",
+   "pre_layer_norm": true,
+   "problem_type": "single_label_classification",
+   "torch_dtype": "float32",
+   "transformers_version": "4.42.3",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 32000
+ }
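These fields can also be checked programmatically; a small sketch follows, using the same assumed (unverified) repo id as the inference example above.

```python
# Hedged sketch: inspect the fine-tuned model's config; repo id is assumed.
from transformers import AutoConfig

config = AutoConfig.from_pretrained(
    "tanoManzo/gena-lm-bert-base-t2t-multi_ft_BioS2_1kbpHG19_DHSs_H3K27AC"
)
print(config.model_type)               # "bert"
print(config.hidden_size)              # 768
print(config.num_hidden_layers)        # 12
print(config.max_position_embeddings)  # 512 tokens per input
print(config.problem_type)             # "single_label_classification"
```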
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3e4b8e568ca461f57ea17939df59c372cdf26328bfaaa9be9bbde221cb36f589
+ size 442503040
special_tokens_map.json ADDED
@@ -0,0 +1,37 @@
+ {
+   "cls_token": {
+     "content": "[CLS]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "mask_token": {
+     "content": "[MASK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "[PAD]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "sep_token": {
+     "content": "[SEP]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "[UNK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,60 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "[UNK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "1": {
+       "content": "[CLS]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "2": {
+       "content": "[SEP]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "3": {
+       "content": "[PAD]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "4": {
+       "content": "[MASK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "5": {
+       "content": "-",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "[CLS]",
+   "mask_token": "[MASK]",
+   "model_max_length": 1000000000000000019884624838656,
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "tokenizer_class": "PreTrainedTokenizerFast",
+   "unk_token": "[UNK]"
+ }
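To see how these tokenizer settings behave on DNA input, a short, hedged sketch follows (same assumed repo id as above); note that the special-token ids in `added_tokens_decoder` line up with `pad_token_id: 3` in `config.json`.

```python
# Hedged sketch: load the fast tokenizer and inspect how a DNA string is split.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "tanoManzo/gena-lm-bert-base-t2t-multi_ft_BioS2_1kbpHG19_DHSs_H3K27AC"  # assumed repo id
)

encoded = tokenizer("ATGCGTACGTTAGCCTAGGA", truncation=True, max_length=512)
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"]))
# Expect [CLS] ... [SEP] wrapped around BPE sub-sequences of the input DNA.
```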
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dbe0741ca8634723f969e5ecf0cb068c51956643b062372baaf3ac09cdfe3d9e
+ size 5240