# gena-lm-bert-base-t2t-multi_ft_BioS2_1kbpHG19_DHSs_H3K27AC
This model is a fine-tuned version of [AIRI-Institute/gena-lm-bert-base-t2t-multi](https://huggingface.co/AIRI-Institute/gena-lm-bert-base-t2t-multi); the fine-tuning dataset is not documented. It achieves the following results on the evaluation set:
- Loss: 0.4666
- F1 Score: 0.8442
- Precision: 0.8154
- Recall: 0.8751
- Accuracy: 0.8333
- AUC: 0.8995
- PRC: 0.8745
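As a quick sanity check, the headline metrics above are mutually consistent: the F1 score is the harmonic mean of the reported precision and recall.

```python
# Sanity check: F1 is the harmonic mean of precision and recall,
# so the three headline numbers above agree with one another.
precision, recall = 0.8154, 0.8751
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # 0.8442
```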
## Model description
More information needed
## Intended uses & limitations
More information needed
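Until fuller documentation is added, the checkpoint can presumably be loaded with the standard `transformers` Auto classes. This is an untested sketch: `trust_remote_code=True` is assumed to be required because GENA-LM checkpoints ship custom model code, and the sequence-classification head is an assumption based on the classification metrics reported below.

```python
# Hypothetical loading sketch (not verified against this checkpoint).
CHECKPOINT = "tanoManzo/gena-lm-bert-base-t2t-multi_ft_BioS2_1kbpHG19_DHSs_H3K27AC"

def load_model():
    # Assumes `transformers` is installed. trust_remote_code=True is needed
    # because GENA-LM models define custom architecture code on the Hub.
    from transformers import AutoTokenizer, AutoModelForSequenceClassification
    tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
    model = AutoModelForSequenceClassification.from_pretrained(
        CHECKPOINT, trust_remote_code=True
    )
    return tokenizer, model
```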
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
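The hyperparameters do not state the training-set size, but the Epoch/Step columns in the results table below allow a rough back-of-the-envelope estimate (these are inferred figures, not documented values):

```python
# Estimate steps per epoch from the final logged row (step 14000 at epoch 2.3506),
# then infer the approximate training-set size from train_batch_size = 8.
# Rough estimates only; the dataset size is not documented.
steps_per_epoch = 14000 / 2.3506
approx_train_examples = steps_per_epoch * 8
print(round(steps_per_epoch))        # ~5956 optimizer steps per epoch
print(round(approx_train_examples))  # ~47600 training examples
```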
### Training results
Training Loss | Epoch | Step | Validation Loss | F1 Score | Precision | Recall | Accuracy | AUC | PRC |
---|---|---|---|---|---|---|---|---|---|
0.6931 | 0.0839 | 500 | 0.6427 | 0.7390 | 0.6242 | 0.9057 | 0.6699 | 0.7821 | 0.7840 |
0.6219 | 0.1679 | 1000 | 0.5689 | 0.7390 | 0.7963 | 0.6893 | 0.7487 | 0.8086 | 0.7935 |
0.5477 | 0.2518 | 1500 | 0.5181 | 0.7949 | 0.7556 | 0.8386 | 0.7767 | 0.8398 | 0.8123 |
0.5125 | 0.3358 | 2000 | 0.4999 | 0.8031 | 0.7565 | 0.8559 | 0.7834 | 0.8496 | 0.8161 |
0.4961 | 0.4197 | 2500 | 0.5177 | 0.8055 | 0.7647 | 0.8510 | 0.7879 | 0.8371 | 0.7976 |
0.4969 | 0.5037 | 3000 | 0.4908 | 0.8153 | 0.7495 | 0.8936 | 0.7910 | 0.8582 | 0.8306 |
0.4777 | 0.5876 | 3500 | 0.4991 | 0.8196 | 0.7549 | 0.8966 | 0.7963 | 0.8630 | 0.8307 |
0.4836 | 0.6716 | 4000 | 0.4718 | 0.8230 | 0.7577 | 0.9005 | 0.8000 | 0.8608 | 0.8324 |
0.4748 | 0.7555 | 4500 | 0.5299 | 0.7963 | 0.8009 | 0.7918 | 0.7910 | 0.8567 | 0.8069 |
0.4667 | 0.8395 | 5000 | 0.4743 | 0.8241 | 0.7622 | 0.8969 | 0.8024 | 0.8715 | 0.8450 |
0.4717 | 0.9234 | 5500 | 0.4981 | 0.8105 | 0.8091 | 0.8120 | 0.8041 | 0.8772 | 0.8542 |
0.4707 | 1.0074 | 6000 | 0.4675 | 0.8273 | 0.7656 | 0.8998 | 0.8061 | 0.8751 | 0.8371 |
0.459 | 1.0913 | 6500 | 0.4867 | 0.8192 | 0.8012 | 0.8380 | 0.8091 | 0.8778 | 0.8546 |
0.4544 | 1.1753 | 7000 | 0.4712 | 0.8322 | 0.7557 | 0.9258 | 0.8073 | 0.8264 | 0.7430 |
0.4324 | 1.2592 | 7500 | 0.4993 | 0.8185 | 0.8147 | 0.8224 | 0.8118 | 0.8687 | 0.8163 |
0.436 | 1.3432 | 8000 | 0.4777 | 0.8352 | 0.7641 | 0.9209 | 0.8125 | 0.8185 | 0.7469 |
0.4464 | 1.4271 | 8500 | 0.5148 | 0.8299 | 0.7497 | 0.9294 | 0.8034 | 0.8729 | 0.8419 |
0.4537 | 1.5111 | 9000 | 0.4503 | 0.8296 | 0.8028 | 0.8582 | 0.8180 | 0.8796 | 0.8409 |
0.4276 | 1.5950 | 9500 | 0.4540 | 0.8356 | 0.8014 | 0.8728 | 0.8227 | 0.8926 | 0.8680 |
0.4323 | 1.6790 | 10000 | 0.4512 | 0.8380 | 0.7949 | 0.8861 | 0.8232 | 0.8748 | 0.8222 |
0.4384 | 1.7629 | 10500 | 0.4724 | 0.8386 | 0.7655 | 0.9271 | 0.8158 | 0.8836 | 0.8405 |
0.4076 | 1.8469 | 11000 | 0.4626 | 0.8335 | 0.8204 | 0.8471 | 0.8254 | 0.8813 | 0.8340 |
0.439 | 1.9308 | 11500 | 0.4399 | 0.8443 | 0.7807 | 0.9193 | 0.8251 | 0.8888 | 0.8487 |
0.4164 | 2.0148 | 12000 | 0.4522 | 0.8437 | 0.7820 | 0.9161 | 0.8249 | 0.8940 | 0.8548 |
0.4075 | 2.0987 | 12500 | 0.4718 | 0.8417 | 0.8069 | 0.8796 | 0.8292 | 0.8962 | 0.8771 |
0.406 | 2.1827 | 13000 | 0.4935 | 0.8233 | 0.8442 | 0.8035 | 0.8220 | 0.9000 | 0.8729 |
0.3958 | 2.2666 | 13500 | 0.4891 | 0.8427 | 0.8172 | 0.8699 | 0.8324 | 0.8896 | 0.8443 |
0.4353 | 2.3506 | 14000 | 0.4666 | 0.8442 | 0.8154 | 0.8751 | 0.8333 | 0.8995 | 0.8745 |
### Framework versions
- Transformers 4.42.3
- PyTorch 2.3.0+cu121
- Datasets 2.18.0
- Tokenizers 0.19.0