mcanoglu committed (verified)
Commit bfaba8f · Parent(s): cee87e9

End of training

Files changed (2):
1. README.md +19 −14
2. model.safetensors +1 −1
README.md CHANGED
@@ -19,11 +19,11 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [microsoft/codebert-base](https://huggingface.co/microsoft/codebert-base) on an unknown dataset.
 It achieves the following results on the evaluation set:
- - Loss: 0.5498
- - Accuracy: 0.7026
- - F1: 0.7299
- - Precision: 0.6559
- - Recall: 0.8227
+ - Loss: 0.6534
+ - Accuracy: 0.7342
+ - F1: 0.7413
+ - Precision: 0.7066
+ - Recall: 0.7795
 
 ## Model description
 
@@ -43,25 +43,30 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 2e-05
- - train_batch_size: 32
+ - train_batch_size: 8
 - eval_batch_size: 8
 - seed: 4711
+ - gradient_accumulation_steps: 4
+ - total_train_batch_size: 32
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
- - num_epochs: 3
+ - num_epochs: 5
+ - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
 |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
- | 0.6584 | 1.0 | 997 | 0.5554 | 0.6827 | 0.6347 | 0.7252 | 0.5642 |
- | 0.5304 | 2.0 | 1994 | 0.5229 | 0.6975 | 0.7269 | 0.6502 | 0.8243 |
- | 0.4572 | 3.0 | 2991 | 0.5498 | 0.7026 | 0.7299 | 0.6559 | 0.8227 |
+ | 0.6396 | 1.0 | 996 | 0.5277 | 0.6905 | 0.6502 | 0.7258 | 0.5889 |
+ | 0.4862 | 2.0 | 1993 | 0.5331 | 0.7176 | 0.7393 | 0.6733 | 0.8196 |
+ | 0.4043 | 3.0 | 2989 | 0.5521 | 0.7339 | 0.7343 | 0.7167 | 0.7528 |
+ | 0.3439 | 4.0 | 3986 | 0.5945 | 0.7357 | 0.7422 | 0.7087 | 0.7790 |
+ | 0.2946 | 5.0 | 4980 | 0.6534 | 0.7342 | 0.7413 | 0.7066 | 0.7795 |
 
 
 ### Framework versions
 
- - Transformers 4.36.2
- - Pytorch 2.1.2+cu121
- - Datasets 2.16.1
- - Tokenizers 0.15.0
+ - Transformers 4.37.2
+ - Pytorch 2.2.0+cu121
+ - Datasets 2.17.1
+ - Tokenizers 0.15.2
 
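The updated hyperparameters and final metrics in the diff above are internally consistent, which can be cross-checked with a short sketch (the helper functions below are illustrative, not part of the training code):

```python
# Sanity checks on the new model-card values; numbers are taken from the diff above.

def effective_batch_size(per_device_batch: int, grad_accum_steps: int) -> int:
    """Gradient accumulation multiplies the per-device batch into the total."""
    return per_device_batch * grad_accum_steps

def f1_from_precision_recall(precision: float, recall: float) -> float:
    """F1 is the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# train_batch_size 8 with 4 accumulation steps gives the reported total of 32
assert effective_batch_size(8, 4) == 32

# reported Precision 0.7066 and Recall 0.7795 agree with the reported F1 0.7413
assert round(f1_from_precision_recall(0.7066, 0.7795), 4) == 0.7413
```

Note that the final-epoch row of the results table matches the headline evaluation numbers, as expected when the last checkpoint is the one reported.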
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:c8e6af3703f588f4ac2fa2e5c0e510218610b9f63590bc4d4b0c4ad294a2f8d3
+ oid sha256:f57df9a5aabbb2a15b7f226c0c95adc1965af247e8b396bf28891b6c79a881a1
 size 498612824
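The model.safetensors change only swaps the `oid sha256:` line of the Git LFS pointer (the file size is unchanged). A downloaded weights file can be checked against such a pointer with a minimal sketch (the function name is illustrative):

```python
import hashlib

def lfs_sha256(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream-hash a file the way a Git LFS pointer records it: sha256, hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # read in chunks so multi-gigabyte weight files don't need to fit in memory
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

Comparing `lfs_sha256("model.safetensors")` to the `oid sha256:` value in the pointer confirms the download matches the commit.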