gimarchetti committed
Commit fbfb3e7 · verified · 1 parent: 27bfc59

End of training

Files changed (3):
  1. README.md (+12 -12)
  2. adapter_model.safetensors (+1 -1)
  3. training_args.bin (+1 -1)
README.md CHANGED
@@ -15,7 +15,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [HuggingFaceM4/idefics2-8b](https://huggingface.co/HuggingFaceM4/idefics2-8b) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.8436
+- Loss: 0.7025
 
 ## Model description
 
@@ -34,28 +34,28 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 0.0001
+- learning_rate: 5e-05
 - train_batch_size: 8
 - eval_batch_size: 8
 - seed: 42
-- gradient_accumulation_steps: 20
-- total_train_batch_size: 160
+- gradient_accumulation_steps: 10
+- total_train_batch_size: 80
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 50
-- num_epochs: 4
+- num_epochs: 2
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:------:|:----:|:---------------:|
-| 1.0128 | 0.5263 | 100 | 0.7555 |
-| 0.6991 | 1.0526 | 200 | 0.7485 |
-| 0.5676 | 1.5789 | 300 | 0.7307 |
-| 0.5392 | 2.1053 | 400 | 0.7591 |
-| 0.4329 | 2.6316 | 500 | 0.7673 |
-| 0.4115 | 3.1579 | 600 | 0.8394 |
-| 0.3468 | 3.6842 | 700 | 0.8436 |
+| 1.1149 | 0.2632 | 100 | 0.7762 |
+| 0.7216 | 0.5263 | 200 | 0.7406 |
+| 0.7044 | 0.7895 | 300 | 0.7175 |
+| 0.6617 | 1.0526 | 400 | 0.7204 |
+| 0.5562 | 1.3158 | 500 | 0.7129 |
+| 0.5614 | 1.5789 | 600 | 0.7067 |
+| 0.5483 | 1.8421 | 700 | 0.7025 |
 
 
 ### Framework versions
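The diff halves the effective batch size: with a per-device batch size of 8 and gradient accumulation cut from 20 to 10 steps, `total_train_batch_size` drops from 160 to 80, which is why 100 optimizer steps now cover 0.2632 epochs instead of 0.5263. A minimal sketch of that arithmetic, assuming single-device training (the helper name is illustrative, not from the training code):

```python
def effective_batch_size(train_batch_size: int,
                         gradient_accumulation_steps: int,
                         num_devices: int = 1) -> int:
    """Total number of examples seen by the optimizer per update step."""
    return train_batch_size * gradient_accumulation_steps * num_devices

# Old run: 8 * 20 = 160. New run: 8 * 10 = 80, matching the diff above.
print(effective_batch_size(8, 20))  # 160
print(effective_batch_size(8, 10))  # 80
```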
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:222133dc6826cfa7840d853b55ae3269e003153bd46df1df306a7b37bec0d79c
+oid sha256:66488d3caa314b493b981149cce15df2a1cf4028be19f4af36e7edd5aa804e74
 size 49840864
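The adapter weights are stored via Git LFS, so the repo only tracks a small pointer whose `oid` is the SHA-256 of the real file. A sketch of how a downloaded copy could be checked against that pointer; `lfs_oid` is an illustrative helper, not part of the git-lfs tooling:

```python
import hashlib

def lfs_oid(path: str, chunk_size: int = 1 << 20) -> str:
    """SHA-256 of a file's contents, formatted like a Git LFS pointer oid."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return "sha256:" + h.hexdigest()

# e.g. lfs_oid("adapter_model.safetensors") should equal the new oid above:
# sha256:66488d3caa314b493b981149cce15df2a1cf4028be19f4af36e7edd5aa804e74
```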
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:8c753ee3e6218ac056e93017f10a8af1ea46ffbd7e6d0eb87f331a1308430ff4
+oid sha256:35f2f00eec80dbaddab6dae14a41bf3f87048361c8b13acd27dccbd15f87186a
 size 4731
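Both pointer files follow the same three-line Git LFS format: `version`, `oid`, and `size`, each a space-separated key/value pair. A small illustrative parser for that format (function name is an assumption, not an existing API):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split a Git LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The new training_args.bin pointer from the diff above:
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:35f2f00eec80dbaddab6dae14a41bf3f87048361c8b13acd27dccbd15f87186a
size 4731"""
```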