phunguyen01 committed · verified
Commit 74ba059 · 1 Parent(s): a99d804

End of training

Files changed (2):
  1. README.md +109 -0
  2. pytorch_model.bin +2 -2
README.md ADDED
@@ -0,0 +1,109 @@
+ ---
+ library_name: transformers
+ license: other
+ base_model: Qwen/Qwen2.5-3B
+ tags:
+ - axolotl
+ - generated_from_trainer
+ datasets:
+ - allenai/tulu-3-sft-mixture
+ model-index:
+ - name: II-Tulu-3B-SFT
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
+ <details><summary>See axolotl config</summary>
+
+ axolotl version: `0.5.3.dev0`
+ ```yaml
+ wandb_project: llm-training-platform
+ wandb_name: II-Tulu-3B-SFT
+ datasets:
+   - path: allenai/tulu-3-sft-mixture
+     split: train
+     type: chat_template
+     field_messages: messages
+     message_field_role: role
+     message_field_content: content
+     roles:
+       system:
+         - system
+       user:
+         - user
+       assistant:
+         - assistant
+ chat_template: qwen_25
+ sequence_len: 2048
+ base_model: Qwen/Qwen2.5-3B
+ output_dir: checkpoints/1357e2cd-76bc-46d5-a394-949b712427c7
+ dataset_prepared_path: checkpoints/1357e2cd-76bc-46d5-a394-949b712427c7/dataset_prepared
+ flash_attention: true
+ train_on_inputs: false
+ pad_to_sequence_len: true
+ eval_sample_packing: false
+ push_to_hub: true
+ bf16: auto
+ gradient_checkpointing: true
+ logging_steps: 10
+ hub_model_id: phunguyen01/II-Tulu-3B-SFT
+ learning_rate: 5.0e-06
+ micro_batch_size: 8
+ num_epochs: 2
+ seed: 42
+ gradient_accumulation_steps: 2
+ sample_packing: true
+ val_set_size: 0
+
+ ```
+
+ </details><br>
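+
+ The `datasets` block in the config above tells axolotl to read each example's `messages` list and render it with the Qwen 2.5 chat template before tokenization. The sketch below (editor-added, not part of the generated card) illustrates what that rendering looks like; it assumes the base tokenizer ships the same template.
+
+ ```python
+ # Illustration only: render a tulu-style `messages` example the way
+ # `chat_template: qwen_25` would during preprocessing. Assumes the
+ # Qwen/Qwen2.5-3B tokenizer bundles a Qwen 2.5-style chat template.
+ from transformers import AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-3B")
+ example = {
+     "messages": [
+         {"role": "system", "content": "You are a helpful assistant."},
+         {"role": "user", "content": "What is supervised fine-tuning?"},
+         {"role": "assistant", "content": "Training on labeled demonstrations."},
+     ]
+ }
+ print(tokenizer.apply_chat_template(example["messages"], tokenize=False))
+ ```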
+
+ # II-Tulu-3B-SFT
+
+ This model is a fine-tuned version of [Qwen/Qwen2.5-3B](https://huggingface.co/Qwen/Qwen2.5-3B) on the allenai/tulu-3-sft-mixture dataset.
+
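+ As a quick-start sketch (editor-added, not generated by the Trainer): loading the published checkpoint with the standard transformers API. The repo id comes from `hub_model_id` in the config above; the prompt and generation settings are illustrative.
+
+ ```python
+ # Minimal inference sketch; assumes the phunguyen01/II-Tulu-3B-SFT checkpoint
+ # is accessible and that `accelerate` is installed for device_map="auto".
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_id = "phunguyen01/II-Tulu-3B-SFT"
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")
+
+ # The model was trained on chat-formatted data, so build the prompt with
+ # the tokenizer's chat template rather than raw text.
+ messages = [{"role": "user", "content": "Summarize supervised fine-tuning in one sentence."}]
+ inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
+ outputs = model.generate(inputs, max_new_tokens=128)
+ print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
+ ```
+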
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 5e-06
+ - train_batch_size: 8
+ - eval_batch_size: 8
+ - seed: 42
+ - distributed_type: multi-GPU
+ - num_devices: 8
+ - gradient_accumulation_steps: 2
+ - total_train_batch_size: 128 (see the sketch after this list)
+ - total_eval_batch_size: 64
+ - optimizer: adamw_hf (AdamW) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_steps: 100
+ - num_epochs: 2
+
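+ The derived batch sizes above follow directly from the per-device settings; this small arithmetic sketch (editor-added) makes the relationship explicit:
+
+ ```python
+ # How total_train_batch_size and total_eval_batch_size are derived
+ # from the per-device hyperparameters listed above.
+ micro_batch_size = 8            # per-device train batch size
+ gradient_accumulation_steps = 2
+ num_devices = 8                 # multi-GPU run
+ eval_batch_size = 8             # per-device eval batch size
+
+ total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
+ total_eval_batch_size = eval_batch_size * num_devices  # no gradient accumulation at eval
+ assert (total_train_batch_size, total_eval_batch_size) == (128, 64)
+ ```
+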
+ ### Training results
+
+
+ ### Framework versions
+
+ - Transformers 4.47.0
+ - PyTorch 2.4.0+cu121
+ - Datasets 3.1.0
+ - Tokenizers 0.21.0
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:5b36c4463e38fd93ea05021cc97bb325f52a520958205d413f3b9d9d691b8829
- size 115474
+ oid sha256:4f8d57c832f15155a5987e47910978f8c853f9f3f1afc5b8cff2225eacc13433
+ size 610706