neginashz committed · Commit e76c59b · verified · 1 parent: d9a7011

Model save

Files changed (1): README.md +11 -8
README.md CHANGED
@@ -8,7 +8,7 @@ tags:
  datasets:
  - medalpaca/medical_meadow_medqa
  model-index:
- - name: qlora-qwen-25-7b-instruct
+ - name: qlora-qwen-25-7b-instruct-2
    results: []
  ---
 
@@ -55,7 +55,7 @@ wandb_log_model:
 
  gradient_accumulation_steps: 1
  micro_batch_size: 1
- num_epochs: 1
+ num_epochs: 2
  optimizer: adamw_torch
  lr_scheduler: cosine
  learning_rate: 0.00002
@@ -103,23 +103,22 @@ wandb_watch:
  wandb_name:
  wandb_log_model:
 
- hub_model_id: neginashz/qlora-qwen-25-7b-instruct
+ hub_model_id: neginashz/qlora-qwen-25-7b-instruct-2
  hub_strategy:
  early_stopping_patience:
 
  resume_from_checkpoint:
  auto_resume_from_checkpoints: true
- early_stopping_patience:
 
  ```
 
  </details><br>
 
- # qlora-qwen-25-7b-instruct
+ # qlora-qwen-25-7b-instruct-2
 
  This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the medalpaca/medical_meadow_medqa dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.1303
+ - Loss: 0.1257
 
  ## Model description
 
@@ -148,8 +147,8 @@ The following hyperparameters were used during training:
  - total_eval_batch_size: 4
  - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
  - lr_scheduler_type: cosine
- - lr_scheduler_warmup_steps: 2
- - num_epochs: 1
+ - lr_scheduler_warmup_steps: 4
+ - num_epochs: 2
 
  ### Training results
 
@@ -159,6 +158,10 @@ The following hyperparameters were used during training:
  | 0.1456 | 0.5  | 36  | 0.1333 |
  | 0.121  | 0.75 | 54  | 0.1312 |
  | 0.1328 | 1.0  | 72  | 0.1303 |
+ | 0.1336 | 1.25 | 90  | 0.1276 |
+ | 0.1228 | 1.5  | 108 | 0.1263 |
+ | 0.1199 | 1.75 | 126 | 0.1260 |
+ | 0.1393 | 2.0  | 144 | 0.1257 |
 
 
  ### Framework versions
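
For reference, a minimal usage sketch for the model this card describes. It assumes the repo hosts a PEFT/LoRA adapter (the usual output of an axolotl QLoRA run) rather than merged weights; the repo id and base model come from the card, while the prompt and generation settings are purely illustrative.

```python
# Minimal sketch: load Qwen2.5-7B-Instruct and attach the QLoRA adapter.
# Assumes the hub repo contains a PEFT adapter (adapter_config.json + weights);
# if the weights were merged, loading the repo id directly would suffice.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen2.5-7B-Instruct"
adapter_id = "neginashz/qlora-qwen-25-7b-instruct-2"  # hub_model_id from the config above

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(model, adapter_id)  # apply the fine-tuned LoRA weights

# MedQA-style question via the tokenizer's chat template (illustrative prompt).
messages = [
    {"role": "user", "content": "A 54-year-old man presents with chest pain. What is the next best step?"}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

If the adapter is intended for standalone serving, `model.merge_and_unload()` after loading would fold the LoRA weights into the base model at the cost of keeping a full-size copy in memory.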