nblinh63 committed
Commit 3fe4e0d · verified · 1 parent: de82c56

End of training

README.md CHANGED
```diff
@@ -47,7 +47,7 @@ flash_attention: true
 fp16: null
 fsdp: null
 fsdp_config: null
-gradient_accumulation_steps: 4
+gradient_accumulation_steps: 1
 gradient_checkpointing: true
 group_by_length: false
 hub_model_id: nblinh63/395bda4d-4035-47c7-9550-1f2977bb55d3
@@ -67,7 +67,7 @@ lora_r: 16
 lora_target_linear: true
 lr_scheduler: cosine
 max_steps: 10
-micro_batch_size: 2
+micro_batch_size: 1
 mlflow_experiment_name: /tmp/5238526693fb2e82_train_data.json
 model_type: AutoModelForCausalLM
 num_epochs: 1
@@ -105,7 +105,7 @@ xformers_attention: true
 
 This model is a fine-tuned version of [openlm-research/open_llama_3b](https://huggingface.co/openlm-research/open_llama_3b) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 2.6994
+- Loss: 3.8126
 
 ## Model description
 
@@ -125,11 +125,9 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 0.0002
-- train_batch_size: 2
-- eval_batch_size: 2
+- train_batch_size: 1
+- eval_batch_size: 1
 - seed: 42
-- gradient_accumulation_steps: 4
-- total_train_batch_size: 8
 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: cosine
 - lr_scheduler_warmup_steps: 10
@@ -139,7 +137,7 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:------:|:----:|:---------------:|
-| 3.5197 | 0.0017 | 10 | 2.6994 |
+| 4.4013 | 0.0002 | 10 | 3.8126 |
 
 
 ### Framework versions
```
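The substantive change in this commit is the effective batch size: `micro_batch_size` drops from 2 to 1 and `gradient_accumulation_steps` from 4 to 1, so each optimizer step now sees 1 sample instead of 8. The regenerated README drops the `gradient_accumulation_steps` and `total_train_batch_size` lines because accumulation is now a no-op. A minimal sketch of the arithmetic; the helper name is hypothetical, this is just the product that Axolotl/Transformers compute internally:

```python
def effective_batch_size(micro_batch_size: int,
                         gradient_accumulation_steps: int,
                         num_devices: int = 1) -> int:
    """Samples consumed per optimizer step (hypothetical helper)."""
    return micro_batch_size * gradient_accumulation_steps * num_devices

print(effective_batch_size(2, 4))  # 8 -- before this commit
print(effective_batch_size(1, 1))  # 1 -- after this commit
```

With `max_steps: 10` held fixed, the new run consumes 10 samples where the old one consumed 80, which matches the smaller epoch fraction in the results table (0.0002 vs. 0.0017) and plausibly explains the higher validation loss (3.8126 vs. 2.6994): fewer examples seen, and noisier single-sample gradients.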
adapter_config.json CHANGED
```diff
@@ -20,13 +20,13 @@
   "rank_pattern": {},
   "revision": null,
   "target_modules": [
+    "o_proj",
     "down_proj",
+    "q_proj",
     "gate_proj",
-    "up_proj",
-    "k_proj",
     "v_proj",
-    "q_proj",
-    "o_proj"
+    "up_proj",
+    "k_proj"
   ],
   "task_type": "CAUSAL_LM",
   "use_dora": false,
```
adapter_model.bin CHANGED
```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:58bd407d59ffd17bb51b587cb156ee300897c9168ffd7133b96b177af5404eec
+oid sha256:cb082aff772c33d961204a180dafea9c358faadc0381d1f77f48dac5c5bfbb44
 size 101834682
```
adapter_model.safetensors CHANGED
```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:9d62f84e036feae092305c1701ec163765a2beb9754c18be5f494434f774bc47
+oid sha256:1cf18823b417f9fd583037d8c61abe93adbec260a1295cea794603f0629a5cbb
 size 101752088
```
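These are Git LFS pointer files: the repository stores only an `oid` (the SHA-256 of the content) and a `size`, while the weights themselves live in LFS storage. The digests change because the adapter was retrained; the byte sizes stay identical because the adapter shapes did not change. A small sketch for checking a downloaded artifact against its pointer (the repo-relative file path is an assumption):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Stream a file in 1 MiB chunks and return its hex SHA-256 digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Expected oid from the new adapter_model.safetensors pointer above.
expected = "1cf18823b417f9fd583037d8c61abe93adbec260a1295cea794603f0629a5cbb"
assert sha256_of("adapter_model.safetensors") == expected
```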
training_args.bin CHANGED
```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:181320dbe3a06dc2543ec59d562e6f78b916b60af8e889d64a321965e18e5306
+oid sha256:14f5fe38cf0db07db6dd3e88fc90e63160bab952b33e1ba62b35683431cf5712
 size 6776
```