Commit f1f5d19 (verified) by aithal · Parent: a68517a

Model save

README.md CHANGED
@@ -2,9 +2,6 @@
 base_model: meta-llama/Llama-2-7b-hf
 library_name: peft
 license: llama2
-metrics:
-- accuracy
-- rouge
 tags:
 - generated_from_trainer
 model-index:
@@ -19,11 +16,8 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 22.2205
-- Accuracy: 0.0
-- Rouge1: 0.0
-- Rouge2: 0.0
-- Rougel: 0.0
+- Loss: 23.5375
+- Model Preparation Time: 0.004
 
 ## Model description
 
@@ -43,14 +37,12 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 0.0002
-- train_batch_size: 2
-- eval_batch_size: 2
+- train_batch_size: 4
+- eval_batch_size: 4
 - seed: 42
 - distributed_type: multi-GPU
-- num_devices: 2
-- gradient_accumulation_steps: 32
-- total_train_batch_size: 128
-- total_eval_batch_size: 4
+- gradient_accumulation_steps: 16
+- total_train_batch_size: 64
 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 100
@@ -59,10 +51,6 @@ The following hyperparameters were used during training:
 
 ### Training results
 
-| Training Loss | Epoch  | Step | Validation Loss | Accuracy | Rouge1 | Rouge2 | Rougel |
-|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|:------:|:------:|
-| 26.5755       | 1.4222 | 10   | 23.5886         | 0.0      | 0.0    | 0.0    | 0.0    |
-| 25.4247       | 2.8444 | 20   | 22.2188         | 0.0      | 0.0    | 0.0    | 0.0    |
 
 
 ### Framework versions
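
For reference, here is a minimal sketch of how the updated hyperparameters above map onto the `transformers` Trainer API. This is not the training script from this commit (which is not included), and the output directory is a placeholder:

```python
# Hedged sketch: maps the README's hyperparameters onto transformers'
# TrainingArguments. Not the author's script; output_dir is a placeholder.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="llama2-7b-peft-out",   # placeholder path, not from this repo
    learning_rate=2e-4,                # learning_rate: 0.0002
    per_device_train_batch_size=4,     # train_batch_size: 4 (was 2)
    per_device_eval_batch_size=4,      # eval_batch_size: 4 (was 2)
    gradient_accumulation_steps=16,    # was 32
    seed=42,                           # seed: 42
    optim="adamw_torch",               # betas=(0.9, 0.999), eps=1e-8 are the AdamW defaults
    lr_scheduler_type="linear",        # lr_scheduler_type: linear
    warmup_steps=100,                  # lr_scheduler_warmup_steps: 100
)

# Effective train batch = per-device batch * grad accumulation * num devices:
# this commit: 4 * 16 * 1 = 64  (total_train_batch_size: 64);
# previously:  2 * 32 * 2 = 128 (total_train_batch_size: 128).
```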
adapter_config.json CHANGED
@@ -20,10 +20,10 @@
   "rank_pattern": {},
   "revision": null,
   "target_modules": [
-    "v_proj",
-    "q_proj",
     "k_proj",
-    "o_proj"
+    "q_proj",
+    "o_proj",
+    "v_proj"
   ],
   "task_type": "CAUSAL_LM",
   "use_dora": false,
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:8326d3eed650affbe81ffa53bff6fc680ced295403e17ea8b55584c64e43add3
+oid sha256:a2457729b52d52469a66d0f65048f18c84365279d82e4c680ed5e604178053a1
 size 33588528
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:b897f974aa01181b03ec48fc3d8d4170320a1f49e8b062290a5a0cde4e885f93
+oid sha256:f0599077019008d8ec165d91974bfe87bc63f25386042319df08054a1534a819
 size 5304
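
The two files above are Git LFS pointers: each records the SHA-256 digest (`oid`) and size of the real binary, which Git stores out of band. A downloaded copy can therefore be checked against the pointer, as in this small sketch (not part of this repo; the filename and expected digest are taken from the `adapter_model.safetensors` pointer committed here):

```python
# Hedged sketch: verify a downloaded LFS object against its pointer's oid.
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file through SHA-256 so large weights need not fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Expected digest from the pointer file in this commit:
expected = "a2457729b52d52469a66d0f65048f18c84365279d82e4c680ed5e604178053a1"
assert sha256_of("adapter_model.safetensors") == expected, "download is corrupt or stale"
```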