Update README.md
README.md (CHANGED)
@@ -5,7 +5,7 @@ language:
 - en
 tags:
 - Open-platypus-Commercial
-base_model:
+base_model: liminerity/M7-7b
 datasets:
 - kyujinpy/Open-platypus-Commercial
 model-index:
@@ -14,9 +14,9 @@ model-index:
 ---
 Update @ 2024.03.07
 
-## T3Q-Platypus-
+## T3Q-Platypus-MistralM7-7B
 
-This model is a fine-tuned version of
+This model is a fine-tuned version of liminerity/M7-7b
 
 **Model Developers** Chihoon Lee(chlee10), T3Q
 
@@ -40,7 +40,7 @@ The following hyperparameters were used during training:
 weight_decay = 0.01
 max_grad_norm = 1.0
 
-# LoRA config
+# Q-LoRA config
 lora_r = 16
 lora_alpha = 16
 lora_dropout = 0.05
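The Q-LoRA hyperparameters touched by this change can be sketched in one place as a plain Python mapping. This is only an illustration of the values named in the diff (lora_r, lora_alpha, lora_dropout, weight_decay, max_grad_norm); the comment about mapping them onto peft's `LoraConfig` is an assumption about how such a config is typically consumed, not something stated in the README.

```python
# Sketch: Q-LoRA hyperparameters from the README diff, gathered into one dict.
# All values come from the diff itself; how they are consumed downstream
# (e.g. via peft's LoraConfig) is an assumption, noted below.
qlora_config = {
    "lora_r": 16,         # rank of the low-rank update matrices
    "lora_alpha": 16,     # scaling factor; effective scale alpha / r = 1.0 here
    "lora_dropout": 0.05, # dropout applied to the LoRA layers
    "weight_decay": 0.01, # optimizer weight decay
    "max_grad_norm": 1.0, # gradient clipping threshold
}

# With HuggingFace's peft library, the three lora_* values would typically
# feed a LoraConfig, e.g. LoraConfig(r=16, lora_alpha=16, lora_dropout=0.05, ...).
```

Since `lora_alpha` equals `lora_r`, the LoRA updates are applied at an effective scale of 1.0, a common neutral choice.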