**The license is `cc-by-nc-2.0`.**

# **GAI-LLM/ko-en-llama2-13b-mixed-v1**

## Model Details

**Output** Models generate text only.

**Model Architecture**

GAI-LLM/ko-en-llama2-13b-mixed-v1 is an auto-regressive language model based on the LLaMA2 transformer architecture.

**Base Model** [hyunseoki/ko-en-llama2-13b](https://huggingface.co/hyunseoki/ko-en-llama2-13b)

**Training Dataset**

- We combined open Korean datasets using a mixed strategy.
- Kopen-platypus + Everythinglm v2 + jojo0217/korean_rlhf_dataset + sentineg + hellaswag + copa
- We trained on 8 × A100 80GB GPUs.

# **Model Benchmark**
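Like other Hugging Face checkpoints, the model described in this card can be loaded through `transformers`. A minimal sketch, assuming the standard `AutoModelForCausalLM` path (the dtype, device map, and sampling parameters below are illustrative choices, not values from the model card):

```python
# Minimal loading sketch for the checkpoint described in this card.
# NOTE: torch_dtype, device_map, and the generation settings are illustrative
# assumptions, not recommendations from the model authors.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "GAI-LLM/ko-en-llama2-13b-mixed-v1"

def generate(prompt: str, max_new_tokens: int = 128) -> str:
    """Download the 13B checkpoint and complete `prompt` (needs a large GPU)."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype=torch.float16, device_map="auto"
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(
        **inputs, max_new_tokens=max_new_tokens, do_sample=True, temperature=0.7
    )
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

if __name__ == "__main__":
    print(generate("Where is the capital of Korea?"))
```

In half precision the 13B weights occupy roughly 26 GB, so `device_map="auto"` is used here to let `accelerate` spread the model across available devices.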