---
license: cc-by-nc-2.0
language:
- ko
library_name: transformers
pipeline_tag: text-generation
---

**The license is `cc-by-nc-2.0`.**

# **GAI-LLM/ko-en-llama2-13b-mixed-v3**

## Model Details

**Model Developers** Donghoon Oh, Hanmin Myung, Eunyoung Kim (SK C&C G.AI Eng)

**Input** The model accepts text input only.

**Output** The model generates text only.

**Model Architecture**
ko-en-llama2-13b-mixed-v3 is an auto-regressive language model based on the LLaMA2 transformer architecture.
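
As a quick check, the architecture can be inspected from the model configuration alone; the sketch below downloads only the config file, and the commented values are what a 13B LLaMA2 config would be expected to report.

```python
from transformers import AutoConfig

# Fetch only the configuration (no weights) to inspect the architecture.
config = AutoConfig.from_pretrained("GAI-LLM/ko-en-llama2-13b-mixed-v3")

print(config.model_type)         # expected: "llama"
print(config.num_hidden_layers)  # expected: 40 for a 13B LLaMA2 model
print(config.hidden_size)        # expected: 5120 for a 13B LLaMA2 model
```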

**Base Model** [hyunseoki/ko-en-llama2-13b](https://huggingface.co/hyunseoki/ko-en-llama2-13b)

**Training Dataset**

- We combined open Korean datasets using a mixed strategy (one way such a mix could be assembled is sketched below).
- Kopen-platypus + kaist_cot_deepL
- Training was performed on 8 × A100 80GB GPUs.
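
The exact mixing procedure is not documented in this card; the sketch below shows one plausible way to combine the two sources with the `datasets` library. The dataset IDs are hypothetical placeholders, and the concatenate-and-shuffle strategy is an assumption, not the authors' confirmed recipe.

```python
from datasets import load_dataset, concatenate_datasets

# Hypothetical repository IDs standing in for Kopen-platypus and
# kaist_cot_deepL; the actual sources and ratios are not published here.
platypus = load_dataset("someuser/KOpen-platypus", split="train")
kaist_cot = load_dataset("someuser/kaist-cot-deepl", split="train")

# concatenate_datasets requires identical features, so a real pipeline
# would first normalize both schemas (e.g. instruction/output columns).
mixed = concatenate_datasets([platypus, kaist_cot]).shuffle(seed=42)
print(mixed)
```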

# **Model Benchmark**

## KO-LLM leaderboard

- See the [Open KO-LLM LeaderBoard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard) for results.

# Implementation Code

```python
### GAI-LLM/ko-en-llama2-13b-mixed-v3
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "GAI-LLM/ko-en-llama2-13b-mixed-v3"

# Load in half precision and let layers be placed across available
# devices (device_map='auto' requires the `accelerate` package).
model = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map='auto'
)
tokenizer = AutoTokenizer.from_pretrained(repo)
```
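
A generation call with the loaded model might look like the following; the prompt and sampling parameters are illustrative, not taken from the card.

```python
prompt = "한국의 수도는 어디인가요?"  # "What is the capital of Korea?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Sampling settings chosen for illustration only.
outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```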