Oh committed on
Commit
085088e
·
1 Parent(s): 29cbd64

Update README.md

Files changed (1)
  1. README.md +50 -0
README.md CHANGED
---
license: cc-by-nc-2.0
language:
- ko
library_name: transformers
pipeline_tag: text-generation
---

**The license is `cc-by-nc-2.0`.**

# **GAI-LLM/ko-llama2-toy**

## Model Details

**Model Developers** Donghoon Oh, Hanmin Myung, Eunyoung Kim (SK C&C G.AI Eng)

**Input** Models input text only.

**Output** Models generate text only.

**Model Architecture**
ko-llama2-toy is an auto-regressive language model based on the LLaMA2 transformer architecture.

**Base Model** [hyunseoki/ko-en-llama2-13b](https://huggingface.co/hyunseoki/ko-en-llama2-13b)

**Training Dataset**

We combined open Korean datasets for training.
We trained on 8 × A100 80GB GPUs.

# **Model Benchmark**

## KO-LLM leaderboard
- See the [Open KO-LLM LeaderBoard](https://huggingface.co/spaces/upstage/open-ko-llm-leaderboard) for results.

# Implementation Code
```python
### GAI-LLM/ko-llama2-toy
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "GAI-LLM/ko-llama2-toy"

# Load the model in half precision and let device_map place it on available devices
model = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map='auto'
)

# Load the matching tokenizer from the same repository
tokenizer = AutoTokenizer.from_pretrained(repo)
```
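
Once the model and tokenizer are loaded, generation goes through the standard `transformers` `generate` API. Below is a minimal usage sketch; the prompt, `max_new_tokens`, and sampling settings are illustrative assumptions, not values from the model card.

```python
# Illustrative usage only: prompt and generation settings are assumptions
prompt = "한국의 수도는 어디인가요?"  # "What is the capital of Korea?"

# Tokenize the prompt and move it to the model's device
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Sample a completion from the model
outputs = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=True,
    temperature=0.7,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```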

---