shenzhi-wang committed on
Commit 1393362 · verified · 1 Parent(s): 3f48ebf

Update README.md

Files changed (1):
  1. README.md +112 -3

README.md CHANGED
@@ -1,3 +1,112 @@
- ---
- license: llama3.1
- ---
+ ---
+ license: llama3.1
+ library_name: transformers
+ pipeline_tag: text-generation
+ base_model: meta-llama/Meta-Llama-3.1-70B-Instruct
+ language:
+ - en
+ - zh
+ tags:
+ - llama-factory
+ - orpo
+ ---
+
+ > [!CAUTION]
+ > For optimal performance, we refrain from fine-tuning the model's identity. Thus, inquiries such as "Who are you?" or "Who developed you?" may yield random responses that are not necessarily accurate.
+
+
+ # Updates
+
+ - 🚀🚀🚀 [July 24, 2024] We now introduce [shenzhi-wang/Llama3.1-70B-Chinese-Chat](https://huggingface.co/shenzhi-wang/Llama3.1-70B-Chinese-Chat)! Compared to the original [Meta-Llama-3.1-70B-Instruct model](https://huggingface.co/meta-llama/Meta-Llama-3.1-70B-Instruct), our Llama3.1-70B-Chinese-Chat model significantly reduces the issues of "Chinese questions with English answers" and the mixing of Chinese and English in responses. Trained on a dataset of over 100K preference pairs, the model exhibits significant enhancements, especially in **roleplay**, **function calling**, and **math** capabilities!
+ - 🔥 We provide the official **q4_k_m, q8_0, and f16 GGUF** versions of Llama3.1-70B-Chinese-Chat at https://huggingface.co/shenzhi-wang/Llama3.1-70B-Chinese-Chat/tree/main/gguf!
+
+
+ # Model Summary
+
+ Llama3.1-70B-Chinese-Chat is an instruction-tuned language model built upon Meta-Llama-3.1-70B-Instruct for Chinese & English users, with abilities such as roleplaying & tool use.
+
+ Developers: [Shenzhi Wang](https://shenzhi-wang.netlify.app)\*, [Yaowei Zheng](https://github.com/hiyouga)\*, Guoyin Wang (in.ai), Shiji Song, Gao Huang. (\*: Equal Contribution)
+
+ - License: [Llama-3.1 License](https://huggingface.co/meta-llama/Meta-Llama-3.1-70B/blob/main/LICENSE)
+ - Base Model: Meta-Llama-3.1-70B-Instruct
+ - Model Size: 70.6B
+ - Context length: 128K (reported by the [Meta-Llama-3.1-70B-Instruct model](https://huggingface.co/meta-llama/Meta-Llama-3.1-70B-Instruct); untested for our Chinese model)
+
+ # 1. Introduction
+
+ This is the first model specifically fine-tuned for Chinese & English users based on the [Meta-Llama-3.1-70B-Instruct model](https://huggingface.co/meta-llama/Meta-Llama-3.1-70B-Instruct). The fine-tuning algorithm used is ORPO [1].
+
+ **Compared to the original [Meta-Llama-3.1-70B-Instruct model](https://huggingface.co/meta-llama/Meta-Llama-3.1-70B-Instruct), our Llama3.1-70B-Chinese-Chat model significantly reduces the issues of "Chinese questions with English answers" and the mixing of Chinese and English in responses.**
+
+ [1] Hong, Jiwoo, Noah Lee, and James Thorne. "ORPO: Monolithic Preference Optimization without Reference Model." arXiv preprint arXiv:2403.07691 (2024).
+
+ Training framework: [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory).
+
+ Training details:
+
+ - epochs: 3
+ - learning rate: 3e-6
+ - learning rate scheduler type: cosine
+ - warmup ratio: 0.1
+ - cutoff len (i.e., context length): 8192
+ - ORPO beta (i.e., $\lambda$ in the ORPO paper): 0.05
+ - global batch size: 128
+ - fine-tuning type: full parameters
+ - optimizer: paged_adamw_32bit
+
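+ To illustrate the objective, below is a minimal PyTorch sketch of the odds-ratio term that ORPO [1] adds to the standard SFT loss. This is our simplified illustration, not the LLaMA-Factory implementation; the function name and the length-normalized log-probability inputs are assumptions.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def orpo_odds_ratio_loss(logp_chosen, logp_rejected, beta=0.05):
+     # logp_chosen / logp_rejected: average per-token log-probabilities of the
+     # preferred and rejected responses under the policy model.
+     # log odds(y|x) = log(p / (1 - p)), computed stably via log1p(-exp(logp)).
+     log_odds_chosen = logp_chosen - torch.log1p(-torch.exp(logp_chosen))
+     log_odds_rejected = logp_rejected - torch.log1p(-torch.exp(logp_rejected))
+     # L_OR = -log sigmoid(log-odds ratio); this returns beta * L_OR, which is
+     # added to the SFT loss (beta = 0.05 as listed above).
+     return beta * -F.logsigmoid(log_odds_chosen - log_odds_rejected)
+ ```
+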
+ # 2. Usage
+
+ ## 2.1 Usage of Our BF16 Model
+
+ 1. Please upgrade the `transformers` package (e.g., `pip install "transformers>=4.43.0"`) to ensure it supports Llama3.1 models. The version we are using is `4.43.0`.
+
+ 2. Use the following Python script to download our BF16 model:
+
+ ```python
+ from huggingface_hub import snapshot_download
+
+ # Download our BF16 model without downloading the GGUF files; the returned
+ # path is the local snapshot directory, which can serve as `model_id` below.
+ model_path = snapshot_download(repo_id="shenzhi-wang/Llama3.1-70B-Chinese-Chat", ignore_patterns=["*.gguf"])
+ ```
+
+ 3. Inference with the BF16 model:
+
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_id = "/Your/Local/Path/to/Llama3.1-70B-Chinese-Chat"
+ dtype = torch.bfloat16
+
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(
+     model_id,
+     device_map="cuda",
+     torch_dtype=dtype,
+ )
+
+ chat = [
+     {"role": "user", "content": "写一首关于机器学习的诗。"},  # "Write a poem about machine learning."
+ ]
+ input_ids = tokenizer.apply_chat_template(
+     chat, tokenize=True, add_generation_prompt=True, return_tensors="pt"
+ ).to(model.device)
+
+ outputs = model.generate(
+     input_ids,
+     max_new_tokens=8192,
+     do_sample=True,
+     temperature=0.6,
+     top_p=0.9,
+ )
+ # Decode only the newly generated tokens, skipping the prompt.
+ response = outputs[0][input_ids.shape[-1]:]
+ print(tokenizer.decode(response, skip_special_tokens=True))
+ ```
+
+ ## 2.2 Usage of Our GGUF Models
+
+ 1. Download our GGUF models from the [gguf_models folder](https://huggingface.co/shenzhi-wang/Llama3.1-70B-Chinese-Chat/tree/main/gguf);
+ 2. Use the GGUF models with [LM Studio](https://lmstudio.ai/);
+ 3. You can also follow the instructions from https://github.com/ggerganov/llama.cpp/tree/master#usage to use GGUF models.
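+
+ As a scripted alternative to a GUI, below is a minimal sketch using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) bindings. This is our illustration rather than an official recipe; the GGUF filename is a placeholder for whichever quantization you downloaded.
+
+ ```python
+ from llama_cpp import Llama
+
+ llm = Llama(
+     model_path="path/to/Llama3.1-70B-Chinese-Chat-q4_k_m.gguf",  # placeholder filename
+     n_ctx=8192,       # matches the fine-tuning cutoff length above
+     n_gpu_layers=-1,  # offload all layers to the GPU if one is available
+ )
+
+ messages = [{"role": "user", "content": "写一首关于机器学习的诗。"}]  # "Write a poem about machine learning."
+ output = llm.create_chat_completion(messages=messages, temperature=0.6, top_p=0.9)
+ print(output["choices"][0]["message"]["content"])
+ ```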