Update README.md
README.md
Using Magpie, three-round dialogue data was synthesized for tasks including:

- **Advice-seeking**: Offers thoughtful advice and guidance, helping users address personal, professional, or life challenges.
- **Brainstorming**: Generates ideas and fosters creative thinking, assisting users in exploring possibilities and proposing innovative concepts.
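The Magpie-style synthesis described above can be sketched roughly as follows. This is a minimal illustration of the idea, not the dataset's actual pipeline code: the chat-template strings are assumed to be Qwen-style, and `generate` stands in for a call to an aligned chat model.

```python
# Magpie-style dialogue synthesis (illustrative sketch).
# The model is shown only the pre-query template prefix, so it "fills in"
# a plausible user instruction; the instruction is then answered, and the
# growing transcript is reused to elicit further rounds.

PRE_QUERY = "<|im_start|>user\n"        # assumed Qwen-style template prefix
ASSISTANT = "<|im_start|>assistant\n"
END_TURN = "<|im_end|>\n"

def synthesize_dialogue(generate, rounds=3):
    """generate(prompt) -> str is any text-completion callback (a model call)."""
    messages, prompt = [], ""
    for _ in range(rounds):
        # 1) Elicit a user instruction from the bare user-turn prefix.
        instruction = generate(prompt + PRE_QUERY).strip()
        prompt += PRE_QUERY + instruction + END_TURN
        # 2) Elicit the assistant response to that instruction.
        response = generate(prompt + ASSISTANT).strip()
        prompt += ASSISTANT + response + END_TURN
        messages.append({"role": "user", "content": instruction})
        messages.append({"role": "assistant", "content": response})
    return messages

# Toy stand-in for the model call, just to show the control flow.
dialogue = synthesize_dialogue(lambda p: "ok", rounds=3)
assert len(dialogue) == 6  # three user/assistant rounds
```

With a real model, `generate` would wrap the tokenizer and a sampling call; the three-round data above corresponds to `rounds=3`.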
#### **2. Additional Tasks Referenced from SmolTalk**

Using Magpie, one-round dialogue tasks were synthesized for:
The construction of the **smoltalk-chinese** dataset adheres to strict standards, ensuring data quality and diversity.

#### **Deduplication**

- The **gte-large-zh** model encoded the first instruction in the conversation data. Deduplication was performed based on embedding similarity (threshold set at 0.8), ensuring the diversity of the data.
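A minimal sketch of this similarity-based deduplication, assuming a generic `encode` callback in place of the actual gte-large-zh encoder:

```python
# Embedding-similarity deduplication (illustrative sketch): keep a text only
# if its cosine similarity to every already-kept text is <= the threshold.
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def dedup_by_embedding(texts, encode, threshold=0.8):
    """encode(text) -> vector; in the README this is gte-large-zh on the
    first instruction of each conversation."""
    kept, kept_vecs = [], []
    for t in texts:
        v = encode(t)
        if all(cosine(v, u) <= threshold for u in kept_vecs):
            kept.append(t)
            kept_vecs.append(v)
    return kept

# Toy encoder mapping texts to one of two directions, to show the behavior.
enc = lambda t: [1.0, 0.0] if "a" in t else [0.0, 1.0]
print(dedup_by_embedding(["apple", "apricot", "berry"], enc))  # ['apple', 'berry']
```

In practice the embeddings come from the encoder model and the 0.8 threshold from the README; a production version would use a vector index rather than this quadratic scan.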
#### **Task Type and Text Length Statistics**
To verify the fine-tuning effectiveness of the **smoltalk-chinese** dataset, the following experiments were conducted.

The base model used was **opencsg/csg-wukong-ablation-chinese-fineweb-edu** (a 2B model pretrained on the **chinese-fineweb-edu** dataset).

2. **Fine-tuning Process**

Fine-tuning was performed on the **smoltalk-chinese**, **Magpie-Qwen2-Pro-200K-Chinese**, and **infinity-instruct** datasets (selecting the Chinese portions of the 7M and Gen subsets, approximately 1M entries), with the following settings:

- **Epochs**: 2
- **Learning Rate**: 3e-4
- **Scheduler**: Cosine decay
- **Global Batch Size**: 32
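The cosine-decay schedule named above can be written out explicitly. The README does not specify the training framework, and the per-device/accumulation split below is a hypothetical decomposition of the stated global batch size:

```python
# Stated hyperparameters, with a standard cosine decay from the base
# learning rate to zero over training (illustrative; warmup omitted).
import math

BASE_LR = 3e-4       # Learning Rate
EPOCHS = 2           # Epochs
GLOBAL_BATCH = 32    # Global Batch Size

def cosine_decay_lr(step, total_steps, base_lr=BASE_LR):
    """LR at a given optimizer step: base_lr at step 0, 0 at the last step."""
    progress = step / total_steps
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))

# The global batch is typically per-device batch x grad accumulation x GPUs.
per_device, grad_accum, world_size = 4, 2, 4   # hypothetical split
assert per_device * grad_accum * world_size == GLOBAL_BATCH
```

In a Hugging Face-style trainer these settings would map onto `num_train_epochs`, `learning_rate`, and a `cosine` LR scheduler type, with the batch decomposition chosen to fit available GPUs.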
3. **Evaluation Results**

The model's Chinese conversational capabilities were evaluated on [**Alignbench**](https://github.com/THUDM/AlignBench). Results demonstrated significant advantages for the model fine-tuned on the **smoltalk-chinese** dataset across multiple metrics, confirming the dataset's effectiveness in improving Chinese language model performance.

| Dataset | Professional Skills | Chinese Comprehension | Basic Tasks | Math Calculation | Text Writing | General Q&A | Role Playing | Logical Reasoning | Chinese Reasoning | Chinese Language | Total Score |
| ----------------------------- | ------------------- | --------------------- | ----------- | ---------------- | ------------ | ----------- | ------------ | ----------------- | ----------------- | ---------------- | ----------- |
### **Fine-tuning Process**

Fine-tuning was performed separately on the smoltalk-chinese, Magpie-Qwen2-Pro-200K-Chinese, and infinity-instruct datasets (selecting the Chinese portions of the 7M and Gen subsets, approximately 1M entries), with the following training settings:

- **Epochs**: 2
- **Learning Rate**: 3e-4
- **Scheduler**: Cosine decay
- **Global Batch Size**: 32

The model's Chinese conversational capabilities were evaluated on [**Alignbench**](https://github.com/THUDM/AlignBench). The results show that the model fine-tuned on smoltalk-chinese holds significant advantages across multiple metrics, validating the effectiveness of the smoltalk-chinese dataset in improving the performance of Chinese language models.

| Dataset | Professional Skills | Chinese Comprehension | Basic Tasks | Math Calculation | Text Writing | General Q&A | Role Playing | Logical Reasoning | Chinese Reasoning | Chinese Language | Total Score |
| ----------------------------- | ------------------- | --------------------- | ----------- | ---------------- | ------------ | ----------- | ------------ | ----------------- | ----------------- | ---------------- | ----------- |
|