---
license: apache-2.0
datasets:
- BelleGroup/train_2M_CN
- BelleGroup/train_3.5M_CN
- BelleGroup/train_1M_CN
- BelleGroup/train_0.5M_CN
- BelleGroup/school_math_0.25M
language:
- zh
---
## GoGPT

BLOOM fine-tuned on Chinese instruction data.

![img.png](resources/img.png)

> One training epoch is sufficient; the second and third epochs bring little further improvement.

- 🚀 Diverse instruction data
- 🚀 Filtered, high-quality Chinese data

| Model | Parameters | Model link |
|------------|--------|------|
| gogpt-560m | 560M | 🤗[golaxy/gogpt-560m](https://huggingface.co/golaxy/gogpt-560m) |
| gogpt-3b | 3B | 🤗[golaxy/gogpt-3b](https://huggingface.co/golaxy/gogpt-3b) |
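
The checkpoints above can be loaded with the standard `transformers` auto classes; a minimal sketch (the prompt text and generation settings below are illustrative assumptions, not part of this card):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# gogpt-560m is the smaller of the two checkpoints listed above
model_name = "golaxy/gogpt-560m"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Example Chinese instruction (hypothetical prompt, chosen for illustration)
prompt = "请用一句话介绍一下人工智能。"
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding with a small budget keeps the example fast
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```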
## Test Results
![img.png](resources/test1.png)
![img.png](resources/test2.png)
![img.png](resources/test3.png)
![img.png](resources/test4.png)
![img.png](resources/test5.png)
![img.png](resources/test6.png)
## TODO
- RLHF training
- Add Chinese-English parallel corpora
## Acknowledgements
- [@hz大佬-zero_nlp](https://github.com/yuanzhoulvpi2017/zero_nlp)
- [stanford_alpaca](https://github.com/tatsu-lab/stanford_alpaca)
- [Belle data](https://huggingface.co/BelleGroup)
## [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_golaxy__gogpt-560m).

| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 26.3 |
| ARC (25-shot) | 26.37 |
| HellaSwag (10-shot) | 31.86 |
| MMLU (5-shot) | 25.29 |
| TruthfulQA (0-shot) | 43.12 |
| Winogrande (5-shot) | 50.75 |
| GSM8K (5-shot) | 0.0 |
| DROP (3-shot) | 6.7 |