---
license: apache-2.0
datasets:
- datajuicer/alpaca-cot-en-refined-by-data-juicer
---

This is a reference LLM from [Data-Juicer](https://github.com/alibaba/data-juicer).

The model architecture is LLaMA-7B, and we built it upon the pre-trained [checkpoint](https://huggingface.co/huggyllama/llama-7b).
The model is fine-tuned on 40k English chat samples from Data-Juicer's refined [Alpaca-CoT data](https://github.com/alibaba/data-juicer/blob/main/configs/data_juicer_recipes/alpaca_cot/README.md#refined-alpaca-cot-dataset-meta-info).
It beats LLaMA-7B fine-tuned on 52k Alpaca samples in GPT-4 evaluation.

For more details, please refer to our [paper](https://arxiv.org/abs/2309.02033).

![exp_llama](https://img.alicdn.com/imgextra/i2/O1CN019WtUPP1uhebnDlPR8_!!6000000006069-2-tps-2530-1005.png)
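Models fine-tuned on Alpaca-style chat data are commonly prompted with the Alpaca instruction template. The sketch below shows that conventional template; the exact prompt format used for this checkpoint is an assumption, not something the card specifies.

```python
# Sketch of the conventional Alpaca-style instruction prompt, as widely
# used for models fine-tuned on Alpaca/Alpaca-CoT data. Whether this
# checkpoint was trained with exactly this template is an assumption.

def build_prompt(instruction: str, input_text: str = "") -> str:
    """Format an instruction (and optional input) into an Alpaca-style prompt."""
    if input_text:
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

# The generated string would be passed to the tokenizer, and the model's
# completion read after the "### Response:" marker.
print(build_prompt("List three uses of data cleaning."))
```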