Merge branch 'main' of https://huggingface.co/Xwin-LM/Xwin-Math-7B-V1.0 into main
README.md
<a href="https://huggingface.co/Xwin-LM"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue"></a>
</p>

Xwin-Math is a series of powerful SFT LLMs for math problems based on LLaMA-2.

## 🔥 News
- 💥 [Nov, 2023] The [Xwin-Math-70B-V1.0](https://huggingface.co/Xwin-LM/Xwin-Math-70B-V1.0) model achieves **31.8 pass@1 on the MATH benchmark** and **87.0 pass@1 on the GSM8K benchmark**. This performance places it first amongst all open-source models!
- 💥 [Nov, 2023] The [Xwin-Math-7B-V1.0](https://huggingface.co/Xwin-LM/Xwin-Math-7B-V1.0) and [Xwin-Math-13B-V1.0](https://huggingface.co/Xwin-LM/Xwin-Math-13B-V1.0) models achieve **66.6 and 76.2 pass@1 on the GSM8K benchmark**, ranking top-1 among all LLaMA-2-based 7B and 13B open-source models, respectively!

## ✨ Model Card

| **Model** | MATH | GSM8K |
|:-:|:-:|:-:|
| LEMAv1-7B | 10.0 | 54.7 |
| **Xwin-Math-7B-V1.0** | 17.4 | 66.6 |

We obtain these results using our flexible evaluation strategy. Due to differences in environment and hardware, the test results may differ slightly from those in the report, but we ensure that the evaluation is as accurate and fair as possible.
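For readers unfamiliar with the metric, pass@1 is the percentage of problems for which the model's single (first-sample) answer matches the reference. A minimal sketch, using hypothetical placeholder answers rather than real model output:

```python
# Minimal sketch of the pass@1 metric reported above: the fraction of
# problems whose single first-sample answer matches the reference answer.
# The answer strings below are hypothetical placeholders, not real output.

def pass_at_1(predictions, references):
    """predictions[i] is the model's first answer for problem i."""
    assert len(predictions) == len(references)
    correct = sum(p == r for p, r in zip(predictions, references))
    return 100.0 * correct / len(references)

preds = ["42", "7", "3.5", "-1"]   # hypothetical first-sample answers
golds = ["42", "8", "3.5", "-1"]   # hypothetical reference answers
print(pass_at_1(preds, golds))     # 3 of 4 correct -> 75.0
```

In practice a flexible evaluation strategy, as mentioned above, would normalize answers (e.g. compare them as equivalent mathematical expressions) rather than use exact string equality.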

### Xwin-Math performance on other math benchmarks

Our 70B model shows strong mathematical reasoning capabilities among all open-source models. Note that our model even approaches or surpasses the performance of GPT-35-Turbo on some benchmarks.

| **Model** | SVAMP | ASDiv | NumGlue | Algebra | MAWPS | **Average** |
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
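The **Average** column is presumably the unweighted mean of the five per-benchmark scores. A small sketch with hypothetical placeholder numbers:

```python
# Sketch of the **Average** column, assuming it is the unweighted mean of
# the five benchmark scores. The values below are hypothetical placeholders,
# not results from the table.
scores = {"SVAMP": 80.0, "ASDiv": 70.0, "NumGlue": 60.0,
          "Algebra": 50.0, "MAWPS": 90.0}
average = sum(scores.values()) / len(scores)
print(round(average, 1))  # (80 + 70 + 60 + 50 + 90) / 5 = 70.0
```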