jianqing666
committed on
Update README.md
README.md CHANGED
@@ -6,18 +6,18 @@ language:
 - en
 ---
 
-# <b>MgGPT</b>
+# <b>MgGPT-7B</b>
 
-MgGPT is a fully fine-tuned generative text model collection based on LlaMA2, particularly in the
-Arabic language domain. This is the repository for the version
+MgGPT-7B is a fully fine-tuned generative text model based on LLaMA2, specialized in the
+Arabic language domain. This is the repository for the 7B pre-trained (base) model.
 
 ---
 ## Model Details
-We have released the MgGPT family of large language models, which is a collection of fully fine-tuned generative text models based on LlaMA2,
-## Model Developers
-We are from the King Abdullah University of Science and Technology (KAUST), the Chinese University of Hong Kong, Shenzhen (CUHKSZ), the Shenzhen Research Institute of Big Data (SRIBD), and King AbdulAziz University (KAU).
+We have released the MgGPT family of large language models, a collection of fully fine-tuned generative text models based on LLaMA2 (MgGPT-7B, MgGPT-13B), LLaMA3 (MgGPT-8B, MgGPT-70B), and Qwen2 (MgGPT-32B). The family comes in two main categories: MgGPT and MgGPT-chat, where MgGPT-chat is a version optimized specifically for dialogue applications. In multiple benchmark tests, our models outperform all currently available open-source Arabic dialogue models, and in our human evaluations they reach satisfaction levels in Arabic comparable to some closed-source models, such as ChatGPT.
+<!-- ## Model Developers
+We are from the King Abdullah University of Science and Technology (KAUST), the Chinese University of Hong Kong, Shenzhen (CUHKSZ), the Shenzhen Research Institute of Big Data (SRIBD), and King AbdulAziz University (KAU). -->
 ## Variations
-MgGPT families come in a range of parameter sizes —— 7B
+The MgGPT family comes in a range of parameter sizes (7B, 8B, 13B, 32B, and 70B); each size has a base variant and a -chat variant.
 <!-- ## Paper -->
 <!-- The paper can be accessed at [link](https://huggingface.co/FreedomIntelligence/AceGPT-v1.5-13B-Chat/blob/main/Second_Language_(Arabic)_Acquisition_of_LLMs_via_Progressive_Vocabulary_Expansion.pdf). -->
 ## Input
@@ -26,36 +26,17 @@ Models input text only.
 Models output text only.
 ## Model Evaluation Results
 
-| MgGPT-
-| Jais-
-| ChatGPT 3.5 Turbo | **43.38** | **44.12** | **55.57** | **53.21** | **49.07** |
-
-<!-- | AceGPT-13B-base | 36.60 | 38.74 | 43.76 | <u>42.72</u> | 40.45 | -->
-<!-- | AceGPT-7B-base | 29.73 | 30.95 | 33.45 | 34.42 | 32.14 | -->
-
-Benchmark evaluation on [ArabicMMLU](https://github.com/mbzuai-nlp/ArabicMMLU), and assessed based on its source settings.
-| | STEM | Social Sciences | Humanities | Arabic Language | Other | Average |
-|------------------|------|------|------|------|------|------|
-| Bloomz-7B-base | - | - | - | - | - | - |
-| LLaMA2-7B-base | 33.7 | 32.8 | 33.5 | 28.4 | 36.7 | 33.4 |
-| MgGPT-7B-base | 36.7 | 36.5 | 34.1 | 30.0 | 41.2 | 37.0 |
-| LLaMA2-13B-base | 32.9 | 35.0 | 37.8 | 35.8 | 39.3 | 36.1 |
-| Jais-13B-base | 30.3 | 31.4 | 33.6 | 28.1 | 36.3 | 32.2 |
-| MgGPT-13B-base | 42.4 | <u>45.7</u> | 48.4 | <u>46.3</u> | <u>52.5</u> | <u>47.6</u> |
-| Jais-30B-v1-base | 39.5 | 45.6 | <u>50.5</u> | 34.6 | 49.1 | 44.8 |
-| ChatGPT 3.5 Turbo | **53.8** | **57.0** | **57.5** | **57.6** | **63.8** | **57.7** |
-
-<!-- | AceGPT-7B-base | 35.4 | 35.9 | 36.2 | 31.1 | 41.7 | 36.3 |
-| AceGPT-13B-base | <u>42.7</u> | 45.5 | 48.3 | 42.4 | 50.7 | 46.1 | -->
+| Model | Avg. | MMLU | [ArabicMMLU](https://github.com/mbzuai-nlp/ArabicMMLU) | ARC | EXAMs | ACVA (clean) | ACVA (all) |
+|---------------|--------|----------------|-----------------------|-------|-------|--------------|------------|
+| **MgGPT-7B** | 45.19 | 34.03 | 37.00 | 17.49 | 37.28 | 72.69 | 72.67 |
+| MgGPT-8B | 58.94 | 48.41 | 50.17 | 49.91 | 46.15 | 80.14 | 78.84 |
+| MgGPT-13B | 52.11 | 40.95 | 47.60 | 31.57 | 35.10 | 79.45 | 78.01 |
+| MgGPT-32B | 68.75 | 58.71 | 65.67 | 71.69 | 52.74 | 82.66 | 81.04 |
+| MgGPT-70B | 72.62 | 65.19 | 67.71 | 80.93 | 56.19 | 84.79 | 80.93 |
+| Jais-30B-v3 | 57.02 | 43.42 | 44.47 | 45.56 | 45.70 | 83.39 | 79.51 |
+| GPT-3.5 | 60.71 | 49.07 | 57.70 | 60.24 | 45.93 | 74.45 | 76.88 |
+| GPT-4 | 74.08 | 65.06 | 72.50 | 85.67 | 57.76 | 84.06 | 79.43 |
 
 ## Samples
 #### Sample1(abstract_algebra)
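The card does not define the Avg. column in the updated evaluation table; it appears to be the unweighted mean of the six benchmark scores in each row. A minimal Python check of that reading, with the values copied from the table above:

```python
# Check whether the Avg. column equals the unweighted mean of the six
# benchmark columns (an assumption; the card does not say how Avg. is computed).
scores = {
    # model: ([MMLU, ArabicMMLU, ARC, EXAMs, ACVA clean, ACVA all], reported Avg.)
    "MgGPT-7B":  ([34.03, 37.00, 17.49, 37.28, 72.69, 72.67], 45.19),
    "MgGPT-70B": ([65.19, 67.71, 80.93, 56.19, 84.79, 80.93], 72.62),
    "GPT-4":     ([65.06, 72.50, 85.67, 57.76, 84.06, 79.43], 74.08),
}

for model, (vals, reported) in scores.items():
    mean = sum(vals) / len(vals)
    print(f"{model}: computed {mean:.2f}, reported {reported:.2f}")
    assert abs(mean - reported) < 0.01  # all three rows agree to two decimals
```

The rows checked all agree to two decimals, which supports reading Avg. as a plain mean over the six benchmarks.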
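Since the card states that the models take text as input and produce text as output, a minimal generation sketch with the standard Hugging Face transformers API may help. The repo id `jianqing666/MgGPT-7B` is an assumption based on the committer's namespace, and the prompt is illustrative only:

```python
# Minimal text-generation sketch for the MgGPT-7B base model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id (committer's namespace); replace with the actual one.
model_id = "jianqing666/MgGPT-7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # 7B weights fit on a single 24 GB GPU in fp16
    device_map="auto",
)

# This is the base (pre-trained) variant, not -chat, so prompt with plain
# text continuation rather than a chat template.
prompt = "عاصمة المملكة العربية السعودية هي"  # "The capital of Saudi Arabia is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

For the -chat variants, the same loading code should apply, but generation would normally go through the model's chat template instead of raw continuation.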