hawei committed
Commit ba08dcc · verified · 1 Parent(s): 71470b4

Update README.md

Files changed (1): README.md (+13, -14)
README.md CHANGED
@@ -90,23 +90,22 @@ The plot below highlights the alignment comparison of the model trained with Con
  ### Benchmark Results Table
  The table below summarizes the evaluation results across mathematical tasks and original capabilities for various models and training approaches.

- | **Model** | **Math Tasks** | | | | | **Original Capabilities** | | | | **Overall Avg.** |
- |--------------------------|--------------------|--------------|----------|-----------|----------|---------------------------|---------|---------|-----------|------------------|
- | | **MathHard** | **Math** | **GSM8K**| **Avg.** | | **ARC** | **GPQA**| **MMLU**| **MMLUP** | |
- |--------------------------|--------------------|--------------|----------|-----------|----------|---------------------------|---------|---------|-----------|------------------|
- | Llama3.1-8B-Instruct | 23.7 | 50.9 | 85.6 | 52.1 | | 83.4 | 29.9 | 72.4 | 46.7 | 56.3 |
- | OpenMath2-Llama3.1 | 38.4 | 64.1 | 90.3 | 64.3 | | 45.8 | 1.3 | 4.5 | 19.5 | 38.6 |
- | **Full Param Tune** | **38.5** | **63.7** | 90.2 | **63.9** | | 58.2 | 1.1 | 7.3 | 23.5 | 40.1 |
- | Partial Param Tune | 36.4 | 61.4 | 89.0 | 61.8 | | 66.2 | 6.0 | 25.7 | 30.9 | 45.6 |
- | Stack Expansion | 35.6 | 61.0 | 90.8 | 61.8 | | 69.3 | 18.8 | 61.8 | 43.1 | 57.6 |
- | Hybrid Expansion | 34.4 | 61.1 | 90.1 | 61.5 | | **81.8** | **25.9**| 67.2 | **43.9** | 59.3 |
- | **Control LLM*** | 38.1 | 62.7 | **90.4** | 63.2 | | 79.7 | 25.2 | **68.1**| 43.6 | **60.2** |
+ | **Model** | **Math Tasks** | | | | **Original Capabilities** | | | | **Overall Avg.** |
+ |--------------------------|----------------------------|----------|-----------|----------|-----------------------------|---------|---------|-----------|------------------|
+ | | **MathHard** | **Math** | **GSM8K** | **Avg.** | **ARC** | **GPQA**| **MMLU**| **MMLUP** | |
+ |--------------------------|----------------------------|----------|-----------|----------|-----------------------------|---------|---------|-----------|------------------|
+ | Llama3.1-8B-Instruct | 23.7 | 50.9 | 85.6 | 52.1 | 83.4 | 29.9 | 72.4 | 46.7 | 56.3 |
+ | OpenMath2-Llama3.1 | 38.4 | 64.1 | 90.3 | 64.3 | 45.8 | 1.3 | 4.5 | 19.5 | 38.6 |
+ | **Full Param Tune** | **38.5** | **63.7** | 90.2 | **63.9** | 58.2 | 1.1 | 7.3 | 23.5 | 40.1 |
+ | Partial Param Tune | 36.4 | 61.4 | 89.0 | 61.8 | 66.2 | 6.0 | 25.7 | 30.9 | 45.6 |
+ | Stack Expansion | 35.6 | 61.0 | 90.8 | 61.8 | 69.3 | 18.8 | 61.8 | 43.1 | 57.6 |
+ | Hybrid Expansion | 34.4 | 61.1 | 90.1 | 61.5 | **81.8** | **25.9**| 67.2 | **43.9** | 59.3 |
+ | **Control LLM*** | 38.1 | 62.7 | **90.4** | 63.2 | 79.7 | 25.2 | **68.1**| 43.6 | **60.2** |
+
+ ---

  ### Explanation of Groups
  - **Math Tasks**:
    - Covers **MathHard**, **Math**, and **GSM8K**, measuring the model's performance on mathematical reasoning and problem-solving tasks.
  - **Original Capabilities**:
    - Includes **ARC**, **GPQA**, **MMLU**, and **MMLUP**, reflecting the model’s ability to handle general reasoning and knowledge benchmarks.
-
-
-
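
As a quick sanity check on the grouping above, here is a minimal Python sketch (hypothetical, not part of this repo) that recomputes one row of the table as plain unweighted means. Note that the README's reported **Avg.** values do not exactly match unweighted means of the displayed columns (52.1 vs. 53.4 for the first row), so the actual aggregation presumably weights sub-tasks differently; treat this only as an illustration of which benchmarks fall into each group.

```python
# Hypothetical sanity check, not part of the repository: recompute the group
# averages for one row of the table above as plain unweighted means.
# The README's reported Avg. values differ slightly (52.1 vs. 53.4 here),
# so the actual aggregation likely weights sub-tasks differently.
from statistics import mean

# Llama3.1-8B-Instruct row, copied from the table above.
math_tasks = {"MathHard": 23.7, "Math": 50.9, "GSM8K": 85.6}
original_caps = {"ARC": 83.4, "GPQA": 29.9, "MMLU": 72.4, "MMLUP": 46.7}

math_avg = mean(math_tasks.values())     # 53.4 (table reports 52.1)
orig_avg = mean(original_caps.values())  # 58.1 (not shown in the table)
overall = mean(list(math_tasks.values()) + list(original_caps.values()))  # ~56.1 (table reports 56.3)

print(f"Math avg: {math_avg:.1f}, Original avg: {orig_avg:.1f}, Overall: {overall:.1f}")
```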