Text Generation · Transformers · Safetensors · English · Eval Results · Inference Endpoints
hawei committed · commit 14662b1 · verified · 1 parent: d31d7a6

Add paper link

Files changed (1):
  1. README.md +117 -114
README.md CHANGED
@@ -1,114 +1,117 @@
- ---
- license: llama3.1
- datasets:
- - OpenCoder-LLM/opc-sft-stage1
- - OpenCoder-LLM/opc-sft-stage2
- language:
- - en
- base_model:
- - meta-llama/Llama-3.1-8B-Instruct
- model-index:
- - name: Control-LLM-Llama3.1-8B-OpenCoder8
-   results:
-   - task:
-       type: code-evaluation
-     dataset:
-       type: mixed
-       name: Code Evaluation Dataset
-     metrics:
-     - name: pass_at_1,n=1 (code_instruct)
-       type: pass_at_1
-       value: 0.770508826583593
-       stderr: 0.013547264970313243
-       verified: false
-     - name: pass_at_1,n=1 (humaneval_greedy_instruct)
-       type: pass_at_1
-       value: 0.823170731707317
-       stderr: 0.029883277857485988
-       verified: false
-     - name: pass_at_1,n=1 (humaneval_plus_greedy_instruct)
-       type: pass_at_1
-       value: 0.7621951219512195
-       stderr: 0.033346454086653404
-       verified: false
-     - name: pass_at_1,n=1 (mbpp_plus_0shot_instruct)
-       type: pass_at_1
-       value: 0.7751322751322751
-       stderr: 0.02150209607822914
-       verified: false
-     - name: pass_at_1,n=1 (mbpp_sanitized_0shot_instruct)
-       type: pass_at_1
-       value: 0.7354085603112841
-       stderr: 0.027569713464529938
-       verified: false
-   - task:
-       type: original-capability
-     dataset:
-       type: meta/Llama-3.1-8B-Instruct-evals
-       name: Llama-3.1-8B-Instruct-evals Dataset
-       dataset_path: "meta-llama/llama-3.1-8_b-instruct-evals"
-       dataset_name: "Llama-3.1-8B-Instruct-evals__arc_challenge__details"
-     metrics:
-     - name: exact_match,strict-match (original_capability_instruct)
-       type: exact_match
-       value: 0.5599378769819771
-       stderr: 0.0028491774433443513
-       verified: false
-     - name: exact_match,strict-match (meta_arc_0shot_instruct)
-       type: exact_match
-       value: 0.8094420600858369
-       stderr: 0.011511446994122106
-       verified: false
-     - name: exact_match,strict-match (meta_gpqa_0shot_cot_instruct)
-       type: exact_match
-       value: 0.32589285714285715
-       stderr: 0.02216910313464341
-       verified: false
-     - name: exact_match,strict-match (meta_mmlu_0shot_instruct)
-       type: exact_match
-       value: 0.681241988320752
-       stderr: 0.003932622311434926
-       verified: false
-     - name: exact_match,strict-match (meta_mmlu_pro_5shot_instruct)
-       type: exact_match
-       value: 0.4029255319148936
-       stderr: 0.004471732136513382
-       verified: false
- ---
- # Control-LLM-Llama3.1-8B-OpenCoder8
- This is a fine-tuned version of Llama-3.1-8B-Instruct for coding tasks, trained on the OpenCoder SFT dataset.
-
- ## Evaluation Results
- Here is an overview of the evaluation results and findings:
-
- ### Hybrid Expansion on OpenCoder
- The following diagram illustrates how hybrid expansion works.
-
- ![Catastrophic Forgetting](plots/control_llm_structure_analysis.png)
-
- ### Benchmark Results Table
- The table below summarizes evaluation results across coding tasks and original capabilities.
-
- | **Model** | **MB+** | **MS** | **HE+** | **HE** | **C-Avg** | **ARC** | **GP** | **MLU** | **MLUP** | **O-Avg** | **Overall** |
- |--------------------|---------|---------|---------|---------|-----------|---------|---------|---------|----------|-----------|-------------|
- | Llama3.1-8B-Ins | 70.4 | 67.7 | 66.5 | 70.7 | 69.1 | 83.4 | 29.9 | 72.4 | 46.7 | 60.5 | 64.8 |
- | OpenCoder-8B-Ins | 81.2 | 76.3 | 78.0 | 82.3 | 79.5 | 8.2 | 25.4 | 37.4 | 11.3 | 24.6 | 52.1 |
- | Full Param Tune | 75.1 | 69.6 | 71.3 | 76.8 | 73.3 | 24.4 | 21.9 | 43.0 | 19.2 | 31.5 | 52.4 |
- | Partial Param Tune | 75.7 | 71.6 | 74.4 | 79.3 | 75.0 | 70.2 | 28.1 | 60.7 | 32.4 | 48.3 | 61.7 |
- | Stack Expansion | 77.2 | 72.8 | 73.2 | 78.7 | 75.6 | 80.0 | 26.3 | 66.6 | 38.2 | 54.2 | 64.9 |
- | **ControlLLM-Hybrid** | 77.5 | 73.5 | **76.2**| **82.3**| 77.1 | 80.9 | **32.6**| 68.1 | 40.3 | 56.0 | 66.6 |
-
- ---
-
- ### Explanation
- - **MB+**: MBPP Plus
- - **MS**: MBPP Sanitized
- - **HE+**: HumanEval Plus
- - **HE**: HumanEval
- - **C-Avg**: Coding, size-weighted average across MB+, MS, HE+, and HE
- - **ARC**: ARC benchmark
- - **GP**: GPQA benchmark
- - **MLU**: MMLU (Massive Multitask Language Understanding)
- - **MLUP**: MMLU Pro
- - **O-Avg**: Original capability, size-weighted average across ARC, GPQA, MMLU, and MMLU Pro
- - **Overall**: Combined average across all tasks
+ ---
+ license: llama3.1
+ datasets:
+ - OpenCoder-LLM/opc-sft-stage1
+ - OpenCoder-LLM/opc-sft-stage2
+ language:
+ - en
+ base_model:
+ - meta-llama/Llama-3.1-8B-Instruct
+ model-index:
+ - name: Control-LLM-Llama3.1-8B-OpenCoder8
+   results:
+   - task:
+       type: code-evaluation
+     dataset:
+       type: mixed
+       name: Code Evaluation Dataset
+     metrics:
+     - name: pass_at_1,n=1 (code_instruct)
+       type: pass_at_1
+       value: 0.770508826583593
+       stderr: 0.013547264970313243
+       verified: false
+     - name: pass_at_1,n=1 (humaneval_greedy_instruct)
+       type: pass_at_1
+       value: 0.823170731707317
+       stderr: 0.029883277857485988
+       verified: false
+     - name: pass_at_1,n=1 (humaneval_plus_greedy_instruct)
+       type: pass_at_1
+       value: 0.7621951219512195
+       stderr: 0.033346454086653404
+       verified: false
+     - name: pass_at_1,n=1 (mbpp_plus_0shot_instruct)
+       type: pass_at_1
+       value: 0.7751322751322751
+       stderr: 0.02150209607822914
+       verified: false
+     - name: pass_at_1,n=1 (mbpp_sanitized_0shot_instruct)
+       type: pass_at_1
+       value: 0.7354085603112841
+       stderr: 0.027569713464529938
+       verified: false
+   - task:
+       type: original-capability
+     dataset:
+       type: meta/Llama-3.1-8B-Instruct-evals
+       name: Llama-3.1-8B-Instruct-evals Dataset
+       dataset_path: "meta-llama/llama-3.1-8_b-instruct-evals"
+       dataset_name: "Llama-3.1-8B-Instruct-evals__arc_challenge__details"
+     metrics:
+     - name: exact_match,strict-match (original_capability_instruct)
+       type: exact_match
+       value: 0.5599378769819771
+       stderr: 0.0028491774433443513
+       verified: false
+     - name: exact_match,strict-match (meta_arc_0shot_instruct)
+       type: exact_match
+       value: 0.8094420600858369
+       stderr: 0.011511446994122106
+       verified: false
+     - name: exact_match,strict-match (meta_gpqa_0shot_cot_instruct)
+       type: exact_match
+       value: 0.32589285714285715
+       stderr: 0.02216910313464341
+       verified: false
+     - name: exact_match,strict-match (meta_mmlu_0shot_instruct)
+       type: exact_match
+       value: 0.681241988320752
+       stderr: 0.003932622311434926
+       verified: false
+     - name: exact_match,strict-match (meta_mmlu_pro_5shot_instruct)
+       type: exact_match
+       value: 0.4029255319148936
+       stderr: 0.004471732136513382
+       verified: false
+ ---
+ # Control-LLM-Llama3.1-8B-OpenCoder8
+ This is a fine-tuned version of Llama-3.1-8B-Instruct for coding tasks, trained on the OpenCoder SFT dataset.
+
+ ## Linked Paper
+ This model accompanies the paper [Control-LLM](https://arxiv.org/abs/2501.10979).
+
+ ## Evaluation Results
+ Here is an overview of the evaluation results and findings:
+
+ ### Hybrid Expansion on OpenCoder
+ The following diagram illustrates how hybrid expansion works.
+
+ ![Catastrophic Forgetting](plots/control_llm_structure_analysis.png)
+
+ ### Benchmark Results Table
+ The table below summarizes evaluation results across coding tasks and original capabilities.
+
+ | **Model** | **MB+** | **MS** | **HE+** | **HE** | **C-Avg** | **ARC** | **GP** | **MLU** | **MLUP** | **O-Avg** | **Overall** |
+ |--------------------|---------|---------|---------|---------|-----------|---------|---------|---------|----------|-----------|-------------|
+ | Llama3.1-8B-Ins | 70.4 | 67.7 | 66.5 | 70.7 | 69.1 | 83.4 | 29.9 | 72.4 | 46.7 | 60.5 | 64.8 |
+ | OpenCoder-8B-Ins | 81.2 | 76.3 | 78.0 | 82.3 | 79.5 | 8.2 | 25.4 | 37.4 | 11.3 | 24.6 | 52.1 |
+ | Full Param Tune | 75.1 | 69.6 | 71.3 | 76.8 | 73.3 | 24.4 | 21.9 | 43.0 | 19.2 | 31.5 | 52.4 |
+ | Partial Param Tune | 75.7 | 71.6 | 74.4 | 79.3 | 75.0 | 70.2 | 28.1 | 60.7 | 32.4 | 48.3 | 61.7 |
+ | Stack Expansion | 77.2 | 72.8 | 73.2 | 78.7 | 75.6 | 80.0 | 26.3 | 66.6 | 38.2 | 54.2 | 64.9 |
+ | **ControlLLM-Hybrid** | 77.5 | 73.5 | **76.2**| **82.3**| 77.1 | 80.9 | **32.6**| 68.1 | 40.3 | 56.0 | 66.6 |
+
+ ---
+
+ ### Explanation
+ - **MB+**: MBPP Plus
+ - **MS**: MBPP Sanitized
+ - **HE+**: HumanEval Plus
+ - **HE**: HumanEval
+ - **C-Avg**: Coding, size-weighted average across MB+, MS, HE+, and HE
+ - **ARC**: ARC benchmark
+ - **GP**: GPQA benchmark
+ - **MLU**: MMLU (Massive Multitask Language Understanding)
+ - **MLUP**: MMLU Pro
+ - **O-Avg**: Original capability, size-weighted average across ARC, GPQA, MMLU, and MMLU Pro
+ - **Overall**: Combined average across all tasks
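
The card above describes a standard Llama-3.1-style causal LM, so it should load through `transformers`. A minimal usage sketch; the `repo_id` below is an assumption for illustration, so substitute the id shown on the actual model page:

```python
# Minimal usage sketch. repo_id is hypothetical; use the real model page id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Control-LLM-Llama3.1-8B-OpenCoder8"  # assumed id, check the model page

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Llama-3.1-Instruct checkpoints ship a chat template, so apply it directly.
messages = [{"role": "user", "content": "Write a Python function that reverses a linked list."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```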
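The `pass_at_1,n=1` metrics in the front matter mean one greedy sample per problem, scored by whether it passes all unit tests; the reported value is the fraction of problems solved. For reference, a sketch of the standard unbiased pass@k estimator that such harnesses generalize (with n = k = 1 it reduces to the per-problem 0/1 score):

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: n samples drawn, c of them correct.

    Returns the probability that at least one of k randomly chosen
    samples (out of the n) passes. With n == k == 1 it reduces to c,
    i.e. 0.0 or 1.0 for a single greedy sample.
    """
    if n - c < k:
        return 1.0
    return 1.0 - float(np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# The benchmark score is the mean over problems, e.g.
# mean(pass_at_k(1, c_i, 1)) matches the pass_at_1,n=1 values above.
```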
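The original-capability rows use `exact_match,strict-match`, which scores 1 only when the answer extracted from the model output equals the reference exactly. A minimal sketch, assuming a simple whitespace-normalizing comparison (real harnesses add task-specific answer extraction on top):

```python
def exact_match_strict(prediction: str, target: str) -> float:
    """Score 1.0 iff the extracted prediction equals the target string."""
    return float(prediction.strip() == target.strip())

assert exact_match_strict(" B ", "B") == 1.0
assert exact_match_strict("B.", "B") == 0.0
```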
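C-Avg and O-Avg in the table are size-weighted averages: each benchmark contributes in proportion to its problem count rather than equally, so larger suites dominate the aggregate. A sketch with hypothetical problem counts (the real weights come from the actual dataset sizes):

```python
def size_weighted_average(scores: dict[str, float], sizes: dict[str, int]) -> float:
    """Average benchmark scores weighted by problem counts."""
    total = sum(sizes[name] for name in scores)
    return sum(scores[name] * sizes[name] for name in scores) / total

# Hypothetical problem counts, for illustration only.
scores = {"MB+": 77.5, "MS": 73.5, "HE+": 76.2, "HE": 82.3}
sizes = {"MB+": 378, "MS": 257, "HE+": 164, "HE": 164}
print(f"C-Avg = {size_weighted_average(scores, sizes):.1f}")
```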