andthattoo
committed on
Update README.md
README.md CHANGED
@@ -225,6 +225,6 @@ and the MMLU-Pro and DPAB results:
| Benchmark Name | Qwen2.5-Coder-7B-Instruct | Dria-Agent-α-7B |
|----------------|---------------------------|-----------------|
| MMLU-Pro | 45.6 ([Self Reported](https://arxiv.org/pdf/2409.12186)) | TBD |
- | DPAB (Pythonic, Strict) |
+ | DPAB (Pythonic, Strict) | 30.0 | 51.0 |

**\*Note:** On the MMLU-Pro benchmark, the model tends to answer many test cases in STEM-related fields (math, physics, chemistry, etc.) with Pythonic function calls, which the evaluation framework and scripts in the [GitHub repository](https://github.com/TIGER-AI-Lab/MMLU-Pro/tree/main) do not capture. We haven't modified the evaluation script, and leave that for future iterations of this model. However, a qualitative analysis of the model's responses suggests its score would increase rather than suffer a ~6% decrease.
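To make this failure mode concrete, here is a minimal sketch of why a Pythonic answer gets scored as a miss. It assumes a regex-based extractor in the spirit of the MMLU-Pro evaluation script; the pattern, function name, and sample responses below are illustrative, not the repository's actual code.

```python
import re

# Illustrative approximation of an MMLU-Pro-style answer extractor:
# it looks for a phrase like "the answer is (C)" in the model output.
ANSWER_RE = re.compile(r"answer is \(?([A-J])\)?")

def extract_choice(response: str) -> str | None:
    """Return the multiple-choice letter if the response matches the
    expected phrasing, else None (which is scored as incorrect)."""
    match = ANSWER_RE.search(response)
    return match.group(1) if match else None

# A response phrased the way the extractor expects is captured:
print(extract_choice("Working through the algebra, the answer is (C)."))  # -> C

# A Pythonic function-calling response, even if it computes the right
# quantity, yields no letter and is marked wrong by the unmodified script:
pythonic_response = """
def solve():
    return 0.5 * 9.81 * 3**2  # hypothetical physics computation

result = solve()
"""
print(extract_choice(pythonic_response))  # -> None
```

Under this assumption, every Pythonic response counts as a zero regardless of correctness, which is why a qualitative pass over those responses can suggest a higher true score than the reported number.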