Adding Evaluation Results (#1)
- Adding Evaluation Results (db78c934c75e4ec7acf5c5f4a6bff5e6b64def09)
Co-authored-by: Open LLM Leaderboard PR Bot <[email protected]>
README.md CHANGED
```diff
@@ -3,12 +3,12 @@ license: apache-2.0
 library_name: peft
 tags:
 - generated_from_trainer
+datasets:
+- sr5434/CodegebraGPT_data
 base_model: upstage/SOLAR-10.7B-v1.0
 model-index:
 - name: outputs
   results: []
-datasets:
-- sr5434/CodegebraGPT_data
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -49,4 +49,17 @@ The following hyperparameters were used during training:
 - Transformers 4.36.2
 - Pytorch 2.0.1
 - Datasets 2.16.0
-- Tokenizers 0.15.0
+- Tokenizers 0.15.0
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_sr5434__CodegebraGPT-10b)
+
+| Metric                          |Value|
+|---------------------------------|----:|
+|Avg.                             |62.68|
+|AI2 Reasoning Challenge (25-Shot)|59.81|
+|HellaSwag (10-Shot)              |83.42|
+|MMLU (5-Shot)                    |60.20|
+|TruthfulQA (0-shot)              |46.57|
+|Winogrande (5-shot)              |80.98|
+|GSM8k (5-shot)                   |45.11|
+
```