Open LLM Leaderboard Evaluation Results
Detailed results can be found on the Open LLM Leaderboard.
| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 61.93 |
| AI2 Reasoning Challenge (25-shot) | 60.75 |
| HellaSwag (10-shot)               | 84.64 |
| MMLU (5-shot)                     | 59.53 |
| TruthfulQA (0-shot)               | 63.31 |
| Winogrande (5-shot)               | 77.90 |
| GSM8k (5-shot)                    | 25.47 |
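To try the model locally, the snippet below is a minimal sketch using the standard transformers API with the model ID from this card; the prompt, precision, and generation settings are illustrative assumptions rather than settings used for the leaderboard runs.

```python
# Minimal sketch: load vicgalle/zephyr-7b-truthy with transformers.
# Precision, device placement, and generation settings are assumptions, not from this card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "vicgalle/zephyr-7b-truthy"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: half precision to fit a single GPU
    device_map="auto",           # requires the accelerate package
)

# Build a Zephyr-style chat prompt via the tokenizer's chat template.
messages = [{"role": "user", "content": "What is the capital of France?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=64, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```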
Evaluation results
- AI2 Reasoning Challenge (25-shot), test set, normalized accuracy: 60.750 (Open LLM Leaderboard)
- HellaSwag (10-shot), validation set, normalized accuracy: 84.640 (Open LLM Leaderboard)
- MMLU (5-shot), test set, accuracy: 59.530 (Open LLM Leaderboard)
- TruthfulQA (0-shot), validation set, mc2: 63.310 (Open LLM Leaderboard)
- Winogrande (5-shot), validation set, accuracy: 77.900 (Open LLM Leaderboard)
- GSM8k (5-shot), test set, accuracy: 25.470 (Open LLM Leaderboard)