Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric | Value |
|----------------------------------|-------|
| Avg. | 61.93 |
| AI2 Reasoning Challenge (25-shot)| 60.75 |
| HellaSwag (10-shot) | 84.64 |
| MMLU (5-shot) | 59.53 |
| TruthfulQA (0-shot) | 63.31 |
| Winogrande (5-shot) | 77.90 |
| GSM8k (5-shot) | 25.47 |
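
For reference, a minimal sketch of loading the model for inference, assuming the Hugging Face transformers and accelerate libraries are installed; the model ID and FP16 dtype are taken from this card, and the zephyr-style chat template is an assumption:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "vicgalle/zephyr-7b-truthy"

# Load tokenizer and model in FP16 (the dtype listed for this checkpoint).
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Assumption: the model uses a zephyr-style chat template.
messages = [{"role": "user", "content": "What is the capital of France?"}]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)

outputs = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```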