# Howdy-8B-LINEAR

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
## Merge Details

### Merge Method

This model was merged using the linear merge method.
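Linear merging combines the input models as a weighted sum of their corresponding parameter tensors. The sketch below illustrates the idea only; it is not mergekit's actual implementation (which also handles sharded checkpoints, tokenizer assembly, and per-tensor options), and the `linear_merge` helper and in-memory state dicts are assumptions for the example.

```python
import torch


def linear_merge(state_dicts, weights, normalize=False):
    """Combine models as a weighted sum of corresponding tensors.

    With normalize=True the weights are rescaled to sum to 1 (a true
    weighted average); this card's config sets `normalize: false`, so the
    weights are used as given.
    """
    if normalize:
        total = sum(weights)
        weights = [w / total for w in weights]

    merged = {}
    for name in state_dicts[0]:
        # Accumulate in float32 for stability, then cast to the target dtype.
        acc = sum(w * sd[name].float() for w, sd in zip(weights, state_dicts))
        merged[name] = acc.to(torch.bfloat16)  # dtype: bfloat16, as in the config below
    return merged
```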
### Models Merged

The following models were included in the merge:

- DreadPoor/Heart_Stolen-8B-Model_Stock
- DreadPoor/Zelus_V2-8B-Model_Stock + grimjim/Llama-3-Instruct-abliteration-LoRA-8B (the `+` denotes a LoRA applied on top of that model before merging)
- DreadPoor/Aspire-8B-model_stock
### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: DreadPoor/Aspire-8B-model_stock
    parameters:
      weight: 1.0
  - model: DreadPoor/Zelus_V2-8B-Model_Stock+grimjim/Llama-3-Instruct-abliteration-LoRA-8B
    parameters:
      weight: 1.0
  - model: DreadPoor/Heart_Stolen-8B-Model_Stock
    parameters:
      weight: 1.0
merge_method: linear
normalize: false
int8_mask: true
dtype: bfloat16
```
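The merge can be reproduced by passing a configuration like the one above to mergekit's `mergekit-yaml` command. Once merged, the result loads like any other Llama-3-8B checkpoint. Below is a minimal usage sketch with 🤗 Transformers; it assumes the model is published as `DreadPoor/Howdy-8B-LINEAR` and ships a Llama-3-Instruct chat template, and the prompt and generation settings are purely illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DreadPoor/Howdy-8B-LINEAR"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge dtype
    device_map="auto",
)

messages = [{"role": "user", "content": "Howdy! Introduce yourself in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```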
## Open LLM Leaderboard Evaluation Results
Detailed results can be found here! Summarized results can be found here!
| Metric              | Value (%) |
|---------------------|----------:|
| Average             |     29.58 |
| IFEval (0-Shot)     |     73.78 |
| BBH (3-Shot)        |     34.23 |
| MATH Lvl 5 (4-Shot) |     17.37 |
| GPQA (0-shot)       |      8.61 |
| MuSR (0-shot)       |     12.32 |
| MMLU-PRO (5-shot)   |     31.18 |
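The Average row is the unweighted mean of the six benchmark scores: (73.78 + 34.23 + 17.37 + 8.61 + 12.32 + 31.18) / 6 ≈ 29.58.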