# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details

### Merge Method
This model was merged using the SLERP merge method.
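SLERP (spherical linear interpolation) blends each pair of weight tensors along the great-circle arc between them rather than along a straight line, so the interpolated weights keep a sensible magnitude even when the two models disagree in direction. Below is a minimal NumPy sketch of the formula; the function name, the `eps` fallback, and treating a whole tensor as one flattened vector are illustrative simplifications, not mergekit's actual implementation.

```python
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation: t=0 returns v0, t=1 returns v1."""
    # Measure the angle between the two tensors on the unit sphere.
    u0 = v0.ravel() / np.linalg.norm(v0)
    u1 = v1.ravel() / np.linalg.norm(v1)
    omega = np.arccos(np.clip(np.dot(u0, u1), -1.0, 1.0))

    # Nearly parallel tensors: fall back to plain linear interpolation.
    if np.sin(omega) < eps:
        return (1.0 - t) * v0 + t * v1

    # Otherwise follow the great-circle arc between v0 and v1.
    s0 = np.sin((1.0 - t) * omega) / np.sin(omega)
    s1 = np.sin(t * omega) / np.sin(omega)
    return s0 * v0 + s1 * v1
```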
### Models Merged
The following models were included in the merge:

* bamec66557/MNRP_0.5
* bamec66557/MISCHIEVOUS-12B
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: bamec66557/MNRP_0.5
        layer_range: [0, 40]
      - model: bamec66557/MISCHIEVOUS-12B
        layer_range: [0, 40]
    parameters:
      t:
        - filter: self_attn
          value: [0.2, 0.4, 0.6, 0.8, 1.0]
        - filter: mlp
          value: [0.8, 0.6, 0.4, 0.2, 0.0]
        - filter: layer_norm
          value: [0.5, 0.5, 0.5, 0.5, 0.5, 0.5]
        - value: 0.7

merge_method: slerp
base_model: bamec66557/MISCHIEVOUS-12B
dtype: bfloat16

regularisation:
  - method: l2_norm
    scale: 0.01

postprocessing:
  - operation: smoothing
    kernel_size: 3
  - operation: normalise
```
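In the `t` block above, t is the interpolation factor: t = 0 keeps the base model (bamec66557/MISCHIEVOUS-12B) and t = 1 takes the other model (bamec66557/MNRP_0.5). A list of values is a gradient that mergekit spreads across the 40-layer slice, so self-attention drifts toward MNRP_0.5 with depth while the MLPs drift back toward the base model; layer norms stay at an even 0.5 blend, and the trailing `value: 0.7` applies to every remaining tensor. Here is a small sketch of that expansion, assuming plain linear interpolation across layer depth (the helper name is hypothetical):

```python
import numpy as np

def expand_gradient(gradient: list[float], num_layers: int) -> np.ndarray:
    """Hypothetical helper: stretch a short gradient of t-values over a
    layer slice by linear interpolation, yielding one t per layer."""
    anchors = np.linspace(0.0, 1.0, num=len(gradient))
    depths = np.linspace(0.0, 1.0, num=num_layers)
    return np.interp(depths, anchors, gradient)

self_attn_t = expand_gradient([0.2, 0.4, 0.6, 0.8, 1.0], num_layers=40)
mlp_t = expand_gradient([0.8, 0.6, 0.4, 0.2, 0.0], num_layers=40)
# Layer 0: attention t ≈ 0.2 (mostly the base model's weights);
# layer 39: attention t ≈ 1.0 (entirely from MNRP_0.5), with the MLPs reversed.
```

To reproduce the merge, the configuration can be saved as, say, `config.yaml` and passed to mergekit's `mergekit-yaml` entry point: `mergekit-yaml config.yaml ./output-model-directory`.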
## Open LLM Leaderboard Evaluation Results
Detailed results can be found here.
| Metric              | Value |
|---------------------|------:|
| Avg.                | 22.52 |
| IFEval (0-Shot)     | 36.36 |
| BBH (3-Shot)        | 34.36 |
| MATH Lvl 5 (4-Shot) | 12.76 |
| GPQA (0-shot)       | 10.40 |
| MuSR (0-shot)       | 11.54 |
| MMLU-PRO (5-shot)   | 29.71 |