Gemma-7B-slerp
This model is a merge of the Gemma 7B base and instruct models, using the SLERP merging method. Gemma-7B-slerp was created with mergekit from the following models:
- google/gemma-7b-it
- google/gemma-7b
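For intuition, SLERP (spherical linear interpolation) blends two weight tensors along the arc between them rather than along a straight line, which tends to preserve weight magnitudes better than plain averaging. A minimal NumPy sketch of the operation (illustrative only; the function name and epsilon are assumptions, and mergekit's actual implementation handles more edge cases):

```python
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherically interpolate between two weight tensors with factor t in [0, 1]."""
    # Measure the angle between the two tensors via their flattened directions.
    u0 = v0.ravel() / (np.linalg.norm(v0) + eps)
    u1 = v1.ravel() / (np.linalg.norm(v1) + eps)
    theta = np.arccos(np.clip(np.dot(u0, u1), -1.0, 1.0))
    # Nearly parallel tensors: fall back to plain linear interpolation.
    if abs(np.sin(theta)) < eps:
        return (1.0 - t) * v0 + t * v1
    # Weight each endpoint by the sine of its share of the angle.
    w0 = np.sin((1.0 - t) * theta) / np.sin(theta)
    w1 = np.sin(t * theta) / np.sin(theta)
    return w0 * v0 + w1 * v1
```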
🏆 Evaluation
Nous
Gemma-7B-slerp's results on Nous' benchmark suite (evaluation performed using LLM AutoEval).
| Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench |
|---|---|---|---|---|---|
| arcee-ai/Gemma-7B-slerp | 34.14 | 23.86 | 36.55 | 46.22 | 29.94 |
🧩 Configuration
Slerp YAML Config
```yaml
slices:
  - sources:
      - model: google/gemma-7b-it
        layer_range: [0, 28]
      - model: google/gemma-7b
        layer_range: [0, 28]
merge_method: slerp
base_model: google/gemma-7b
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
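In this config, `t` is the interpolation factor: per mergekit's slerp convention, `t` = 0 keeps the base model's weights (google/gemma-7b) and `t` = 1 takes the other source's (google/gemma-7b-it). The five-element lists define a gradient across the 28-layer range for self-attention and MLP tensors, and the trailing `value: 0.5` is the default for all remaining tensors. The merge itself is run by passing this config to mergekit's CLI, e.g. `mergekit-yaml config.yaml ./gemma-7b-slerp`.

Once merged (or when pulling the published checkpoint), the model loads like any other Gemma checkpoint. A minimal usage sketch with transformers (assumes the transformers and accelerate packages are installed; the prompt and generation settings are placeholders):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "arcee-ai/Gemma-7B-slerp"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # picks up the bfloat16 weights produced by the merge
    device_map="auto",    # requires accelerate; spreads layers over available devices
)

inputs = tokenizer("Explain spherical interpolation in one sentence.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```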