---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
---

# Tiramisu-12B-v0.1k

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method
This model was merged with the linear [DARE](https://arxiv.org/abs/2311.03099) merge method, using D:/MLnonsense/models/flammenai_Mahou-1.3-mistral-nemo-12B as the base model.
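For intuition, DARE (Drop And REscale) sparsifies each model's delta against the base before combining: each delta element is dropped with probability 1 - density, the survivors are rescaled by 1/density so the expected update is preserved, and the rescaled deltas are summed into the base with per-model weights. Below is a minimal NumPy sketch of that idea, assuming dict-of-arrays state dicts and scalar per-model weights; the names are illustrative, not mergekit's API, and mergekit additionally applies the per-layer weight gradients shown in the configuration further down.

```python
# Minimal sketch of DARE-linear merging, NOT mergekit's actual code.
import numpy as np

def dare_linear_merge(base, finetuned, weights, density=0.5, seed=0):
    """Merge task vectors into `base` via drop-and-rescale (DARE).

    base:      dict of parameter name -> ndarray (the base model)
    finetuned: list of dicts with the same keys (models to merge)
    weights:   per-model scalar mixing weights (illustrative only)
    density:   fraction of each delta kept; the rest is dropped
    """
    rng = np.random.default_rng(seed)
    merged = {}
    for name, base_param in base.items():
        acc = base_param.astype(np.float64)
        for model, w in zip(finetuned, weights):
            delta = model[name] - base_param            # task vector
            keep = rng.random(delta.shape) < density    # random drop mask
            delta = np.where(keep, delta / density, 0)  # rescale survivors
            acc = acc + w * delta                       # linear combination
        merged[name] = acc
    return merged

# Toy usage with two fake "models" on a zero base:
base = {"w": np.zeros(4)}
tuned = [{"w": np.ones(4)}, {"w": np.full(4, 2.0)}]
print(dare_linear_merge(base, tuned, weights=[0.3, 0.2]))
```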
### Models Merged

The following models were included in the merge:

* D:/MLnonsense/models/nbeerbower_mistral-nemo-gutenberg-12B-v4
* D:/MLnonsense/models/Sao10K_MN-12B-Lyra-v1
* D:/MLnonsense/models/Gryphe_Pantheon-RP-1.5-12b-Nemo

### Configuration

The following YAML configuration was used to produce this model:
```yaml
base_model: D:/MLnonsense/models/flammenai_Mahou-1.3-mistral-nemo-12B
dtype: bfloat16
merge_method: dare_linear
slices:
- sources:
  - layer_range: [0, 40]
    model: D:/MLnonsense/models/Gryphe_Pantheon-RP-1.5-12b-Nemo
    parameters:
      weight: [0.45, 0.35, 0.35, 0.2, 0.2]
  - layer_range: [0, 40]
    model: D:/MLnonsense/models/Sao10K_MN-12B-Lyra-v1
    parameters:
      weight: [0.25, 0.3, 0.35, 0.3, 0.2]
  - layer_range: [0, 40]
    model: D:/MLnonsense/models/nbeerbower_mistral-nemo-gutenberg-12B-v4
    parameters:
      weight:
      - filter: mlp
        value: [0.1, 0.2, 0.1, 0.4, 0.5]
      - value: [0.1, 0.2, 0.1, 0.2, 0.2]
  - layer_range: [0, 40]
    model: D:/MLnonsense/models/flammenai_Mahou-1.3-mistral-nemo-12B
    parameters:
      weight:
      - filter: mlp
        value: [0.2, 0.15, 0.2, 0.1, 0.1]
      - value: [0.2, 0.15, 0.2, 0.3, 0.4]
tokenizer_source: union
```
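A note on the weight lists: mergekit treats a list of values as a gradient interpolated across the layer range, so [0.45, 0.35, 0.35, 0.2, 0.2] weights Pantheon-RP more heavily in early layers and less toward the final layers. Entries under `filter: mlp` apply only to MLP tensors, with the unfiltered value covering the rest, and `tokenizer_source: union` builds the output tokenizer from the union of all input models' vocabularies. As a rough sketch of how a five-point gradient could expand to one weight per layer (a hypothetical helper using linear interpolation, not mergekit's actual code):

```python
# Hypothetical expansion of a weight gradient, assuming even anchor
# spacing and linear interpolation; mergekit handles this internally.
import numpy as np

def expand_gradient(anchors, num_layers=40):
    xs = np.linspace(0, 1, num=len(anchors))       # anchor positions
    layer_pos = np.linspace(0, 1, num=num_layers)  # one point per layer
    return np.interp(layer_pos, xs, anchors)

per_layer = expand_gradient([0.45, 0.35, 0.35, 0.2, 0.2])
print(per_layer[:5])  # weights applied to the first few layers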
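Since the card declares `library_name: transformers`, the merged model should load like any other causal LM checkpoint. A hypothetical usage sketch follows; the path is a placeholder for wherever the merged weights are saved or uploaded.

```python
# Hypothetical usage sketch; replace the placeholder path with the
# actual local directory or Hub repo ID for this merge.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "path/to/Tiramisu-12B-v0.1k"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("Once upon a time,", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```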