---
base_model:
- fblgit/cybertron-v4-qw7B-MGS
- bunnycore/QandoraExp-7B-Persona
- Qwen/Qwen2.5-7B
- rombodawg/Rombos-LLM-V2.5-Qwen-7b
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the della merge method, with [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) as the base model.
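To give an intuition for how the per-model `weight`, `density`, and `lambda` parameters in the configuration below interact, here is a minimal, simplified Python sketch of a DELLA-style merge of a single parameter tensor. It is an illustration under simplifying assumptions (deterministic magnitude pruning instead of the stochastic pruning described in the DELLA paper), not mergekit's actual implementation; the function name is hypothetical.

```python
# Simplified illustration of a DELLA-style merge for one tensor.
# NOT mergekit's implementation; deterministic pruning is an assumption made for clarity.
import torch

def della_merge_tensor(
    base: torch.Tensor,
    finetuned: list[torch.Tensor],
    weights: list[float],
    densities: list[float],
    lam: float = 0.9,
) -> torch.Tensor:
    """Merge one parameter tensor from several fine-tunes back into the base."""
    merged_delta = torch.zeros_like(base)
    for ft, w, d in zip(finetuned, weights, densities):
        delta = ft - base                          # task vector relative to the base model
        k = max(1, int(d * delta.numel()))         # keep only a `density` fraction of entries
        threshold = delta.abs().flatten().kthvalue(delta.numel() - k + 1).values
        mask = delta.abs() >= threshold            # magnitude-based pruning (simplified)
        pruned = torch.where(mask, delta, torch.zeros_like(delta)) / max(d, 1e-8)  # rescale kept entries
        merged_delta += w * pruned                 # weighted sum of the pruned task vectors
    return base + lam * merged_delta               # `lambda` scales the combined delta
```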
### Models Merged
The following models were included in the merge:
* [fblgit/cybertron-v4-qw7B-MGS](https://huggingface.co/fblgit/cybertron-v4-qw7B-MGS)
* [bunnycore/QandoraExp-7B-Persona](https://huggingface.co/bunnycore/QandoraExp-7B-Persona)
* [rombodawg/Rombos-LLM-V2.5-Qwen-7b](https://huggingface.co/rombodawg/Rombos-LLM-V2.5-Qwen-7b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: bunnycore/QandoraExp-7B-Persona
    parameters:
      weight: 0.2
      density: 0.2
  - model: rombodawg/Rombos-LLM-V2.5-Qwen-7b
    parameters:
      weight: 0.4
      density: 0.4
      lambda: 0.9
  - model: fblgit/cybertron-v4-qw7B-MGS
    parameters:
      weight: 0.4
      density: 0.4
      lambda: 0.9
merge_method: della
base_model: Qwen/Qwen2.5-7B
parameters:
  weight: 1
  density: 1
  lambda: 0.9
  int8_mask: true
dtype: bfloat16
```
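The merge can be reproduced by saving this configuration to a file and running it with mergekit's `mergekit-yaml` CLI. The resulting checkpoint can then be loaded with `transformers`, as in the sketch below; the repository id is a placeholder, not the published name of this model.

```python
# Hypothetical usage sketch: loading the merged model with transformers.
# "your-username/merged-qwen2.5-7b" is a placeholder repository id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/merged-qwen2.5-7b"  # replace with the actual repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used for the merge
    device_map="auto",
)

prompt = "Model merging is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```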