---
tags:
- merge
- mergekit
- lazymergekit
- not-for-all-audiences
- nsfw
- rp
- roleplay
- role-play
license: llama3
language:
- en
pipeline_tag: text-generation
base_model:
- Sao10K/L3-8B-Stheno-v3.2
- ChaoticNeutrals/Poppy_Porpoise-1.0-L3-8B
- Nitral-AI/Hathor_Stable-v0.2-L3-8B
- NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
- Hastagaras/Jamet-8B-L3-MK.V-Blackroot
- openlynn/Llama-3-Soliloquy-8B-v2
- NousResearch/Meta-Llama-3-8B-Instruct
- turboderp/llama3-turbcat-instruct-8b
- VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct
- TIGER-Lab/MAmmoTH2-8B-Plus
- jondurbin/bagel-8b-v1.0
- abacusai/Llama-3-Smaug-8B
- failspy/Meta-Llama-3-8B-Instruct-abliterated-v3
- AwanLLM/Awanllm-Llama-3-8B-Cumulus-v1.0
- lodrick-the-lafted/Limon-8B
- vicgalle/Configurable-Llama-3-8B-v0.3
- Undi95/Llama3-Unholy-8B-OAS
- Undi95/Unholy-8B-DPO-OAS
---

![](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)

# QuantFactory/L3-Scrambled-Eggs-On-Toast-8B-GGUF
This is a quantized version of [Casual-Autopsy/L3-Scrambled-Eggs-On-Toast-8B](https://huggingface.co/Casual-Autopsy/L3-Scrambled-Eggs-On-Toast-8B) created using llama.cpp.
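
To try a quant locally, one of the GGUF files can be fetched straight from this repo and loaded with llama.cpp bindings. A minimal sketch using `huggingface_hub` and `llama-cpp-python`; the quant filename below is an assumption, so check the repo's file list for the exact name:

```python
# Minimal sketch: download one GGUF quant from this repo and load it.
# The filename is a guess; pick the quant you actually want from the repo files.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="QuantFactory/L3-Scrambled-Eggs-On-Toast-8B-GGUF",
    filename="L3-Scrambled-Eggs-On-Toast-8B.Q4_K_M.gguf",  # hypothetical quant name
)

llm = Llama(model_path=model_path, n_ctx=8192)  # load with llama-cpp-python
```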

# Original Model Card


# L3-Scrambled-Eggs-On-Toast-8B

**L3-Scrambled-Eggs-On-Toast-8B** is a role-play merge of **18 models**, built in **11 merging steps.**

The goal is to create a model that is both creative and smart by using weight gradients.
Each model gets its own section of the gradient in which it carries a larger weight to promote intelligence, while the remaining models in that section keep small weights to promote creativity.
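
To make the gradient idea concrete, here is a hypothetical illustration (not part of the recipe) of how a 5-point weight list such as `[0.33, 0.0825, 0.0825, 0.0825, 0.0825]` would spread over the 32 hidden layers of a Llama-3-8B model, assuming mergekit's usual piecewise-linear interpolation of list-valued parameters:

```python
# Hypothetical illustration: expand a 5-point weight gradient across 32 layers,
# assuming list-valued mergekit parameters are interpolated linearly over layers.
import numpy as np

gradient = [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
n_layers = 32  # hidden layers in a Llama-3-8B model

anchors = np.linspace(0, n_layers - 1, num=len(gradient))
per_layer = np.interp(np.arange(n_layers), anchors, gradient)

for layer, w in enumerate(per_layer):
    print(f"layer {layer:2d}: weight ~ {w:.3f}")
```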

The following models were used as inspiration:
* [grimjim/kunoichi-lemon-royale-v3-32K-7B](https://huggingface.co/grimjim/kunoichi-lemon-royale-v3-32K-7B)
* [invisietch/EtherealRainbow-v0.3-8B](https://huggingface.co/invisietch/EtherealRainbow-v0.3-8B)
* [PJMixers/LLaMa-3-CursedStock-v2.0-8B](https://huggingface.co/PJMixers/LLaMa-3-CursedStock-v2.0-8B)

## Instruct Format

Llama 3
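
The model expects the standard Llama 3 Instruct template. A minimal sketch of rendering a prompt with a tokenizer that ships that template; the repo id and system prompt below are just examples:

```python
# Minimal sketch: render a Llama 3 Instruct prompt via the chat template.
# Any Llama-3-8B-Instruct tokenizer with the stock template produces the same layout.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("NousResearch/Meta-Llama-3-8B-Instruct")

messages = [
    {"role": "system", "content": "You are {{char}}, role-playing with {{user}}."},
    {"role": "user", "content": "*waves* Hey, what's cooking?"},
]

# add_generation_prompt appends the assistant header so the model starts its reply.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```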

## Settings/Presets

### Instruct/Context

Virt-io's [SillyTavern Presets](https://huggingface.co/Virt-io/SillyTavern-Presets/tree/main/Prompts/LLAMA-3/v1.9) are recommended.

### Sampler Settings

Here are the currently recommended settings for **more creativity**:
```
Top K: 60
Min P: 0.035
Rep Pen: 1.05
Rep Pen Range: 2048
Pres Pen: 0.15
Smoothing Factor: 0.25
Dyna Temp:
  Min Temp: 0.75
  Max Temp: 1.5
  Expo: 0.85
```

If you want **more adherence**, the **Naive preset** is recommended instead.
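
For back-ends outside SillyTavern, most of these samplers map directly onto llama-cpp-python parameters. Dynamic temperature and smoothing factor are frontend-side options, so this sketch approximates them with a fixed temperature; the model path is hypothetical:

```python
# Rough mapping of the sampler settings above onto llama-cpp-python.
# Dynamic temperature and smoothing factor are not available here; a fixed
# temperature stands in for the 0.75-1.5 dynamic range.
from llama_cpp import Llama

llm = Llama(model_path="L3-Scrambled-Eggs-On-Toast-8B.Q4_K_M.gguf", n_ctx=8192)  # hypothetical path

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in character."}],
    top_k=60,
    min_p=0.035,
    repeat_penalty=1.05,
    presence_penalty=0.15,
    temperature=1.0,
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```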

## Quants

Weighted Quants by:
- [Lewdiculous](https://huggingface.co/LWDCLS/L3-Scrambled-Eggs-On-Toast-8B-GGUF-IQ-Imatrix-Request)
- [mradermacher](https://huggingface.co/mradermacher/L3-Scrambled-Eggs-On-Toast-8B-i1-GGUF)

Static Quants by:
- [mradermacher](https://huggingface.co/mradermacher/L3-Scrambled-Eggs-On-Toast-8B-GGUF)

# Secret Sauce

## Models Used

L3-Scrambled-Eggs-On-Toast-8B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Sao10K/L3-8B-Stheno-v3.2](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2)
* [ChaoticNeutrals/Poppy_Porpoise-1.0-L3-8B](https://huggingface.co/ChaoticNeutrals/Poppy_Porpoise-1.0-L3-8B)
* [Nitral-AI/Hathor_Stable-v0.2-L3-8B](https://huggingface.co/Nitral-AI/Hathor_Stable-v0.2-L3-8B)
* [NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS](https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS)
* [Hastagaras/Jamet-8B-L3-MK.V-Blackroot](https://huggingface.co/Hastagaras/Jamet-8B-L3-MK.V-Blackroot)
* [openlynn/Llama-3-Soliloquy-8B-v2](https://huggingface.co/openlynn/Llama-3-Soliloquy-8B-v2)
* [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct)
* [turboderp/llama3-turbcat-instruct-8b](https://huggingface.co/turboderp/llama3-turbcat-instruct-8b)
* [VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct](https://huggingface.co/VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct)
* [TIGER-Lab/MAmmoTH2-8B-Plus](https://huggingface.co/TIGER-Lab/MAmmoTH2-8B-Plus)
* [jondurbin/bagel-8b-v1.0](https://huggingface.co/jondurbin/bagel-8b-v1.0)
* [abacusai/Llama-3-Smaug-8B](https://huggingface.co/abacusai/Llama-3-Smaug-8B)
* [failspy/Meta-Llama-3-8B-Instruct-abliterated-v3](https://huggingface.co/failspy/Meta-Llama-3-8B-Instruct-abliterated-v3)
* [AwanLLM/Awanllm-Llama-3-8B-Cumulus-v1.0](https://huggingface.co/AwanLLM/Awanllm-Llama-3-8B-Cumulus-v1.0)
* [lodrick-the-lafted/Limon-8B](https://huggingface.co/lodrick-the-lafted/Limon-8B)
* [vicgalle/Configurable-Llama-3-8B-v0.3](https://huggingface.co/vicgalle/Configurable-Llama-3-8B-v0.3)
* [Undi95/Llama3-Unholy-8B-OAS](https://huggingface.co/Undi95/Llama3-Unholy-8B-OAS)
* [Undi95/Unholy-8B-DPO-OAS](https://huggingface.co/Undi95/Unholy-8B-DPO-OAS)

## YAML Configs Used

The following YAML configs were used to make this model.
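
LazyMergekit is a thin wrapper around mergekit, so any of the configs below can also be run directly with the mergekit CLI. A minimal sketch, where the config filename and output directory are hypothetical:

```python
# Minimal sketch: run one of the configs below with the mergekit CLI.
# "eggs-and-bread-rp-pt1.yaml" would contain the first YAML block below.
import subprocess

subprocess.run(
    [
        "mergekit-yaml",
        "eggs-and-bread-rp-pt1.yaml",  # hypothetical file holding a config from this card
        "./Eggs-and-Bread-RP-pt.1",    # output directory for the merged model
    ],
    check=True,
)
```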

### Eggs-and-Bread-RP-pt.1

```yaml
models:
  - model: Sao10K/L3-8B-Stheno-v3.2
  - model: ChaoticNeutrals/Poppy_Porpoise-1.0-L3-8B
    parameters:
      density: 0.5
      weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
  - model: Nitral-AI/Hathor_Stable-v0.2-L3-8B
    parameters:
      density: 0.5
      weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
  - model: NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
  - model: Hastagaras/Jamet-8B-L3-MK.V-Blackroot
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
  - model: openlynn/Llama-3-Soliloquy-8B-v2
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
merge_method: dare_ties
base_model: Sao10K/L3-8B-Stheno-v3.2
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16
```

### Eggs-and-Bread-RP-pt.2

```yaml
models:
  - model: Sao10K/L3-8B-Stheno-v3.2
  - model: ChaoticNeutrals/Poppy_Porpoise-1.0-L3-8B
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
  - model: Nitral-AI/Hathor_Stable-v0.2-L3-8B
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
  - model: NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
  - model: Hastagaras/Jamet-8B-L3-MK.V-Blackroot
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
  - model: openlynn/Llama-3-Soliloquy-8B-v2
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
merge_method: breadcrumbs_ties
base_model: Sao10K/L3-8B-Stheno-v3.2
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16
```

### Eggs-and-Bread-RP

```yaml
models:
  - model: Casual-Autopsy/Eggs-and-Bread-RP-pt.1
  - model: Casual-Autopsy/Eggs-and-Bread-RP-pt.2
merge_method: slerp
base_model: Casual-Autopsy/Eggs-and-Bread-RP-pt.1
parameters:
  t:
    - filter: self_attn
      value: [0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5]
    - filter: mlp
      value: [0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5]
    - value: 0.5
dtype: bfloat16
```

### Eggs-and-Bread-IQ-pt.1

```yaml
models:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
  - model: turboderp/llama3-turbcat-instruct-8b
    parameters:
      density: 0.5
      weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
  - model: VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct
    parameters:
      density: 0.5
      weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
  - model: TIGER-Lab/MAmmoTH2-8B-Plus
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
  - model: jondurbin/bagel-8b-v1.0
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
  - model: abacusai/Llama-3-Smaug-8B
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
merge_method: dare_ties
base_model: NousResearch/Meta-Llama-3-8B-Instruct
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16
```

### Eggs-and-Bread-IQ-pt.2

```yaml
models:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
  - model: turboderp/llama3-turbcat-instruct-8b
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
  - model: VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
  - model: TIGER-Lab/MAmmoTH2-8B-Plus
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
  - model: jondurbin/bagel-8b-v1.0
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
  - model: abacusai/Llama-3-Smaug-8B
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
merge_method: breadcrumbs_ties
base_model: NousResearch/Meta-Llama-3-8B-Instruct
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16
```

### Eggs-and-Bread-IQ

```yaml
models:
  - model: Casual-Autopsy/Eggs-and-Bread-IQ-pt.1
  - model: Casual-Autopsy/Eggs-and-Bread-IQ-pt.2
merge_method: slerp
base_model: Casual-Autopsy/Eggs-and-Bread-IQ-pt.1
parameters:
  t:
    - filter: self_attn
      value: [0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5]
    - filter: mlp
      value: [0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5]
    - value: 0.5
dtype: bfloat16
```

### Eggs-and-Bread-Uncen-pt.1

```yaml
models:
  - model: failspy/Meta-Llama-3-8B-Instruct-abliterated-v3
  - model: AwanLLM/Awanllm-Llama-3-8B-Cumulus-v1.0
    parameters:
      density: 0.5
      weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
  - model: lodrick-the-lafted/Limon-8B
    parameters:
      density: 0.5
      weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
  - model: vicgalle/Configurable-Llama-3-8B-v0.3
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
  - model: Undi95/Llama3-Unholy-8B-OAS
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
  - model: Undi95/Unholy-8B-DPO-OAS
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
merge_method: dare_ties
base_model: failspy/Meta-Llama-3-8B-Instruct-abliterated-v3
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16
```

### Eggs-and-Bread-Uncen-pt.2

```yaml
models:
  - model: failspy/Meta-Llama-3-8B-Instruct-abliterated-v3
  - model: AwanLLM/Awanllm-Llama-3-8B-Cumulus-v1.0
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
  - model: lodrick-the-lafted/Limon-8B
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
  - model: vicgalle/Configurable-Llama-3-8B-v0.3
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
  - model: Undi95/Llama3-Unholy-8B-OAS
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
  - model: Undi95/Unholy-8B-DPO-OAS
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
merge_method: breadcrumbs_ties
base_model: failspy/Meta-Llama-3-8B-Instruct-abliterated-v3
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16
```

### Eggs-and-Bread-Uncen

```yaml
models:
  - model: Casual-Autopsy/Eggs-and-Bread-Uncen-pt.1
  - model: Casual-Autopsy/Eggs-and-Bread-Uncen-pt.2
merge_method: slerp
base_model: Casual-Autopsy/Eggs-and-Bread-Uncen-pt.1
parameters:
  t:
    - filter: self_attn
      value: [0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5]
    - filter: mlp
      value: [0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5]
    - value: 0.5
dtype: bfloat16
```

### Scrambled-Eggs-On-Toast-1

```yaml
models:
  - model: Casual-Autopsy/Eggs-and-Bread-RP
  - model: Casual-Autopsy/Eggs-and-Bread-Uncen
merge_method: slerp
base_model: Casual-Autopsy/Eggs-and-Bread-RP
parameters:
  t:
    - value: [0.1, 0.15, 0.2, 0.4, 0.6, 0.4, 0.2, 0.15, 0.1]
dtype: bfloat16
```

### L3-Scrambled-Eggs-On-Toast-8B

```yaml
models:
  - model: Casual-Autopsy/Scrambled-Eggs-On-Toast-1
  - model: Casual-Autopsy/Eggs-and-Bread-IQ
merge_method: slerp
base_model: Casual-Autopsy/Scrambled-Eggs-On-Toast-1
parameters:
  t:
    - value: [0.7, 0.5, 0.3, 0.25, 0.2, 0.25, 0.3, 0.5, 0.7]
dtype: bfloat16
```