Update README.md
README.md CHANGED
@@ -28,6 +28,202 @@ The following models were included in the merge:
The following YAML configuration was used to produce this model:

```yaml
slices:
- sources:
  - model: v000000/HaloMaidRP-v1.32-15B-Sapphire

@@ -46,3 +242,4 @@ parameters:

dtype: bfloat16

```

The following YAML configuration was used to produce this model:

# Recipe

```yaml
#1. Take a collection of RP and storywriter 8B models and merge them.
models:
  - model: Nitral-AI/Hathor_Respawn-L3-8B-v0.8
    parameters:
      weight: 0.6
  - model: maldv/llama-3-fantasy-writer-8b
    parameters:
      weight: 0.1
  - model: NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
    parameters:
      weight: 0.4
  - model: tokyotech-llm/Llama-3-Swallow-8B-v0.1
    parameters:
      weight: 0.15
merge_method: linear
dtype: float32
```
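
Each numbered block in this recipe is a separate mergekit config, run one after the other (e.g. `mergekit-yaml step1.yml ./output-dir`), not a single YAML file. Conceptually, the linear merge above is just a weighted average of the parent models' tensors. A minimal sketch of that idea (illustrative only: the state dicts are assumed to be loaded already, and mergekit itself takes care of sharded checkpoints, tokenizers, and weight normalization):

```python
# Sketch of merge_method: linear -- a normalized weighted average of tensors.
import torch

def linear_merge(state_dicts, weights):
    """Weighted average of matching tensors across several state dicts."""
    total = sum(weights)
    merged = {}
    for name in state_dicts[0]:
        merged[name] = sum(w * sd[name].float() for sd, w in zip(state_dicts, weights)) / total
    return merged

# Step 1 above, conceptually:
# rpmix = linear_merge([hathor, fantasy_writer, lumimaid, swallow], [0.6, 0.1, 0.4, 0.15])
```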

```yaml
#2. Use task arithmetic to learn the vector directions from the RP mix onto Llama-3-SPPO, imo the smartest 8B model; this way we preserve Meta's multi-billion-dollar tuning.
models:
  - model: UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3
    parameters:
      weight: 1.0
  - model: rpmix-part1
    parameters:
      weight: 0.35
merge_method: task_arithmetic
base_model: UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3
parameters:
  normalize: false
dtype: float32
```
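
Task arithmetic treats each fine-tune as a task vector (its weights minus the base model's weights) and adds scaled task vectors back onto the base. A rough sketch of what the config above computes (illustrative; `rpmix-part1` is the output of step 1, and since the base model's own task vector is zero, only the RP-mix direction at weight 0.35 actually moves the weights):

```python
# Sketch of merge_method: task_arithmetic -- base + sum_i w_i * (model_i - base).
import torch

def task_arithmetic(base_sd, model_sds, weights):
    merged = {}
    for name, base_t in base_sd.items():
        delta = sum(w * (sd[name].float() - base_t.float())
                    for sd, w in zip(model_sds, weights))
        merged[name] = base_t.float() + delta
    return merged
```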

```yaml
#2.5. Apply abliteration to the previous model.
models:
  - model: sppo-rpmix-part2+grimjim/Llama-3-Instruct-abliteration-LoRA-8B
    parameters:
      weight: 1.0
merge_method: linear
dtype: float32
```

```yaml
#3. Create an abliterated version of Stheno-v3.2-8B, since we will use it in the 15B frankenmerge.
models:
  - model: Sao10K/L3-8B-Stheno-v3.2+grimjim/Llama-3-Instruct-abliteration-LoRA-8B
    parameters:
      weight: 1.0
merge_method: linear
dtype: float32
```
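
The `model+LoRA` syntax used above (and in later steps) tells mergekit to apply the LoRA adapter to the model before merging, which is how the abliteration LoRA gets baked in. Outside of mergekit the same thing can be done with PEFT; a minimal sketch using the model and adapter IDs from step 3:

```python
# Sketch: fold a LoRA adapter into its base model, as the `model+LoRA` syntax does.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("Sao10K/L3-8B-Stheno-v3.2")
with_lora = PeftModel.from_pretrained(base, "grimjim/Llama-3-Instruct-abliteration-LoRA-8B")
merged = with_lora.merge_and_unload()   # bake the LoRA deltas into the base weights
merged.save_pretrained("./L3-8B-Stheno-v3.2-abliterated")
```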

```yaml
#4. Make an inverted version of a Llama-3-15B frankenmerge with the previous models.
slices:
  - sources:
      - layer_range: [0, 24]
        model: v000000/SwallowMaid-8B-L3-SPPO-abliterated
  - sources:
      - layer_range: [8, 24]
        model: v000000/L3-8B-Stheno-v3.2-abliterated
  - sources:
      - layer_range: [8, 24]
        model: v000000/SwallowMaid-8B-L3-SPPO-abliterated
  - sources:
      - layer_range: [24, 32]
        model: v000000/L3-8B-Stheno-v3.2-abliterated
merge_method: passthrough
dtype: float32
```

```yaml
#5. Make a non-inverted version of a Llama-3-15B frankenmerge with the previous models.
slices:
  - sources:
      - layer_range: [0, 24]
        model: v000000/L3-8B-Stheno-v3.2-abliterated
  - sources:
      - layer_range: [8, 24]
        model: v000000/SwallowMaid-8B-L3-SPPO-abliterated
  - sources:
      - layer_range: [8, 24]
        model: v000000/L3-8B-Stheno-v3.2-abliterated
  - sources:
      - layer_range: [24, 32]
        model: v000000/SwallowMaid-8B-L3-SPPO-abliterated
merge_method: passthrough
dtype: float32
```
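
A passthrough merge does no averaging at all; it simply stacks the listed layer ranges into one deeper model. The four slices above give 24 + 16 + 16 + 8 = 64 layers, which is where the roughly 15B parameter count and the `[0, 64]` layer ranges in the later slerp steps come from:

```python
# Sanity check: the stacked slices above produce a 64-layer (~15B) Llama.
ranges = [(0, 24), (8, 24), (8, 24), (24, 32)]
print(sum(end - start for start, end in ranges))  # 64
```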

```yaml
#6. Test the previous two models, figure out which is better at the input/output stages and which is better in the middle, then slerp them together in a V-shape.
models:
  - model: v000000/Sthalomaid-15B-Inverted-abliterated
  - model: v000000/Sthalomaid-15B-abliterated
merge_method: slerp
base_model: v000000/Sthalomaid-15B-abliterated
parameters:
  t: [0, 0.5, 1, 0.5, 0]
dtype: float32
```
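
slerp interpolates between two models along the sphere rather than in a straight line, and `t: [0, 0.5, 1, 0.5, 0]` is a gradient across the layer stack: t = 0 keeps the base model, t = 1 takes the other model, so the middle layers come from one parent and the ends from the other, the V-shape described in the comment. A per-tensor sketch of the interpolation itself (treating each tensor as one flat vector is a simplification of the real implementation):

```python
# Sketch of spherical linear interpolation (slerp) between two weight tensors.
import torch

def slerp(t, a, b, eps=1e-8):
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_n = a_flat / (a_flat.norm() + eps)
    b_n = b_flat / (b_flat.norm() + eps)
    omega = torch.arccos(torch.clamp(a_n @ b_n, -1.0, 1.0))  # angle between the tensors
    so = torch.sin(omega)
    if so.abs() < eps:                                        # near-parallel: fall back to lerp
        out = (1 - t) * a_flat + t * b_flat
    else:
        out = (torch.sin((1 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return out.reshape(a.shape)
```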

```yaml
#7. Apply the Blackroot LoRA in a model_stock merge of the different models so far.
models:
  - model: v000000/Sthalomaid-15B-Inverted-abliterated+Blackroot/Llama-3-8B-Abomination-LORA
  - model: v000000/Sthalomaid-15B-abliterated+Blackroot/Llama-3-8B-Abomination-LORA
  - model: v000000/Sthalomaid-V-15B-abliterated+Blackroot/Llama-3-8B-Abomination-LORA #seems to work on 15b
  - model: v000000/Sthalomaid-15B-Inverted-abliterated
  - model: v000000/Sthalomaid-15B-abliterated
  - model: v000000/Sthalomaid-V-15B-abliterated
merge_method: model_stock
base_model: v000000/Sthalomaid-V-15B-abliterated
dtype: float32
```
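
model_stock averages the fine-tuned models and then interpolates back toward the base model, with a ratio derived from how well the fine-tunes' task vectors agree: the more they disagree, the closer the result stays to the base. A rough two-model, per-tensor sketch based on my reading of the Model Stock paper (mergekit's implementation handles more than two models, so treat the formula here as an illustration rather than the exact code):

```python
# Rough per-tensor sketch of Model Stock with two fine-tunes and a base.
import torch

def model_stock_pair(base, m1, m2, eps=1e-8):
    d1 = (m1 - base).flatten().float()
    d2 = (m2 - base).flatten().float()
    cos = (d1 @ d2) / (d1.norm() * d2.norm() + eps)  # agreement of the two task vectors
    t = 2 * cos / (1 + cos)                          # interpolation ratio from the paper
    avg = (m1.float() + m2.float()) / 2
    return t * avg + (1 - t) * base.float()          # pull the average back toward the base
```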

```yaml
#7.5. Create another 15B frankenmerge from just SPPO and abliterate it, so we can merge in a smarter model.
slices:
  - sources:
      - layer_range: [0, 24]
        model: UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3+grimjim/Llama-3-Instruct-abliteration-LoRA-8B
  - sources:
      - layer_range: [8, 24]
        model: UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3+grimjim/Llama-3-Instruct-abliteration-LoRA-8B
  - sources:
      - layer_range: [8, 24]
        model: UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3+grimjim/Llama-3-Instruct-abliteration-LoRA-8B
  - sources:
      - layer_range: [24, 32]
        model: UCLA-AGI/Llama-3-Instruct-8B-SPPO-Iter3+grimjim/Llama-3-Instruct-abliteration-LoRA-8B
merge_method: passthrough
dtype: float32
```

```yaml
#8. Learn task vectors from the previous Blackroot model_stock model onto the smarter SPPO-Iter 15B model to preserve RP capabilities.
models:
  - model: v000000/HaloMaidRP-V-15B-Blackroot-v0.1
    parameters:
      weight: 1.3
merge_method: task_arithmetic
base_model: v000000/Llama-3-Instruct-15B-SPPO-Iter3-abliterated
parameters:
  normalize: false
```
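
Note the weight of 1.3: since task arithmetic computes base + w * (model - base), a weight above 1 extrapolates slightly past the Blackroot model in that direction instead of interpolating between the two. For a single hypothetical scalar parameter:

```python
# With weight > 1, task arithmetic extrapolates past the donor model.
base, donor = 0.20, 0.30             # one hypothetical scalar weight in each model
print(base + 1.3 * (donor - base))   # 0.33 -> slightly beyond the donor's 0.30
```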

```yaml
#9. Merge the Blackroot model_stock 15B and the SPPO 15B together with a smooth gradient.
slices:
  - sources:
      - model: v000000/HaloMaidRP-V-15B-Blackroot-v0.1
        layer_range: [0, 64]
      - model: v000000/HaloMaidRP-V-15B-Blackroot-v0.223
        layer_range: [0, 64]
merge_method: slerp
base_model: v000000/HaloMaidRP-V-15B-Blackroot-v0.223
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1, 0.1, 0.6, 0.3, 0.8, 0.5]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0, 0.3, 0.4, 0.7, 0.2, 0.5]
    - value: 0.5
dtype: bfloat16 #Oops, accidentally switched to half precision; do this too, it seems important
```
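
The per-filter `t` lists are gradients: mergekit stretches each list across the layer stack, so self_attn tensors lean toward one parent and mlp tensors toward the other at different depths, while all remaining tensors use the flat 0.5. A small sketch of how a 10-point gradient could map onto 64 layers (plain linear interpolation is my assumption about how the gradient is expanded):

```python
# Sketch: expand a 10-point t gradient across a 64-layer model.
import numpy as np

t_self_attn = [0, 0.5, 0.3, 0.7, 1, 0.1, 0.6, 0.3, 0.8, 0.5]
layers = 64
anchors = np.linspace(0, layers - 1, num=len(t_self_attn))
per_layer_t = np.interp(np.arange(layers), anchors, t_self_attn)
print(per_layer_t[:5])  # t values used for self_attn tensors in the first few layers
```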

```yaml
#10. Heal the layers. o_proj and down_proj seem to be the only tensors that determine adaptation to the new architecture, so we can take them from an already finetuned 15B;
#this way we don't need to finetune our new frankenmerge at all to get full performance. Why reinvent the wheel?
#sapphire
models:
  - model: v000000/HaloMaidRP1_component
merge_method: slerp
base_model: ZeusLabs/L3-Aethora-15B-V2
parameters:
  t:
    - filter: o_proj
      value: 0
    - filter: down_proj
      value: 0
    - value: 1
dtype: bfloat16
```
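
With `base_model: ZeusLabs/L3-Aethora-15B-V2`, a filter value of 0 means "take that tensor from the base", so this config keeps every tensor from the frankenmerge component except o_proj and down_proj, which are copied wholesale from an already trained 15B. The same idea written out directly (a sketch; the key substrings are the usual Llama tensor names, but check them against the real checkpoints):

```python
# Sketch of the healing trick: copy o_proj/down_proj from a trained 15B donor,
# keep every other tensor from the frankenmerge component.
def heal(component_sd, donor_sd, patterns=("o_proj", "down_proj")):
    healed = {}
    for name, tensor in component_sd.items():
        if any(p in name for p in patterns):
            healed[name] = donor_sd[name].clone()   # adapted tensors from the donor
        else:
            healed[name] = tensor                   # the merge's own weights
    return healed
```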

```yaml
#11. Go back to an earlier checkpoint from before the Blackroot model_stock merge that had interesting, very depraved results, and heal it the same way as in step 10.
#ruby
models:
  - model: v000000/component____HaloMaidRP-V
merge_method: slerp
base_model: ZeusLabs/L3-Aethora-15B-V2
parameters:
  t:
    - filter: o_proj
      value: 0
    - filter: down_proj
      value: 0
    - value: 1
dtype: bfloat16
```

```yaml
#12. Then we merge these two (sapphire and ruby) together to get a semi-depraved, smart model.
#emerald
slices:
- sources:
  - model: v000000/HaloMaidRP-v1.32-15B-Sapphire
# [...]
dtype: bfloat16

```