DavidAU committed
Commit 86ddcdc · verified · 1 Parent(s): ad71ba8

Update README.md

Files changed (1)
  1. README.md +62 -58
README.md CHANGED
@@ -1,58 +1,62 @@
- ---
- base_model: []
- library_name: transformers
- tags:
- - mergekit
- - merge
-
- ---
- L3-SMB-Grand-Story-WTFrack-Instruct-18.05B
-
- This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
-
- ## Merge Details
- ### Merge Method
-
- This model was merged using the passthrough merge method.
-
- ### Models Merged
-
- The following models were included in the merge:
- * j:/Grand-Story-V1-F32-Ultra-Quality-16_5B
- * G:/7B/Meta-Llama-3-8B-Instruct
-
- ### Configuration
-
- The following YAML configuration was used to produce this model:
-
- ```yaml
- slices:
- - sources:
-   - model: G:/7B/Meta-Llama-3-8B-Instruct
-     layer_range: [0, 12]
- - sources:
-   - model: j:/Grand-Story-V1-F32-Ultra-Quality-16_5B
-     layer_range: [6, 70]
- - sources:
-   - model: G:/7B/Meta-Llama-3-8B-Instruct
-     layer_range: [31,32]
-     parameters:
-       scale:
-         - filter: o_proj
-           value: 0.25
-         - filter: down_proj
-           value: 0.25
-         - value: .50
- - sources:
-   - model: j:/Grand-Story-V1-F32-Ultra-Quality-16_5B
-     layer_range: [70,71]
-     parameters:
-       scale:
-         - filter: o_proj
-           value: 1
-         - filter: down_proj
-           value: 1
-         - value: 1
- merge_method: passthrough
- dtype: float16
- ```
 
 
 
 
 
+ ---
+ base_model: []
+ library_name: transformers
+ tags:
+ - mergekit
+ - merge
+
+ ---
+ <h2>L3-SMB-Grand-Story-WTFrack-Instruct-18.05B</h2>
+
+ For GGUFs and the full model card, please go to:
+
+ [https://huggingface.co/DavidAU/L3-SMB-Grand-Story-WTFrack-Instruct-18.05B-GGUF](https://huggingface.co/DavidAU/L3-SMB-Grand-Story-WTFrack-Instruct-18.05B-GGUF)
+
+ This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
+
+ ## Merge Details
+ ### Merge Method
+
+ This model was merged using the passthrough merge method.
+
+ ### Models Merged
+
+ The following models were included in the merge:
+ * j:/Grand-Story-V1-F32-Ultra-Quality-16_5B
+ * G:/7B/Meta-Llama-3-8B-Instruct
+
+ ### Configuration
+
+ The following YAML configuration was used to produce this model:
+
+ ```yaml
+ slices:
+ - sources:
+   - model: G:/7B/Meta-Llama-3-8B-Instruct
+     layer_range: [0, 12]
+ - sources:
+   - model: j:/Grand-Story-V1-F32-Ultra-Quality-16_5B
+     layer_range: [6, 70]
+ - sources:
+   - model: G:/7B/Meta-Llama-3-8B-Instruct
+     layer_range: [31,32]
+     parameters:
+       scale:
+         - filter: o_proj
+           value: 0.25
+         - filter: down_proj
+           value: 0.25
+         - value: .50
+ - sources:
+   - model: j:/Grand-Story-V1-F32-Ultra-Quality-16_5B
+     layer_range: [70,71]
+     parameters:
+       scale:
+         - filter: o_proj
+           value: 1
+         - filter: down_proj
+           value: 1
+         - value: 1
+ merge_method: passthrough
+ dtype: float16
+ ```
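
Since a passthrough merge simply stacks the listed layer slices, the depth of the result can be sanity-checked by summing the slice widths from the config above. A minimal sketch (slice ranges copied from the YAML; end indices are treated as exclusive, as in mergekit `layer_range` entries):

```python
# Layer slices from the merge config above: (model, start, end).
# End index is exclusive, so each slice contributes end - start layers.
slices = [
    ("G:/7B/Meta-Llama-3-8B-Instruct", 0, 12),
    ("j:/Grand-Story-V1-F32-Ultra-Quality-16_5B", 6, 70),
    ("G:/7B/Meta-Llama-3-8B-Instruct", 31, 32),
    ("j:/Grand-Story-V1-F32-Ultra-Quality-16_5B", 70, 71),
]

# Passthrough concatenates the slices in order, so total depth is the
# sum of the slice widths.
total_layers = sum(end - start for _, start, end in slices)
print(total_layers)  # 78 transformer layers in the merged stack
```

Note that layers 6-11 of the Grand-Story model overlap with layers copied from the Instruct model, which is why the merged stack (78 layers) is deeper than either source model on its own.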