---
license: cc-by-nc-4.0
---
This model should be fixed; it was meant to be BF16.

Don't mind this one at the moment, I still need to finetune it for RP, it's just a test.

## Description

This repo contains fp16 files of Mistral-11B-OmniMix.

My goal for this model was only to make it score as high as possible through merging and layer toying, proving that:
- Benchmarks aren't objective
- You should try a model yourself instead of going blindly to the highest-rated one
- Merge/layer toying CAN be used to make better models (maybe?)

## Model used
- [Mistral-7B-OpenOrca](https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca)
- [Mistral-7B-v0.1-Open-Platypus](https://huggingface.co/akjindal53244/Mistral-7B-v0.1-Open-Platypus)
- [CollectiveCognition-v1.1-Mistral-7B](https://huggingface.co/teknium/CollectiveCognition-v1.1-Mistral-7B)
- [zephyr-7b-alpha](https://huggingface.co/HuggingFaceH4/zephyr-7b-alpha)

## Prompt template: Alpaca or default

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:

```

```
USER: <prompt>
ASSISTANT:
```

Or use any prompting system from one of the 4 source models; it should work.
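For quick testing, the Alpaca-style template above can be filled in with plain string formatting. A minimal sketch (the `build_prompt` helper is illustrative, not part of this repo):

```python
# The Alpaca-style template from this model card, as a format string.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{prompt}\n\n"
    "### Response:\n"
)

def build_prompt(instruction: str) -> str:
    """Fill the user's instruction into the template."""
    return ALPACA_TEMPLATE.format(prompt=instruction)

print(build_prompt("Name the capital of France."))
```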

## The secret sauce

Mistral-11B-OpenOrcaPlatypus:
```
slices:
  - sources:
    - model: Open-Orca/Mistral-7B-OpenOrca
      layer_range: [0, 24]
  - sources:
    - model: akjindal53244/Mistral-7B-v0.1-Open-Platypus
      layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```
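The passthrough config above stacks the first 24 layers of one model on top of the last 24 layers of the other, so the merged model has 48 transformer layers instead of Mistral-7B's 32 (hence roughly 11B parameters). Assuming mergekit's half-open `layer_range` convention (`[0, 24]` means layers 0 through 23), the layer count works out as:

```python
# Illustration only (not part of the repo): layer arithmetic for the
# passthrough frankenmerge, assuming half-open layer_range bounds.
slices = [
    ("Open-Orca/Mistral-7B-OpenOrca", range(0, 24)),
    ("akjindal53244/Mistral-7B-v0.1-Open-Platypus", range(8, 32)),
]

total_layers = sum(len(r) for _, r in slices)
print(total_layers)  # 24 + 24 = 48 layers in the merged stack
```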

Mistral-11B-CC-Zephyr:
```
slices:
  - sources:
    - model: "/content/drive/MyDrive/CC-v1.1-7B-bf16"
      layer_range: [0, 24]
  - sources:
    - model: "/content/drive/MyDrive/Zephyr-7B"
      layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```

Mistral-11B-OmniMix:
```
slices:
  - sources:
    - model: Mistral-11B-OpenOrcaPlatypus
      layer_range: [0, 48]
    - model: Mistral-11B-CC-Zephyr
      layer_range: [0, 48]
merge_method: slerp
base_model: Mistral-11B-OpenOrcaPlatypus
parameters:
  t:
    - filter: lm_head
      value: [0.75]
    - filter: embed_tokens
      value: [0.75]
    - filter: self_attn
      value: [0.75, 0.25]
    - filter: mlp
      value: [0.25, 0.75]
    - filter: layernorm
      value: [0.5, 0.5]
    - filter: modelnorm
      value: [0.75]
    - value: 0.5 # fallback for rest of tensors
dtype: bfloat16
```
I use [mergekit](https://github.com/cg123/mergekit) for all the manipulations described here.
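The slerp merge interpolates each pair of matching tensors along the arc between them, with the `t` value (0 = fully the base model, 1 = fully the other) chosen per tensor type by the filters above. A minimal sketch of spherical linear interpolation on flat weight vectors, for intuition only (mergekit's actual tensor implementation differs):

```python
import math

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two weight vectors.

    t = 0 returns v0, t = 1 returns v1; in between, the result moves
    along the arc between the two directions rather than a straight line.
    """
    norm0 = math.sqrt(sum(x * x for x in v0))
    norm1 = math.sqrt(sum(x * x for x in v1))
    dot = sum(a * b for a, b in zip(v0, v1)) / (norm0 * norm1)
    dot = max(-1.0, min(1.0, dot))  # clamp for numerical safety
    theta = math.acos(dot)
    if theta < eps:
        # Nearly parallel vectors: fall back to linear interpolation.
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]

# Halfway between two orthogonal unit vectors lands on the 45° diagonal.
print(slerp(0.5, [1.0, 0.0], [0.0, 1.0]))
```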

## Some scoring I did myself

Coming later.

## Others

Special thanks to Sushi, to [Henky](https://github.com/KoboldAI/KoboldAI-Client) for the machine he gave me for big tasks, and to [Charles Goddard](https://github.com/cg123) for his amazing tool.

If you want to support me, you can do so [here](https://ko-fi.com/undiai).