---
license: cc-by-nc-4.0
library_name: transformers
tags:
- mergekit
- merge
- alpaca
- mistral
- not-for-all-audiences
- nsfw
base_model: []
model-index:
- name: Ice0.40-20.11-RP, IceDrunkenCherryRP-7b
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 47.63
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=icefog72/Ice0.40-20.11-RP
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 31.51
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=icefog72/Ice0.40-20.11-RP
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 6.27
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=icefog72/Ice0.40-20.11-RP
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 7.61
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=icefog72/Ice0.40-20.11-RP
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 14.27
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=icefog72/Ice0.40-20.11-RP
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 23.32
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=icefog72/Ice0.40-20.11-RP
      name: Open LLM Leaderboard
---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/IceDrunkenCherryRP-7b-GGUF
This is a quantized version of [icefog72/IceDrunkenCherryRP-7b](https://huggingface.co/icefog72/IceDrunkenCherryRP-7b), created using llama.cpp.

# Original Model Card

# IceDrunkenCherryRP-7b (Ice0.40-20.11-RP)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/63407b719dbfe0d48b2d763b/yZyKgbTduNQ2oVFvs8IRm.png)

> [!IMPORTANT]
> [For ST settings and the rules lorebook, look here.](https://huggingface.co/icefog72/GeneralInfoToStoreNotModel/tree/main/ByModel/IceDrunkenCherryRP)

> [!TIP]
> Get the latest version of the rules, see examples of the model's chat responses, or ask me questions
> **[here](https://discord.gg/2tJcWeMjFQ)**,
> on my new AI-related Discord server, for feedback, questions, and other stuff.

> [!NOTE]
> In general, the Alpaca format will work.

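The Alpaca format is plain text with `### Instruction:` / `### Response:` markers. A minimal, generic sketch of building such a prompt in Python; the preamble below is the stock Alpaca system line, not a setting specific to this model, so adjust it (or use your frontend's Alpaca preset) as needed:

```python
# Build a minimal Alpaca-format prompt (generic template; tune the
# system line and stop strings to your frontend's settings).
def alpaca_prompt(instruction: str, user_input: str = "") -> str:
    parts = [
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.",
        f"### Instruction:\n{instruction}",
    ]
    if user_input:
        # The optional "### Input:" block carries extra context.
        parts.append(f"### Input:\n{user_input}")
    parts.append("### Response:\n")
    return "\n\n".join(parts)

print(alpaca_prompt("Continue the roleplay as the character described."))
```

Frontends like SillyTavern ship an Alpaca instruct preset that produces essentially this layout for you.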
> [ko-fi, to buy sweets for my cat :3](https://ko-fi.com/icefog72)

It should handle a 16-25k context window, possibly up to 32k.

## Exl2 Quants

> [!WARNING]
>- [4.2bpw-exl2](https://huggingface.co/icefog72/IceDrunkenCherryRP-7b-4.2bpw-exl2)
>- [6.5bpw-exl2](https://huggingface.co/icefog72/IceDrunkenCherryRP-7b-6.5bpw-exl2)
>- [8bpw-exl2](https://huggingface.co/icefog72/IceDrunkenCherryRP-7b-8bpw-exl2)

## Thanks to mradermacher for the GGUF quants

> [!WARNING]
>- [GGUF](https://huggingface.co/mradermacher/IceDrunkenCherryRP-7b-GGUF)
>- [i1-GGUF](https://huggingface.co/mradermacher/IceDrunkenCherryRP-7b-i1-GGUF)

## Download

I recommend using the `huggingface-hub` Python library:

> [!TIP]
> ```shell
> pip3 install huggingface-hub
> ```

To download the `main` branch to a folder called `IceDrunkenCherryRP-7b`:

> [!TIP]
> ```shell
> mkdir IceDrunkenCherryRP-7b
> huggingface-cli download icefog72/IceDrunkenCherryRP-7b --local-dir IceDrunkenCherryRP-7b --local-dir-use-symlinks False
> ```

<details>
<summary>More advanced huggingface-cli download usage</summary>

If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (the default location on Linux is `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows interrupted downloads to be resumed, and lets you quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason I don't list it as the default option, is that the files are then hidden away in a cache folder, making it harder to see where your disk space is being used and to clear it up if/when you want to remove a downloaded model.

The cache location can be changed with the `HF_HOME` environment variable and/or the `--cache-dir` parameter to `huggingface-cli`.

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).

To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:

```shell
pip3 install hf_transfer
```

And set the environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
mkdir FOLDERNAME
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download MODEL --local-dir FOLDERNAME --local-dir-use-symlinks False
```

Windows Command Line users: you can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>

### Merge Method

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
This model was merged using the SLERP merge method.

### Models Merged

The following models were included in the merge:
* icefog72/Ice0.29-06.11-RP
* icefog72/Ice0.37-18.11-RP

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: icefog72/Ice0.37-18.11-RP
        layer_range: [0, 32]
      - model: icefog72/Ice0.29-06.11-RP
        layer_range: [0, 32]

merge_method: slerp
base_model: icefog72/Ice0.37-18.11-RP
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5 # fallback for rest of tensors
dtype: bfloat16
```
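The `value` lists above are gradients: mergekit spreads the anchor values over the 32 layers of the slice, so each layer gets its own SLERP interpolation factor `t` (with `t=0` keeping the base model's weights). A rough sketch of that expansion, assuming piecewise-linear interpolation between evenly spaced anchors, which is my reading of mergekit's gradient behavior rather than its actual code:

```python
# Sketch: expand a mergekit-style "value" gradient into per-layer t values.
# Assumption: anchors sit at evenly spaced positions over the layer range
# and intermediate layers are interpolated linearly between them.
def gradient(anchors, n_layers):
    if n_layers == 1:
        return [float(anchors[0])]
    out = []
    for i in range(n_layers):
        # Fractional position of this layer along the anchor list.
        pos = i * (len(anchors) - 1) / (n_layers - 1)
        lo = int(pos)
        hi = min(lo + 1, len(anchors) - 1)
        frac = pos - lo
        out.append(anchors[lo] * (1 - frac) + anchors[hi] * frac)
    return out

# Per-layer SLERP factors for the two filters in the config above:
self_attn_t = gradient([0, 0.5, 0.3, 0.7, 1], 32)
mlp_t = gradient([1, 0.5, 0.7, 0.3, 0], 32)
```

Note how the two gradients are mirror images: early layers take their attention weights mostly from the base model (`Ice0.37-18.11-RP`) and their MLP weights mostly from `Ice0.29-06.11-RP`, with the balance reversing toward the final layers.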

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/icefog72__Ice0.40-20.11-RP-details).

| Metric              | Value |
|---------------------|------:|
| Avg.                | 21.77 |
| IFEval (0-Shot)     | 47.63 |
| BBH (3-Shot)        | 31.51 |
| MATH Lvl 5 (4-Shot) |  6.27 |
| GPQA (0-shot)       |  7.61 |
| MuSR (0-shot)       | 14.27 |
| MMLU-PRO (5-shot)   | 23.32 |
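The `Avg.` row is the plain arithmetic mean of the six benchmark scores, which is easy to verify:

```python
# Recompute the leaderboard average from the six benchmark scores above.
scores = {
    "IFEval (0-Shot)": 47.63,
    "BBH (3-Shot)": 31.51,
    "MATH Lvl 5 (4-Shot)": 6.27,
    "GPQA (0-shot)": 7.61,
    "MuSR (0-shot)": 14.27,
    "MMLU-PRO (5-shot)": 23.32,
}
avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # 21.77, matching the Avg. row in the table
```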