Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


SOLAR-math-2x10.7b-v0.2 - GGUF
- Model creator: https://huggingface.co/macadeliccc/
- Original model: https://huggingface.co/macadeliccc/SOLAR-math-2x10.7b-v0.2/


| Name | Quant method | Size |
| ---- | ---- | ---- |
| [SOLAR-math-2x10.7b-v0.2.Q2_K.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_SOLAR-math-2x10.7b-v0.2-gguf/blob/main/SOLAR-math-2x10.7b-v0.2.Q2_K.gguf) | Q2_K | 6.58GB |
| [SOLAR-math-2x10.7b-v0.2.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_SOLAR-math-2x10.7b-v0.2-gguf/blob/main/SOLAR-math-2x10.7b-v0.2.IQ3_XS.gguf) | IQ3_XS | 7.34GB |
| [SOLAR-math-2x10.7b-v0.2.IQ3_S.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_SOLAR-math-2x10.7b-v0.2-gguf/blob/main/SOLAR-math-2x10.7b-v0.2.IQ3_S.gguf) | IQ3_S | 7.75GB |
| [SOLAR-math-2x10.7b-v0.2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_SOLAR-math-2x10.7b-v0.2-gguf/blob/main/SOLAR-math-2x10.7b-v0.2.Q3_K_S.gguf) | Q3_K_S | 7.73GB |
| [SOLAR-math-2x10.7b-v0.2.IQ3_M.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_SOLAR-math-2x10.7b-v0.2-gguf/blob/main/SOLAR-math-2x10.7b-v0.2.IQ3_M.gguf) | IQ3_M | 7.94GB |
| [SOLAR-math-2x10.7b-v0.2.Q3_K.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_SOLAR-math-2x10.7b-v0.2-gguf/blob/main/SOLAR-math-2x10.7b-v0.2.Q3_K.gguf) | Q3_K | 8.59GB |
| [SOLAR-math-2x10.7b-v0.2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_SOLAR-math-2x10.7b-v0.2-gguf/blob/main/SOLAR-math-2x10.7b-v0.2.Q3_K_M.gguf) | Q3_K_M | 8.59GB |
| [SOLAR-math-2x10.7b-v0.2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_SOLAR-math-2x10.7b-v0.2-gguf/blob/main/SOLAR-math-2x10.7b-v0.2.Q3_K_L.gguf) | Q3_K_L | 9.32GB |
| [SOLAR-math-2x10.7b-v0.2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_SOLAR-math-2x10.7b-v0.2-gguf/blob/main/SOLAR-math-2x10.7b-v0.2.IQ4_XS.gguf) | IQ4_XS | 9.66GB |
| [SOLAR-math-2x10.7b-v0.2.Q4_0.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_SOLAR-math-2x10.7b-v0.2-gguf/blob/main/SOLAR-math-2x10.7b-v0.2.Q4_0.gguf) | Q4_0 | 10.09GB |
| [SOLAR-math-2x10.7b-v0.2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_SOLAR-math-2x10.7b-v0.2-gguf/blob/main/SOLAR-math-2x10.7b-v0.2.IQ4_NL.gguf) | IQ4_NL | 10.19GB |
| [SOLAR-math-2x10.7b-v0.2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_SOLAR-math-2x10.7b-v0.2-gguf/blob/main/SOLAR-math-2x10.7b-v0.2.Q4_K_S.gguf) | Q4_K_S | 10.17GB |
| [SOLAR-math-2x10.7b-v0.2.Q4_K.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_SOLAR-math-2x10.7b-v0.2-gguf/blob/main/SOLAR-math-2x10.7b-v0.2.Q4_K.gguf) | Q4_K | 10.79GB |
| [SOLAR-math-2x10.7b-v0.2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_SOLAR-math-2x10.7b-v0.2-gguf/blob/main/SOLAR-math-2x10.7b-v0.2.Q4_K_M.gguf) | Q4_K_M | 10.79GB |
| [SOLAR-math-2x10.7b-v0.2.Q4_1.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_SOLAR-math-2x10.7b-v0.2-gguf/blob/main/SOLAR-math-2x10.7b-v0.2.Q4_1.gguf) | Q4_1 | 11.2GB |
| [SOLAR-math-2x10.7b-v0.2.Q5_0.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_SOLAR-math-2x10.7b-v0.2-gguf/blob/main/SOLAR-math-2x10.7b-v0.2.Q5_0.gguf) | Q5_0 | 12.3GB |
| [SOLAR-math-2x10.7b-v0.2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_SOLAR-math-2x10.7b-v0.2-gguf/blob/main/SOLAR-math-2x10.7b-v0.2.Q5_K_S.gguf) | Q5_K_S | 12.3GB |
| [SOLAR-math-2x10.7b-v0.2.Q5_K.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_SOLAR-math-2x10.7b-v0.2-gguf/blob/main/SOLAR-math-2x10.7b-v0.2.Q5_K.gguf) | Q5_K | 12.67GB |
| [SOLAR-math-2x10.7b-v0.2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_SOLAR-math-2x10.7b-v0.2-gguf/blob/main/SOLAR-math-2x10.7b-v0.2.Q5_K_M.gguf) | Q5_K_M | 12.67GB |
| [SOLAR-math-2x10.7b-v0.2.Q5_1.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_SOLAR-math-2x10.7b-v0.2-gguf/blob/main/SOLAR-math-2x10.7b-v0.2.Q5_1.gguf) | Q5_1 | 13.41GB |
| [SOLAR-math-2x10.7b-v0.2.Q6_K.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_SOLAR-math-2x10.7b-v0.2-gguf/blob/main/SOLAR-math-2x10.7b-v0.2.Q6_K.gguf) | Q6_K | 14.66GB |
| [SOLAR-math-2x10.7b-v0.2.Q8_0.gguf](https://huggingface.co/RichardErkhov/macadeliccc_-_SOLAR-math-2x10.7b-v0.2-gguf/blob/main/SOLAR-math-2x10.7b-v0.2.Q8_0.gguf) | Q8_0 | 18.99GB |
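
Each file in the table is a standalone GGUF, so it can be run with any llama.cpp-based runtime. Below is a minimal sketch of downloading a single quant and generating with `llama-cpp-python`; the repo id matches the links above, while the chosen quant, context length, GPU offload, and prompt format are illustrative assumptions.

```python
# Minimal sketch: download one quant from this repo and run it locally.
# Assumes `pip install huggingface-hub llama-cpp-python`.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the Q4_K_M file (~10.8GB) into the local Hugging Face cache
gguf_path = hf_hub_download(
    repo_id="RichardErkhov/macadeliccc_-_SOLAR-math-2x10.7b-v0.2-gguf",
    filename="SOLAR-math-2x10.7b-v0.2.Q4_K_M.gguf",
)

# n_gpu_layers=-1 offloads every layer to the GPU when one is available;
# set it to 0 for CPU-only inference.
llm = Llama(model_path=gguf_path, n_ctx=4096, n_gpu_layers=-1)

# SOLAR-style instruct prompt (assumed format; adjust to your chat template)
prompt = "### User:\nSolve 12 * (3 + 4) step by step.\n\n### Assistant:\n"
output = llm(prompt, max_tokens=256, stop=["### User:"])
print(output["choices"][0]["text"])
```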


Original model description:
---
license: cc-by-nc-4.0
model-index:
- name: SOLAR-math-2x10.7b-v0.2
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 70.9
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/SOLAR-math-2x10.7b-v0.2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 88.29
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/SOLAR-math-2x10.7b-v0.2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 66.25
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/SOLAR-math-2x10.7b-v0.2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 71.68
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/SOLAR-math-2x10.7b-v0.2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 83.5
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/SOLAR-math-2x10.7b-v0.2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 64.9
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=macadeliccc/SOLAR-math-2x10.7b-v0.2
      name: Open LLM Leaderboard
---
# 🌞🚀 SOLAR-math-10.7x2-v0.2_19B

Merge of two SOLAR-10.7B instruct finetunes.

![solar](solar.png)

This model performs in line with GPT-3.5 and Gemini Pro, exceeding all scores of Mixtral-8x7b.

Here is a brief overview of the evaluation results, provided so the values are available for comparison; this table is not a complete analysis.
![solar-math-table](solar-math-table.png)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6455cc8d679315e4ef16fbec/4JJTfJBZrSe_mX88ybutb.png)

## 🌅 Code Example

The example is also available in [Colab](https://colab.research.google.com/drive/10FWCLODU_EFclVOFOlxNYMmSiLilGMBZ?usp=sharing).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

def generate_response(prompt):
    """
    Generate a response from the model based on the input prompt.

    Args:
        prompt (str): Prompt for the model.

    Returns:
        str: The generated response from the model.
    """
    # Tokenize the input prompt
    inputs = tokenizer(prompt, return_tensors="pt")

    # Generate output tokens
    outputs = model.generate(**inputs, max_new_tokens=512, eos_token_id=tokenizer.eos_token_id, pad_token_id=tokenizer.pad_token_id)

    # Decode the generated tokens to a string
    response = tokenizer.decode(outputs[0], skip_special_tokens=True)

    return response


# Load the model and tokenizer
model_id = "macadeliccc/SOLAR-math-2x10.7B-v0.2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True)

prompt = "Explain the proof of Fermat's Last Theorem and its implications in number theory."

print("Response:")
print(generate_response(prompt), "\n")
```
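
In recent `transformers` releases, passing `load_in_4bit=True` directly to `from_pretrained` is deprecated in favor of an explicit quantization config. Here is a hedged equivalent of the load above using `BitsAndBytesConfig`; the compute dtype and `device_map` choice are assumptions, not part of the original example.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "macadeliccc/SOLAR-math-2x10.7B-v0.2"

# Explicit 4-bit quantization config (requires the bitsandbytes package)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # spread layers across available devices
)
```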
**Example output:**

Explain the proof of Fermat's Last Theorem and its implications in number theory.

Fermat's Last Theorem, also known as FLT, is a famous mathematical conjecture that states "no three positive integers a, b, and c can satisfy the equation a^n + b^n = c^n for any integer value of n greater than 2." This theorem was first proposed by Pierre de Fermat in the 17th century, but its proof was only discovered in the late 20th century by Andrew Wiles.

The proof of Fermat's Last Theorem, published by Andrew Wiles in 1993 and 1994, is complex and involves several advanced mathematical concepts. The main idea behind the proof is the use of modular elliptic curves, which are algebraic curves defined by polynomial equations. Wiles introduced a new concept called the Taniyama-Shimura conjecture, which states that there is a one-to-one correspondence between certain elliptic curves over the rational numbers and certain cusp forms.

Wiles' proof of FLT is based on the assumption that the Taniyama-Shimura conjecture is true. He showed that if the Taniyama-Shimura conjecture is true, then Fermat's Last Theorem must also be true. This proof strategy is known as a "proof by contradiction." Wiles demonstrated that if FLT were false, then there would exist a counterexample to the Taniyama-Shimura conjecture. However, since the Taniyama-Shimura conjecture is believed to be true, this leads to a contradiction. Therefore, by the principle of contradiction, Fermat's Last Theorem must be true.

The implications of Fermat's Last Theorem in number theory are significant. FLT is a fundamental result in the study of integers, and its proof has led to a better understanding of various mathematical concepts. The proof of FLT has also contributed to the development of other areas of mathematics, such as algebraic geometry, representation theory, and number theory itself.

Moreover, the theorem has helped to strengthen the foundations of number theory by providing a resolution to a long-standing open problem. It has also encouraged mathematicians to explore new directions in research, as the proof of FLT has opened up new avenues for investigation in related fields.

## 🏆 Evaluations

### ARC
| Task |Version| Metric | Value | |Stderr|
|-------------|------:|--------------------|-------------|---|------|
|arc_challenge| 1|acc,none | 0.68| | |
| | |acc_stderr,none | 0.01| | |
| | |acc_norm,none | 0.72| | |
| | |acc_norm_stderr,none| 0.01| | |
| | |alias |arc_challenge| | |

Average: 71.76%

### HellaSwag
| Task |Version| Metric | Value | |Stderr|
|---------|------:|--------------------|---------|---|------|
|hellaswag| 1|acc,none | 0.71| | |
| | |acc_stderr,none | 0| | |
| | |acc_norm,none | 0.88| | |
| | |acc_norm_stderr,none| 0| | |
| | |alias |hellaswag| | |

Average: 88.01%
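
The tables above are in lm-evaluation-harness output format (metric names such as `acc_norm,none`). Below is a sketch of reproducing the 25-shot ARC-Challenge number with that harness, assuming lm-eval v0.4+ is installed; `simple_evaluate` and its arguments come from the harness's Python API, which can change between releases, and the batch size is an arbitrary choice.

```python
# Sketch: 25-shot ARC-Challenge evaluation with lm-evaluation-harness.
# Assumes `pip install lm-eval` (v0.4+) and enough GPU memory for the full model.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=macadeliccc/SOLAR-math-2x10.7b-v0.2,dtype=bfloat16",
    tasks=["arc_challenge"],
    num_fewshot=25,
    batch_size=8,
)

# Per-task metrics, e.g. acc_norm for arc_challenge
print(results["results"]["arc_challenge"])
```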

### 📚 Citations

```bibtex
@misc{kim2023solar,
      title={SOLAR 10.7B: Scaling Large Language Models with Simple yet Effective Depth Up-Scaling},
      author={Dahyun Kim and Chanjun Park and Sanghoon Kim and Wonsung Lee and Wonho Song and Yunsu Kim and Hyeonwoo Kim and Yungi Kim and Hyeonju Lee and Jihoo Kim and Changbae Ahn and Seonghoon Yang and Sukyung Lee and Hyunbyung Park and Gyoungjin Gim and Mikyoung Cha and Hwalsuk Lee and Sunghun Kim},
      year={2023},
      eprint={2312.15166},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_macadeliccc__SOLAR-math-2x10.7b-v0.2)

| Metric |Value|
|---------------------------------|----:|
|Avg. |74.25|
|AI2 Reasoning Challenge (25-Shot)|70.90|
|HellaSwag (10-Shot) |88.29|
|MMLU (5-Shot) |66.25|
|TruthfulQA (0-shot) |71.68|
|Winogrande (5-shot) |83.50|
|GSM8k (5-shot) |64.90|
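
The leaderboard also publishes per-sample outputs in the details dataset linked above. Below is a sketch of loading one task's details with `datasets`; the config name follows the leaderboard's usual `harness_<task>_<num_fewshot>` naming and the `latest` split is an assumption about that dataset's layout.

```python
# Sketch: inspect per-sample leaderboard results for this model.
# Assumes `pip install datasets`; the config name and split follow the
# Open LLM Leaderboard's usual conventions and may differ for this dataset.
from datasets import load_dataset

details = load_dataset(
    "open-llm-leaderboard/details_macadeliccc__SOLAR-math-2x10.7b-v0.2",
    "harness_gsm8k_5",
    split="latest",
)
print(details[0])
```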