Update README.md
Performance looks reasonable so far: the GGUF q4 quant, for example, reaches an EQ-Bench score (v2_de) of 65.08 (Parseable: 171.0).

From the [Low-bit Quantized Open LLM Leaderboard](https://huggingface.co/spaces/Intel/low_bit_open_llm_leaderboard):

| Type | Model | Average ⬇️ | ARC-c | ARC-e | Boolq | HellaSwag | Lambada | MMLU | Openbookqa | Piqa | Truthfulqa | Winogrande | #Params (B) | #Size (G) |
|------|-------------------------------------------|------------|-------|-------|-------|-----------|---------|-------|------------|-------|------------|------------|-------------|-----------|
| int4 | Intel/SOLAR-10.7B-Instruct-v1.0-int4-inc | 68.49 | 60.49 | 82.66 | 88.29 | 68.29 | 73.36 | 62.43 | 35.6 | 80.74 | 56.06 | 76.95 | 10.57 | 5.98 |
| int4 | **cstr/Spaetzle-v60-7b-int4-inc** | **68.01** | **62.12** | **85.27** | **87.34** | **66.43** | **70.58** | **61.39** | **37** | **82.26** | **50.18** | **77.51** | **7.04** | **4.16** |
| GGUF | TheBloke/SOLAR-10.7B-Instruct-v1.0-GGUF | 66.6 | 60.41 | 83.38 | 88.29 | 67.73 | 52.42 | 62.04 | 37.2 | 82.32 | 56.3 | 75.93 | 10.73 | 6.07 |
| GGUF | cstr/Spaetzle-v60-7b-Q4_0-GGUF | 66.44 | 61.35 | 85.19 | 87.98 | 66.54 | 52.78 | 62.05 | 40.6 | 81.72 | 47 | 79.16 | 7.24 | 4.11 |
| int4 | Intel/Mistral-7B-Instruct-v0.2-int4-inc | 65.73 | 55.38 | 81.44 | 85.26 | 65.67 | 70.89 | 58.66 | 34.2 | 80.74 | 51.16 | 73.95 | 7.04 | 4.16 |
| int4 | Intel/Phi-3-mini-4k-instruct-int4-inc | 65.09 | 57.08 | 83.33 | 86.18 | 59.45 | 68.14 | 66.62 | 38.6 | 79.33 | 38.68 | 73.48 | 3.66 | 2.28 |
| GGUF | TheBloke/Mistral-7B-Instruct-v0.2-GGUF | 63.52 | 53.5 | 77.9 | 85.44 | 66.9 | 50.11 | 58.45 | 38.8 | 77.58 | 53.12 | 73.4 | 7.24 | 4.11 |
| int4 | Intel/Meta-Llama-3-8B-Instruct-int4-inc | 62.93 | 51.88 | 81.1 | 83.21 | 57.09 | 71.32 | 62.41 | 35.2 | 78.62 | 36.35 | 72.14 | 7.2 | 5.4 |
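
For a quick local test of the Q4_0 GGUF quant listed above, a minimal sketch with llama-cpp-python could look like the following; the filename pattern and generation settings are assumptions rather than values from the model card, so check the repo's file list before running.

```python
# Minimal sketch (assumptions noted in comments): run the Q4_0 GGUF quant with llama-cpp-python.
from llama_cpp import Llama

# Download the quant from the Hub and load it; "*q4_0.gguf" is an assumed
# filename pattern for the files in cstr/Spaetzle-v60-7b-Q4_0-GGUF.
llm = Llama.from_pretrained(
    repo_id="cstr/Spaetzle-v60-7b-Q4_0-GGUF",
    filename="*q4_0.gguf",
    n_ctx=4096,  # context length; adjust to your hardware
)

# EQ-Bench above was run on the German set (v2_de), so a German prompt is a reasonable smoke test.
result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Erkläre kurz, was ein Modell-Merge ist."}],
    max_tokens=256,
)
print(result["choices"][0]["message"]["content"])
```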