All quants were made using the imatrix option and Bartowski's [calibration file].
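
Files like these are typically produced with llama.cpp's `llama-imatrix` and `llama-quantize` tools. The sketch below shows that general workflow; the file names and the quant list are illustrative placeholders, not the exact commands used for this repo.

```python
import subprocess

# Placeholder file names (assumptions, not this repo's exact inputs):
# an F16 GGUF export of the model and the calibration text.
BASE = "llama-3.2-1b-instruct-f16.gguf"
CALIBRATION = "calibration_data.txt"

# 1) Compute an importance matrix over the calibration text.
subprocess.run(
    ["llama-imatrix", "-m", BASE, "-f", CALIBRATION, "-o", "imatrix.dat"],
    check=True,
)

# 2) Quantize with the imatrix, one output file per target quant type.
for quant in ["IQ2_M", "IQ3_M", "Q4_K_M", "Q5_K_M", "Q6_K", "Q8_0"]:
    output = f"llama-3.2-1b-instruct-{quant.lower()}.gguf"
    subprocess.run(
        ["llama-quantize", "--imatrix", "imatrix.dat", BASE, output, quant],
        check=True,
    )
```
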
<hr>

# Perplexity table (the lower the better)
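
Each PPL value, along with the "PPL error rate" column (the ± uncertainty printed alongside it), is the kind of figure reported by llama.cpp's `llama-perplexity` tool. A minimal sketch of such a run, assuming placeholder file names; the actual test corpus used for this table is not specified here:

```python
import subprocess

# Placeholder paths; at the end of the run, llama-perplexity prints a line like
# "Final estimate: PPL = 13.7126 +/- 0.21966".
subprocess.run(
    ["llama-perplexity", "-m", "llama-3.2-1b-instruct-q8_0.gguf", "-f", "wiki.test.raw"],
    check=True,
)
```
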
| Quant   | Size (MB) | PPL      | Size (%) | Accuracy (%) | PPL error rate |
| ------- | --------- | -------- | -------- | ------------ | -------------- |
| IQ1_S   | 376       | 771.8958 | 15.9     | 1.78         | 14.99148       |
| IQ1_M   | 395       | 162.0038 | 16.7     | 8.46         | 2.86547        |
| IQ2_XXS | 427       | 46.0426  | 18.05    | 29.78        | 0.77657        |
| IQ2_XS  | 454       | 30.7626  | 19.2     | 44.58        | 0.50736        |
| IQ2_S   | 467       | 25.4944  | 19.75    | 53.79        | 0.4194         |
| IQ2_M   | 492       | 21.1112  | 20.8     | 64.95        | 0.34245        |
| Q2_K_S  | 529       | 24.5117  | 22.37    | 55.94        | 0.40072        |
| IQ3_XXS | 537       | 17.2479  | 22.71    | 79.5         | 0.27837       |
| Q2_K    | 554       | 26.1688  | 23.42    | 52.4         | 0.44789        |
| IQ3_XS  | 593       | 16.0104  | 25.07    | 85.65        | 0.25685        |
| Q3_K_S  | 612       | 19.1038  | 25.88    | 71.78        | 0.3166         |
| IQ3_S   | 615       | 15.6453  | 26       | 87.65        | 0.24806        |
| IQ3_M   | 627       | 15.4512  | 26.51    | 88.75        | 0.24445        |
| Q3_K_M  | 659       | 14.9     | 27.86    | 92.03        | 0.23958        |
| Q3_K_L  | 699       | 14.7286  | 29.56    | 93.1         | 0.23679        |
| IQ4_XS  | 709       | 14.1783  | 29.98    | 96.72        | 0.22704        |
| IQ4_NL  | 738       | 14.1777  | 31.21    | 96.72        | 0.22727        |
| Q4_0    | 738       | 14.4071  | 31.21    | 95.18        | 0.23021        |
| Q4_K_S  | 740       | 14.0726  | 31.29    | 97.44        | 0.22511        |
| Q4_K_M  | 771       | 14.0496  | 32.6     | 97.6         | 0.22523        |
| Q4_1    | 794       | 14.1039  | 33.57    | 97.23        | 0.22552        |
| Q5_K_S  | 852       | 13.8515  | 36.03    | 99           | 0.22187        |
| Q5_0    | 854       | 13.8766  | 36.11    | 98.82        | 0.2221         |
| Q5_K_M  | 870       | 13.8295  | 36.79    | 99.15        | 0.22162        |
| Q5_1    | 910       | 13.7981  | 38.48    | 99.38        | 0.22042        |
| Q6_K    | 975       | 13.7604  | 41.23    | 99.65        | 0.22054        |
| Q8_0    | 1260      | 13.7166  | 53.28    | 99.97        | 0.21964        |
| F16     | 2365      | 13.7126  | 100      | 100          | 0.21966        |
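
The two relative columns appear to be derived from the F16 baseline row: Size (%) is the file size as a fraction of the F16 size, and Accuracy (%) is the F16 perplexity divided by the quant's perplexity. A quick sanity check in Python:

```python
# F16 baseline from the last row of the table.
F16_SIZE_MB, F16_PPL = 2365, 13.7126

def relative_columns(size_mb: float, ppl: float) -> tuple[float, float]:
    size_pct = 100 * size_mb / F16_SIZE_MB  # Size (%): smaller is better
    accuracy_pct = 100 * F16_PPL / ppl      # Accuracy (%): closer to 100 is better
    return round(size_pct, 2), round(accuracy_pct, 2)

# Q8_0 row: matches the table's (53.28, 99.97).
print(relative_columns(1260, 13.7166))
```
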
<hr>

## Model Information
The Meta Llama 3.2 collection of multilingual large language models (LLMs) comprises pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text-only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open-source and closed chat models on common industry benchmarks.