InferenceIllusionist committed 8c80c75 (verified) · 1 Parent(s): 412ec49

Update README.md

Files changed (1): README.md (+6 -4)
README.md CHANGED
@@ -9,16 +9,18 @@ tags:

  # maid-yuzu-v8-alter-iMat-GGUF

- <b>Highly requested model.</b> Quantized from fp16 with love.
+ <b>Update:</b> Legacy quants calculated with imatrix showed lower average divergence than expected when compared to their non-imat variants. Uploading those now as well.

- * <s>1st batch (IQ3_S, IQ3_XS) use a imatrix.dat file calculated from Q8 quant using an input file from [this discussion](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)</s> These have been removed in favor of later method. Please see tables below.
+ <b>Highly requested model.</b> Quantized from fp16 with love.
+ * <s>1st batch (IQ3_S, IQ3_XS) use an imatrix.dat file calculated from the Q8 quant.</s> These have been removed in favor of a newer method. Please see tables below.
  * Later files made using .imatrix file from [this](https://huggingface.co/datasets/ikawrakow/imatrix-from-wiki-train) repo (special thanks to [ikawrakow](https://huggingface.co/ikawrakow) again)

- <h3>Original IQ3_XS KL Divergence (calculated vs Q8)</h3>
+ <h3>Original IQ3_XS KL-Divergence (calculated vs Q8)</h3>
  <img src="https://huggingface.co/InferenceIllusionist/maid-yuzu-v8-alter-iMat-GGUF/resolve/main/IQ3_XS_OLD_KL.JPG?download=true" width="300"/>

- <h3>Updated IQ3_XS KL Divergence (calculated vs Q8)</h3>
+ <h3>Updated IQ3_XS KL-Divergence (calculated vs Q8)</h3>
  <img src="https://huggingface.co/InferenceIllusionist/maid-yuzu-v8-alter-iMat-GGUF/resolve/main/IQ3_XS_NEW_KL.JPG?download=true" width="300"/>
+ Lower numbers are better.

  For a brief rundown of iMatrix quant performance please see this [PR](https://github.com/ggerganov/llama.cpp/pull/5747)
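For anyone unfamiliar with the metric in the plots referenced above: KL divergence here measures how far a quant's next-token probability distribution drifts from the Q8 baseline over the same evaluation text, which is why lower numbers are better. A minimal numpy sketch of the calculation (toy random logits and hypothetical array shapes, not the actual llama.cpp measurement pipeline):

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    """Convert rows of logits into probability distributions."""
    z = logits - logits.max(axis=-1, keepdims=True)
    p = np.exp(z)
    return p / p.sum(axis=-1, keepdims=True)

def mean_kl_divergence(base_logits: np.ndarray, quant_logits: np.ndarray) -> float:
    """Mean KL(base || quant) over token positions, in nats.

    base_logits / quant_logits: (n_tokens, vocab_size) arrays holding the
    per-position logits of the baseline (e.g. Q8) and the quant under test
    for the same evaluation text.
    """
    p = softmax(base_logits)   # reference distribution (Q8)
    q = softmax(quant_logits)  # distribution of the quant being evaluated
    eps = 1e-12                # guard against log(0)
    kl = np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)
    return float(kl.mean())

# Toy example: a small random perturbation of the baseline logits stands in
# for a quant; real numbers come from running both models on the same text.
rng = np.random.default_rng(0)
base = rng.normal(size=(8, 32000))
quant = base + rng.normal(scale=0.05, size=base.shape)
print(f"mean KL divergence: {mean_kl_divergence(base, quant):.5f}")
```

The plots aggregate a per-token statistic like this across the evaluation set, so a quant that tracks the Q8 distribution closely scores a smaller value.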