qwp4w3hyb committed (verified) · commit f790d81 · 1 parent: 9659306

Update README.md

Files changed (1): README.md (+3 -0)
@@ -30,6 +30,9 @@ base_model: meta-llama/Meta-Llama-3.1-70B-Instruct
 - quants done with an importance matrix for improved quantization loss
 - Quantized ggufs & imatrix from hf bf16, through bf16. `safetensors bf16 -> gguf bf16 -> quant` for *optimal* quant loss.
 - Wide coverage of different gguf quant types from Q\_8\_0 down to IQ1\_S
+- still WIP
+- experimental custom quant types
+  - `_L` with `--output-tensor-type f16 --token-embedding-type f16`, which supposedly have better accuracy.
 - Imatrix generated with [this](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8) multi-purpose dataset by [bartowski](https://huggingface.co/bartowski).
 ```
 ./imatrix -m $model_name-bf16.gguf -f calibration_datav3.txt -o $model_name.imatrix
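The `safetensors bf16 -> gguf bf16 -> quant` pipeline the README describes maps roughly onto llama.cpp's tooling. A minimal sketch, assuming a llama.cpp checkout and the HF weights already on disk; tool names follow current llama.cpp conventions (`convert_hf_to_gguf.py`, `llama-imatrix`, `llama-quantize`) and the paths and the `IQ4_XS` target are illustrative, not necessarily what the author used:

```shell
# Illustrative sketch of the quant pipeline; paths/filenames are assumptions.
model_name=Meta-Llama-3.1-70B-Instruct

# 1. Convert the HF safetensors checkpoint directly to a bf16 GGUF.
python convert_hf_to_gguf.py ./$model_name --outtype bf16 \
  --outfile $model_name-bf16.gguf

# 2. Generate the importance matrix from the bf16 GGUF
#    (the README's ./imatrix step, using bartowski's calibration data).
./llama-imatrix -m $model_name-bf16.gguf -f calibration_datav3.txt \
  -o $model_name.imatrix

# 3. Quantize with the imatrix. The experimental `_L` variants keep the
#    output and token-embedding tensors at f16 via the flags quoted above.
./llama-quantize --imatrix $model_name.imatrix \
  --output-tensor-type f16 --token-embedding-type f16 \
  $model_name-bf16.gguf $model_name-IQ4_XS_L.gguf IQ4_XS
```

Quantizing straight from bf16 (rather than via an f16 intermediate) avoids one lossy conversion, which is what the "through bf16 for *optimal* quant loss" bullet refers to.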