---
tags:
- merge
- gguf
- not-for-all-audiences
- storywriting
- text adventure
- iMat
---
# maid-yuzu-v8-alter-iMat-GGUF
Update: Legacy quants calculated with an imatrix showed lower average divergence than expected compared to their non-imatrix variants. Uploading those now as well.
Highly requested model. Quantized from fp16 with love.
* 1st batch (IQ3_S, IQ3_XS) used an imatrix.dat file calculated from the Q8 quant. These have been removed in favor of a newer method. Please see the tables below.
* Later files were made using the .imatrix file from [this](https://huggingface.co/datasets/ikawrakow/imatrix-from-wiki-train) repo (special thanks to [ikawrakow](https://huggingface.co/ikawrakow) again).