https://huggingface.co/sophosympatheia/Midnight-Miqu-103B-v1.0
#3 by afran - opened
This model looks promising. I would greatly appreciate the larger quants, up to Q6_K (with or without weighted/imatrix quantization), suitable for running this model on 96 GiB of VRAM.
By the way, thanks for the great work. I cannot express how much I appreciate your efforts!
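For context on why Q6_K is a reasonable target for 96 GiB of VRAM, here is a minimal back-of-envelope sketch (not from the thread; the bits-per-weight figures are approximate averages for llama.cpp quant types, since the exact per-tensor mix varies) estimating GGUF file sizes for a ~103B-parameter model. You still need headroom beyond the weights for the KV cache and compute buffers:

```python
# Rough GGUF size estimate for a ~103B-parameter model.
# Bits-per-weight values below are approximate averages for
# common llama.cpp quant types; treat results as ballpark only.
PARAMS = 103e9  # approximate parameter count of Midnight-Miqu-103B

BITS_PER_WEIGHT = {
    "Q4_K_M": 4.85,
    "Q5_K_M": 5.69,
    "Q6_K":   6.56,
    "Q8_0":   8.50,
}

for quant, bpw in BITS_PER_WEIGHT.items():
    size_gib = PARAMS * bpw / 8 / 2**30  # bits -> bytes -> GiB
    print(f"{quant:7s} ~{size_gib:5.1f} GiB")
```

By this estimate Q6_K lands around 79 GiB of weights, which leaves roughly 17 GiB of headroom on a 96 GiB setup, while Q8_0 at roughly 102 GiB would not fit.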
It's in the queue (may take a week). I'll likely do both imatrix and static ones, although it seems this guy might do some, too: https://huggingface.co/Dracones/Midnight-Miqu-103B-v1.0-GGUF
https://huggingface.co/mradermacher/Midnight-Miqu-103B-v1.0-i1-GGUF is complete now (it has actually been done for a few days already; I'm just cleaning up), and static quants are also available in the non-i1 repo.
mradermacher changed discussion status to closed