Please quantize base model too

#3
by Delta36652 - opened

DeepSeek-V3 base seems particularly interesting to try, as no provider serves it.

I currently have the server busy computing the imatrix for the lower-bit quants, but I'll get to it if it's not available (bartowski et al.) by then.

@bullerwins
IQ2_M is the most interesting one. (There isn't any reason to go below IQ2_M.) Ironically, Q5_K_M and Q4_K_M also benefit from imatrix, but you statically quanted them first. If you had calculated the imatrix first, the perplexity of the near-perfect Q5_K_M would have been better. Oh well.


IQ2_M would be really interesting, yeah; it's the one most people would be able to run, and it provides the best bang-for-the-buck performance. The imatrix takes a long time to compute, so I wanted to make the static versions first. I'll reupload them once I have the importance matrix.
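For reference, the usual llama.cpp workflow for this is to compute the importance matrix on an F16 (or similarly high-precision) GGUF against a calibration text, then pass it to the quantizer. This is just a sketch; the model filenames and the calibration file are placeholders, not the actual files used here:

```shell
# Compute an importance matrix from a calibration text file
# (model and calibration filenames below are hypothetical)
./llama-imatrix -m DeepSeek-V3-F16.gguf -f calibration.txt -o imatrix.dat

# Requantize with the imatrix applied; last argument is the target quant type
./llama-quantize --imatrix imatrix.dat DeepSeek-V3-F16.gguf DeepSeek-V3-IQ2_M.gguf IQ2_M
```

The IQ-series quants require an imatrix to give usable quality, whereas Q4_K_M/Q5_K_M can be produced statically and merely improve with one, which is why the static uploads could go first.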

@whatever1983 2-bit may be too dumb.

seconding this, please do base model!
