Can anyone quantize this model?

#7
by todsj88 - opened

I really love this model, but the quantization I made privately isn't fast enough.
Could anyone quantize this model with llama.cpp for better compression and speed?
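For context, the usual llama.cpp route is to convert the HF checkpoint to a full-precision GGUF and then requantize it to a smaller type. A minimal sketch of that workflow (the local paths and the Q5_K_M target are just placeholders, and it assumes llama.cpp's convert_hf_to_gguf.py script and the llama-quantize binary are available locally):

```python
# Rough sketch of the standard llama.cpp GGUF workflow:
# 1) convert the HF checkpoint to an f16 GGUF, 2) requantize it.
import subprocess

hf_dir = "Llama-3-Motif-102B-Instruct"                    # local HF snapshot (placeholder path)
f16_gguf = "Llama-3-Motif-102B-Instruct-f16.gguf"         # intermediate full-precision GGUF
quant_gguf = "Llama-3-Motif-102B-Instruct-Q5_K_M.gguf"    # quantized output (placeholder name)

# convert_hf_to_gguf.py ships with the llama.cpp repository
subprocess.run(
    ["python", "convert_hf_to_gguf.py", hf_dir, "--outfile", f16_gguf, "--outtype", "f16"],
    check=True,
)

# llama-quantize is built alongside the other llama.cpp binaries
subprocess.run(["llama-quantize", f16_gguf, quant_gguf, "Q5_K_M"], check=True)
```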

I've queued this and the base model. You can see how it goes at http://hf.tst.eu/status.html

The quants should slowly appear at https://hf.tst.eu/model#Llama-3-Motif-102B-Instruct-GGUF

Thanks mradermacher!
I didn't expect to see you here. I've used lots of the models you've uploaded.

I didn't expect it either. I have no idea why your post showed up in my inbox, but it worked out after all :)

Thank you so much.
I already downloaded the Q6_K and Q5_K_M models. I can't wait until tonight to test them. Thanks again!!!
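For a quick smoke test of a downloaded quant, something like llama-cpp-python works; a minimal sketch (the model path, context size, and GPU layer count are placeholder assumptions, not the exact filenames from the repo):

```python
# Quick smoke test of a downloaded GGUF quant with llama-cpp-python.
# Install with: pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama(
    model_path="Llama-3-Motif-102B-Instruct.Q5_K_M.gguf",  # placeholder path to the local quant
    n_ctx=4096,       # context window for the test
    n_gpu_layers=-1,  # offload all layers to the GPU if one is available
)

out = llm("Briefly introduce yourself.", max_tokens=64)
print(out["choices"][0]["text"])
```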

You're welcome - you can usually summon my attention to a model by mentioning @mradermacher in the post.

todsj88 changed discussion status to closed
todsj88 changed discussion status to open
