Can anyone quantize this model?
I really love this model, but my private quantization isn't fast enough.
Can anyone quantize this model with llama.cpp for better compression and speed?
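For context, the usual llama.cpp flow is to convert the Hugging Face checkpoint to GGUF and then quantize it. A rough sketch (the local paths, output filenames, and the F16/Q5_K_M choices here are just illustrative):

```shell
# Convert the HF checkpoint directory to a full-precision GGUF file
python convert_hf_to_gguf.py ./Llama-3-Motif-102B-Instruct \
    --outfile motif-102b-f16.gguf --outtype f16

# Quantize it down to a smaller k-quant type, e.g. Q5_K_M
./llama-quantize motif-102b-f16.gguf motif-102b-Q5_K_M.gguf Q5_K_M
```

Other quant types (Q6_K, Q4_K_M, etc.) trade size against quality the same way.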
I've queued this and the base model. You can see how it goes at http://hf.tst.eu/status.html
The quants should slowly appear at https://hf.tst.eu/model#Llama-3-Motif-102B-Instruct-GGUF
Thanks mradermacher!
I didn't expect to see you here. I've used lots of the models you've uploaded.
I didn't expect it either. I have no idea why your post showed up in my inbox, but it worked out after all :)
Thank you so much.
I've already downloaded the Q6_K and Q5_K_M models. I can't wait to test them tonight. Thanks again!!!
You're welcome - you can usually summon my attention to a model by mentioning @mradermacher in the post.