tokenizer.model
Hi, I'm trying to quantize this model with llama.cpp, but it complains that tokenizer.model is missing, so I took the file from https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2. Would this negatively affect anything?
No difference, it should be ok.
Just quantized it to Q5_K_M, seems to work great. Thanks for making this model available!
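For anyone else following along, the flow above can be sketched roughly like this (assuming a local llama.cpp checkout built from source; the directory names here are illustrative, and the convert script / quantize binary names have changed across llama.cpp versions, so check your checkout):

```shell
# Copy the SentencePiece tokenizer from the base Mistral repo into the
# local model directory (this is the file llama.cpp complained about).
# tokenizer.model is taken from mistralai/Mistral-7B-Instruct-v0.2,
# which shares the same tokenizer as this fine-tune.
cp /path/to/Mistral-7B-Instruct-v0.2/tokenizer.model ./malaysian-mistral-7b/

# Convert the HF checkpoint to a GGUF file (F16 by default).
python convert.py ./malaysian-mistral-7b/

# Quantize the F16 GGUF down to Q5_K_M.
./quantize ./malaysian-mistral-7b/ggml-model-f16.gguf \
           ./malaysian-mistral-7b/ggml-model-Q5_K_M.gguf Q5_K_M
```

Copying the tokenizer is only safe because the fine-tune did not change the base model's vocabulary; if a fine-tune adds tokens, the borrowed tokenizer.model would be wrong.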
Wylo, would it be possible to upload your quantized model? I don't have enough free space to test this myself. Thanks in advance, and thanks to huseinzol and the team!
@prsyahmi I uploaded it here: https://huggingface.co/Wylo/malaysian-mistral-7b-32k-instructions-v3.5-GGUF. I haven't tested it, so hopefully it works.
Released the v4 version: https://huggingface.co/mesolitica/malaysian-mistral-7b-32k-instructions-v4