Getting error: Model type 'mistral' is not supported.
#3 opened by wehapi
Here's the code:
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("OpenHermes-2.5-Mistral-7B-GGUF", model_file="openhermes-2.5-mistral-7b.Q4_K_M.gguf", model_type="mistral", gpu_layers=50)
print(llm("AI is going to"))
Here's the error:
Model type 'mistral' is not supported.
Traceback (most recent call last):
File "/home/ubuntu/projects/nlprocessing/huggingface/models/message.py", line 13, in <module>
llm = AutoModelForCausalLM.from_pretrained("OpenHermes-2.5-Mistral-7B-GGUF", model_file="openhermes-2.5-mistral-7b.Q4_K_M.gguf", model_type="mistral", gpu_layers=50)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ubuntu/projects/nlprocessing/huggingface/env/lib/python3.11/site-packages/ctransformers/hub.py", line 175, in from_pretrained
llm = LLM(
^^^^
File "/home/ubuntu/projects/nlprocessing/huggingface/env/lib/python3.11/site-packages/ctransformers/llm.py", line 253, in __init__
raise RuntimeError(
RuntimeError: Failed to create LLM 'mistral' from '/home/ubuntu/projects/nlprocessing/huggingface/models/OpenHermes-2.5-Mistral-7B-GGUF/openhermes-2.5-mistral-7b.Q4_K_M.gguf'.
I also tried setting gpu_layers to 0, but it's still not working.
I strongly advise against using ctransformers at this point. It hasn't been updated since April, and Mistral came out in September, so it has no Mistral support and you won't be able to use this model with it. Look at llama-cpp-python instead, which is still actively maintained.
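For reference, here's a rough equivalent with llama-cpp-python (pip install llama-cpp-python). This is just a sketch: the local model path is an assumption, so point it at wherever you downloaded the GGUF file, and the generation parameters are illustrative.

from llama_cpp import Llama

# Hypothetical path; adjust to wherever the GGUF file actually lives.
llm = Llama(
    model_path="./models/openhermes-2.5-mistral-7b.Q4_K_M.gguf",
    n_gpu_layers=50,  # set to 0 if no GPU acceleration is available
)

out = llm("AI is going to", max_tokens=64)
print(out["choices"][0]["text"])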
wehapi changed discussion status to closed