Error running on llama-cpp-python

#7
by celsowm - opened
from llama_cpp import Llama
from llama_cpp.llama_chat_format import Llava16ChatHandler

chat_handler = Llava16ChatHandler(clip_model_path="llms/mmproj-model-f16.gguf")
llm = Llama(
    model_path="llms/ggml-model-Q4_K_M.gguf",
    chat_handler=chat_handler,
    n_gpu_layers=-1,  # offload all layers to the GPU
    n_ctx=2048,
)

I've got this error:

clip_model_load: loaded meta data with 18 key-value pairs and 455 tensors from llms/mmproj-model-f16.gguf
clip_model_load: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
clip_model_load: - kv 0: general.architecture str = clip
clip_model_load: - kv 1: clip.has_text_encoder bool = false
clip_model_load: - kv 2: clip.has_vision_encoder bool = true
clip_model_load: - kv 3: clip.has_minicpmv_projector bool = true
clip_model_load: - kv 4: general.file_type u32 = 1
clip_model_load: - kv 5: general.description str = image encoder for MiniCPM-V
clip_model_load: - kv 6: clip.projector_type str = resampler
clip_model_load: - kv 7: clip.vision.image_size u32 = 448
clip_model_load: - kv 8: clip.vision.patch_size u32 = 14
clip_model_load: - kv 9: clip.vision.embedding_length u32 = 1152
clip_model_load: - kv 10: clip.vision.feed_forward_length u32 = 4304
clip_model_load: - kv 11: clip.vision.projection_dim u32 = 0
clip_model_load: - kv 12: clip.vision.attention.head_count u32 = 16
clip_model_load: - kv 13: clip.vision.attention.layer_norm_epsilon f32 = 0.000001
clip_model_load: - kv 14: clip.vision.block_count u32 = 26
clip_model_load: - kv 15: clip.vision.image_mean arr[f32,3] = [0.500000, 0.500000, 0.500000]
clip_model_load: - kv 16: clip.vision.image_std arr[f32,3] = [0.500000, 0.500000, 0.500000]
clip_model_load: - kv 17: clip.use_gelu bool = true
clip_model_load: - type f32: 285 tensors
clip_model_load: - type f16: 170 tensors
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 3060, compute capability 8.6, VMM: yes
clip_model_load: CLIP using CUDA backend
/tmp/pip-install-n6j_fj7f/llama-cpp-python_f85648692d0b48e6b8979fe4d1ed70ae/vendor/llama.cpp/examples/llava/clip.cpp:1032: GGML_ASSERT(new_clip->has_llava_projector) failed
/home/celso/.local/lib/python3.12/site-packages/llama_cpp/lib/libggml.so(+0x344ab)[0x76b6a86344ab]
/home/celso/.local/lib/python3.12/site-packages/llama_cpp/lib/libggml.so(ggml_abort+0x163)[0x76b6a86360e3]
/home/celso/.local/lib/python3.12/site-packages/llama_cpp/lib/libllava.so(clip_model_load+0x5c4b)[0x76b6a7f6b75b]
/lib/x86_64-linux-gnu/libffi.so.8(+0x7b16)[0x76b6b981eb16]
/lib/x86_64-linux-gnu/libffi.so.8(+0x43ef)[0x76b6b981b3ef]
/lib/x86_64-linux-gnu/libffi.so.8(ffi_call+0x12e)[0x76b6b981e0be]
/usr/lib/python3.12/lib-dynload/_ctypes.cpython-312-x86_64-linux-gnu.so(+0xe11c)[0x76b6b983111c]
/usr/lib/python3.12/lib-dynload/_ctypes.cpython-312-x86_64-linux-gnu.so(+0x92af)[0x76b6b982c2af]
python3(_PyObject_MakeTpCall+0x75)[0x548f55]
python3(_PyEval_EvalFrameDefault+0xa89)[0x5d7499]
python3(_PyObject_Call_Prepend+0x18a)[0x54a86a]
python3[0x59dfef]
python3[0x599ab3]
python3(_PyObject_MakeTpCall+0x13e)[0x54901e]
python3(_PyEval_EvalFrameDefault+0xa89)[0x5d7499]
python3(PyEval_EvalCode+0x15b)[0x5d59ab]
python3[0x608ac2]
python3[0x6b4d83]
python3(_PyRun_SimpleFileObject+0x1aa)[0x6b4aea]
python3(_PyRun_AnyFileObject+0x4f)[0x6b491f]
python3(Py_RunMain+0x3b5)[0x6bc9c5]
python3(Py_BytesMain+0x2d)[0x6bc4ad]
/lib/x86_64-linux-gnu/libc.so.6(+0x2a1ca)[0x76b6b962a1ca]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x8b)[0x76b6b962a28b]
python3(_start+0x25)[0x657925]
Aborted (core dumped)
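The assertion at clip.cpp:1032 (`GGML_ASSERT(new_clip->has_llava_projector)`) fires because the mmproj metadata dumped above (kv 3: `clip.has_minicpmv_projector = true`, kv 6: `clip.projector_type = resampler`) describes a MiniCPM-V projector, not a LLaVA one, so `Llava16ChatHandler` refuses to load it. One way to check which projector an mmproj file contains before loading it is to read its GGUF metadata directly. Below is a minimal stdlib-only sketch following the GGUF v3 layout; the `read_gguf_metadata` helper is hypothetical, not part of llama-cpp-python:

```python
import struct

# Fixed-size GGUF scalar value types: type id -> (struct format, byte size)
_SCALARS = {
    0: ("<B", 1), 1: ("<b", 1), 2: ("<H", 2), 3: ("<h", 2),
    4: ("<I", 4), 5: ("<i", 4), 6: ("<f", 4), 7: ("<?", 1),
    10: ("<Q", 8), 11: ("<q", 8), 12: ("<d", 8),
}

def _read_string(f):
    # GGUF string: u64 length followed by UTF-8 bytes
    (n,) = struct.unpack("<Q", f.read(8))
    return f.read(n).decode("utf-8")

def _read_value(f, vtype):
    if vtype in _SCALARS:
        fmt, size = _SCALARS[vtype]
        (v,) = struct.unpack(fmt, f.read(size))
        return v
    if vtype == 8:  # string
        return _read_string(f)
    if vtype == 9:  # array: u32 element type, u64 count, then elements
        (etype,) = struct.unpack("<I", f.read(4))
        (count,) = struct.unpack("<Q", f.read(8))
        return [_read_value(f, etype) for _ in range(count)]
    raise ValueError(f"unknown GGUF value type {vtype}")

def read_gguf_metadata(path):
    """Return the key/value metadata block of a GGUF file as a dict."""
    with open(path, "rb") as f:
        if f.read(4) != b"GGUF":
            raise ValueError("not a GGUF file")
        version, n_tensors, n_kv = struct.unpack("<IQQ", f.read(20))
        meta = {}
        for _ in range(n_kv):
            key = _read_string(f)
            (vtype,) = struct.unpack("<I", f.read(4))
            meta[key] = _read_value(f, vtype)
        return meta
```

For example, `read_gguf_metadata("llms/mmproj-model-f16.gguf").get("clip.has_minicpmv_projector")` would return `True` for the file above. If the metadata confirms a MiniCPM-V projector, a matching chat handler is needed rather than `Llava16ChatHandler`; recent llama-cpp-python releases ship `MiniCPMv26ChatHandler` in `llama_cpp.llama_chat_format`, assuming your installed version includes it.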
