How to use this with llama-cpp or ollama to create image embeddings?

#1
by tisu1902 - opened

First of all, thanks for the quant! I'm trying to use this to create image embeddings with llama-cpp (Python bindings) or ollama, but I don't know how.

That's likely because we don't provide the extra files needed for vision tasks; we only provide the LLM portion. It's on my todo list to also provide the other files automatically, but this is very model specific, and they usually don't need quantizing, only extraction with the model-specific code. Normally they are easy to come by, but in this case... not.
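For context, the missing piece is the multimodal projector ("mmproj") file that llama.cpp's vision path needs alongside the LLM GGUF. As far as I know llama-cpp-python doesn't expose a dedicated image-embedding call; the closest is the multimodal chat API, and even that needs the projector file. A minimal sketch of that usage, assuming you had both files (file names and the Llava15ChatHandler choice below are illustrative assumptions, the right handler depends on the model family):

```python
# Minimal sketch, assuming BOTH files exist locally:
#   model-Q4_K_M.gguf  - the quantized LLM part (what this repo provides)
#   mmproj-f16.gguf    - the vision/projector part (the missing piece)
# File names and the handler class are assumptions for illustration.
from llama_cpp import Llama
from llama_cpp.llama_chat_format import Llava15ChatHandler

chat_handler = Llava15ChatHandler(clip_model_path="mmproj-f16.gguf")

llm = Llama(
    model_path="model-Q4_K_M.gguf",
    chat_handler=chat_handler,
    n_ctx=4096,  # large enough to hold the image tokens
)

# Without the mmproj file, image inputs like this cannot be encoded at all.
response = llm.create_chat_completion(
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": "file:///path/to/image.png"}},
                {"type": "text", "text": "Describe this image."},
            ],
        }
    ]
)
print(response["choices"][0]["message"]["content"])
```

The same limitation applies to ollama: its multimodal models bundle the projector, so an LLM-only GGUF won't handle images.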

@nicoboss do you know how to extract these files? I could potentially do this as part of quantization or conversion.
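In case it helps, "extraction" here usually means loading the original model with its own (transformers) code and saving the vision encoder and projector weights separately, which then go through llama.cpp's model-specific conversion to produce the mmproj GGUF. A very rough sketch, assuming a LLaVA-style layout; the attribute names and model id are placeholders and differ between model families:

```python
# Rough sketch of the "extraction with model-specific code" step.
# The model id and the attribute names (vision_tower, multi_modal_projector)
# are assumptions for a LLaVA-style model and vary per model family.
import torch
from transformers import AutoModelForVision2Seq

model = AutoModelForVision2Seq.from_pretrained(
    "org/some-vision-model",  # placeholder model id
    torch_dtype=torch.float16,
)

# Pull out the parts llama.cpp packs into the mmproj file: the image encoder
# and the projector that maps image features into the LLM's embedding space.
torch.save(model.vision_tower.state_dict(), "vision_tower.pt")
torch.save(model.multi_modal_projector.state_dict(), "multi_modal_projector.pt")

# These tensors would then be fed to llama.cpp's model-specific conversion
# script to produce an mmproj-*.gguf that pairs with the quantized LLM GGUF.
```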
