---
license: mit
language:
- en
base_model: meditsolutions/MedIT-Mesh-3B-Instruct
tags:
- llama-cpp
- gguf-my-repo
---

# IntelligentEstate/MedIT-Mesh-3B-Instruct-Q8_0-GGUF

Model for swarm/edge use. Multi-check quant training for the most ideal and coherent model.

This model was converted to GGUF format from [`meditsolutions/MedIT-Mesh-3B-Instruct`](https://huggingface.co/meditsolutions/MedIT-Mesh-3B-Instruct) using llama.cpp.
Refer to the [original model card](https://huggingface.co/meditsolutions/MedIT-Mesh-3B-Instruct) for more details on the model.

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo IntelligentEstate/MedIT-Mesh-3B-Instruct-Q8_0-GGUF --hf-file medit-mesh-3b-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```
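
For the server mode mentioned above, a minimal sketch of the equivalent invocation, assuming the same repo and file names (the `-c 2048` context size is an example value, not a requirement of this model):

### Server:
```bash
# Start an OpenAI-compatible HTTP server backed by this GGUF file
llama-server --hf-repo IntelligentEstate/MedIT-Mesh-3B-Instruct-Q8_0-GGUF --hf-file medit-mesh-3b-instruct-q8_0.gguf -c 2048
```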