---
language:
- en
license: other
license_name: qwen
license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE
library_name: transformers
tags:
- generated_from_trainer
- llama-cpp
- gguf-my-repo
base_model: fblgit/miniclaus-qw1.5B-UNAMGS
datasets:
- Magpie-Align/Magpie-Pro-MT-300K-v0.1
model-index:
- name: miniclaus-qw1.5B-UNAMGS
  results: []
---

# Still the best little guy for its size. Thanks for the Christmas present: fblgit/miniclaus-qw1.5B-UNAMGS-Q8_0-GGUF

This model was converted to GGUF format from [`fblgit/miniclaus-qw1.5B-UNAMGS`](https://huggingface.co/fblgit/miniclaus-qw1.5B-UNAMGS) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/fblgit/miniclaus-qw1.5B-UNAMGS) for more details on the model.

![Screenshot 2024-12-18 at 09-00-31 Ideogram.png](https://cdn-uploads.huggingface.co/production/uploads/6593502ca2607099284523db/URO1u7bLeNetm8BSbX9H2.png)

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:

```bash
llama-cli --hf-repo Intelligentestate/fblgit_miniclaus-qw1.5B-UNAMGS-Q8_0-GGUF --hf-file miniclaus-qw1.5b-unamgs-q8_0.gguf -p "The meaning to life and the universe is"
```

### GPT4All/Ollama:

Use the standard Qwen chat templates and prompting, and open up the context window if you need longer generations.
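For serving rather than one-shot CLI use, the same GGUF can be loaded with llama.cpp's HTTP server. A minimal sketch; the context size `-c 2048` is an illustrative value, not something specified by the original card:

```bash
# Start llama.cpp's OpenAI-compatible HTTP server (default port 8080),
# pulling the quantized file directly from the Hugging Face repo.
llama-server --hf-repo Intelligentestate/fblgit_miniclaus-qw1.5B-UNAMGS-Q8_0-GGUF \
  --hf-file miniclaus-qw1.5b-unamgs-q8_0.gguf \
  -c 2048
```

Once running, you can point any OpenAI-style client at `http://localhost:8080/v1`.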
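For Ollama specifically, the standard Qwen (ChatML-style) template and a wider context window can be pinned in a Modelfile. A minimal sketch, assuming the GGUF file sits in the current directory; `num_ctx 8192` is an illustrative value, not taken from the original card:

```
# Modelfile: load the local quantized GGUF
FROM ./miniclaus-qw1.5b-unamgs-q8_0.gguf

# Standard Qwen ChatML prompt format
TEMPLATE """<|im_start|>system
{{ .System }}<|im_end|>
<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant
"""
PARAMETER stop <|im_end|>

# Open up the context window for longer generations (illustrative value)
PARAMETER num_ctx 8192
```

Build and run it with `ollama create miniclaus -f Modelfile` followed by `ollama run miniclaus`.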