---
language:
- en
license: other
license_name: qwen
license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE
library_name: transformers
tags:
- generated_from_trainer
- llama-cpp
- gguf-my-repo
base_model: fblgit/miniclaus-qw1.5B-UNAMGS
datasets:
- Magpie-Align/Magpie-Pro-MT-300K-v0.1
model-index:
- name: miniclaus-qw1.5B-UNAMGS
results: []
---
# Still the best little guy for its size, THANKS for the Christmas present: FBLGIT/miniclaus-qw1.5B-UNAMGS-Q8_0-GGUF
This model was converted to GGUF format from [`fblgit/miniclaus-qw1.5B-UNAMGS`](https://huggingface.co/fblgit/miniclaus-qw1.5B-UNAMGS) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/fblgit/miniclaus-qw1.5B-UNAMGS) for more details on the model.
![Screenshot 2024-12-18 at 09-00-31 Ideogram.png](/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F6593502ca2607099284523db%2FURO1u7bLeNetm8BSbX9H2.png)
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Intelligentestate/fblgit_miniclaus-qw1.5B-UNAMGS-Q8_0-GGUF --hf-file miniclaus-qw1.5b-unamgs-q8_0.gguf -p "The meaning to life and the universe is"
```
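Alternatively, you can serve the model over HTTP with the llama.cpp server; a minimal sketch, assuming the same repo and file names as the CLI example above (the context size of 2048 is an illustrative choice, not a requirement):

```bash
llama-server --hf-repo Intelligentestate/fblgit_miniclaus-qw1.5B-UNAMGS-Q8_0-GGUF --hf-file miniclaus-qw1.5b-unamgs-q8_0.gguf -c 2048
```

Once running, the server exposes an OpenAI-compatible chat completions endpoint on `http://localhost:8080` by default.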
### GPT4All/Ollama: use the standard Qwen chat template and prompting; increase the context window as needed for longer outputs.
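For Ollama, a minimal `Modelfile` sketch that loads a locally downloaded copy of the GGUF file and enlarges the context window (the file path and `num_ctx` value here are illustrative assumptions):

```
# Modelfile (illustrative): point FROM at your downloaded GGUF file
FROM ./miniclaus-qw1.5b-unamgs-q8_0.gguf
# widen the context window for longer prompts/outputs
PARAMETER num_ctx 4096
```

Create and run it with `ollama create miniclaus -f Modelfile` followed by `ollama run miniclaus`.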