fuzzy-mittenz committed on
Commit ef2c1e3 · verified · 1 Parent(s): 1e15b0e

Update README.md


![Screenshot 2024-12-18 at 09-00-31 Ideogram.png](https://cdn-uploads.huggingface.co/production/uploads/6593502ca2607099284523db/URO1u7bLeNetm8BSbX9H2.png)

Files changed (1)

README.md +7 -27
README.md CHANGED
@@ -17,10 +17,14 @@ model-index:
  results: []
  ---
 
- # fuzzy-mittenz/miniclaus-qw1.5B-UNAMGS-Q8_0-GGUF
+ # Still the best little guy for its size, THANKS for the Christmas present FBLGIT/miniclaus-qw1.5B-UNAMGS-Q8_0-GGUF
  This model was converted to GGUF format from [`fblgit/miniclaus-qw1.5B-UNAMGS`](https://huggingface.co/fblgit/miniclaus-qw1.5B-UNAMGS) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
  Refer to the [original model card](https://huggingface.co/fblgit/miniclaus-qw1.5B-UNAMGS) for more details on the model.
 
+
+ ![Screenshot 2024-12-18 at 09-00-31 Ideogram.png](https://cdn-uploads.huggingface.co/production/uploads/6593502ca2607099284523db/URO1u7bLeNetm8BSbX9H2.png)
+
+
  ## Use with llama.cpp
  Install llama.cpp through brew (works on Mac and Linux)
 
@@ -32,31 +36,7 @@ Invoke the llama.cpp server or the CLI.
 
  ### CLI:
  ```bash
- llama-cli --hf-repo fuzzy-mittenz/miniclaus-qw1.5B-UNAMGS-Q8_0-GGUF --hf-file miniclaus-qw1.5b-unamgs-q8_0.gguf -p "The meaning to life and the universe is"
+ llama-cli --hf-repo Intelligentestate/fblgit_miniclaus-qw1.5B-UNAMGS-Q8_0-GGUF --hf-file miniclaus-qw1.5b-unamgs-q8_0.gguf -p "The meaning to life and the universe is"
  ```
 
- ### Server:
- ```bash
- llama-server --hf-repo fuzzy-mittenz/miniclaus-qw1.5B-UNAMGS-Q8_0-GGUF --hf-file miniclaus-qw1.5b-unamgs-q8_0.gguf -c 2048
- ```
-
- Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo as well.
-
- Step 1: Clone llama.cpp from GitHub.
- ```
- git clone https://github.com/ggerganov/llama.cpp
- ```
-
- Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
- ```
- cd llama.cpp && LLAMA_CURL=1 make
- ```
-
- Step 3: Run inference through the main binary.
- ```
- ./llama-cli --hf-repo fuzzy-mittenz/miniclaus-qw1.5B-UNAMGS-Q8_0-GGUF --hf-file miniclaus-qw1.5b-unamgs-q8_0.gguf -p "The meaning to life and the universe is"
- ```
- or
- ```
- ./llama-server --hf-repo fuzzy-mittenz/miniclaus-qw1.5B-UNAMGS-Q8_0-GGUF --hf-file miniclaus-qw1.5b-unamgs-q8_0.gguf -c 2048
- ```
+ ### GPT4All/Ollama: use standard Qwen templates/prompting; open up the context window for longer outputs
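The "Install llama.cpp through brew" line kept as context in the first hunk refers to the standard Homebrew step from the GGUF-my-repo template, which this commit leaves unchanged; a one-line sketch of that step:

```bash
brew install llama.cpp
```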
 
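The new GPT4All/Ollama note is terse; here is a minimal Ollama Modelfile sketch of what it suggests. It assumes a locally downloaded copy of `miniclaus-qw1.5b-unamgs-q8_0.gguf` and Qwen's standard ChatML prompt format; the model name `miniclaus` and the `num_ctx` value are illustrative, not part of this repo.

```
# Modelfile sketch: load the local GGUF and widen the context window
FROM ./miniclaus-qw1.5b-unamgs-q8_0.gguf

# Illustrative context length; adjust to fit your RAM
PARAMETER num_ctx 8192

# Standard Qwen (ChatML) prompt template
TEMPLATE """{{ if .System }}<|im_start|>system
{{ .System }}<|im_end|>
{{ end }}<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant
"""
```

Build and run it with Ollama's standard commands:

```bash
ollama create miniclaus -f Modelfile
ollama run miniclaus "The meaning to life and the universe is"
```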