fuzzy-mittenz
committed on
Update README.md
README.md CHANGED
@@ -14,11 +14,17 @@ tags:
 - qwen
 - qwen-coder
 - llama-cpp
-- gguf-my-repo
+datasets:
+- IntelligentEstate/The_Key
 ---
 
-# fuzzy-mittenz/Qwen2.5-Coder-3B-Instruct-Q8_0-GGUF
-This model was converted to GGUF format from [`Qwen/Qwen2.5-Coder-3B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-Coder-3B-Instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
+# IntelligentEstate/Replicant_Operator_ed-Q2-iQ8_0.gguf
+For those who need more power
+
+![2b13cf8d-79b3-46e7-83b5-7e7290cc6307.jpg](https://cdn-uploads.huggingface.co/production/uploads/6593502ca2607099284523db/wVJJxU_s2QTLU0W5IOpK0.jpeg)
+
+
+This model was converted to GGUF format from [`Qwen/Qwen2.5-Coder-3B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-Coder-3B-Instruct) using llama.cpp
 Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-Coder-3B-Instruct) for more details on the model.
 
 ## Use with llama.cpp
@@ -28,35 +34,4 @@ Install llama.cpp through brew (works on Mac and Linux)
 brew install llama.cpp
 
 ```
-Invoke the llama.cpp server or the CLI.
-
-### CLI:
-```bash
-llama-cli --hf-repo fuzzy-mittenz/Qwen2.5-Coder-3B-Instruct-Q8_0-GGUF --hf-file qwen2.5-coder-3b-instruct-q8_0.gguf -p "The meaning to life and the universe is"
-```
-
-### Server:
-```bash
-llama-server --hf-repo fuzzy-mittenz/Qwen2.5-Coder-3B-Instruct-Q8_0-GGUF --hf-file qwen2.5-coder-3b-instruct-q8_0.gguf -c 2048
-```
-
-Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
-
-Step 1: Clone llama.cpp from GitHub.
-```
-git clone https://github.com/ggerganov/llama.cpp
-```
-
-Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
-```
-cd llama.cpp && LLAMA_CURL=1 make
-```
-
-Step 3: Run inference through the main binary.
-```
-./llama-cli --hf-repo fuzzy-mittenz/Qwen2.5-Coder-3B-Instruct-Q8_0-GGUF --hf-file qwen2.5-coder-3b-instruct-q8_0.gguf -p "The meaning to life and the universe is"
-```
-or
-```
-./llama-server --hf-repo fuzzy-mittenz/Qwen2.5-Coder-3B-Instruct-Q8_0-GGUF --hf-file qwen2.5-coder-3b-instruct-q8_0.gguf -c 2048
-```
+Invoke the llama.cpp server or the CLI.
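The updated card states the model was converted to GGUF format from `Qwen/Qwen2.5-Coder-3B-Instruct` using llama.cpp, but no longer shows the conversion itself. As a reference point, here is a minimal sketch of the standard llama.cpp convert-and-quantize flow; the checkpoint path, output file names, and the Q8_0 target are assumptions inferred from the repo name, not the author's recorded steps.

```bash
# Hedged sketch: one standard way to produce a Q8_0 GGUF with llama.cpp.
# The source path and output file names below are assumptions.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
pip install -r requirements.txt                      # Python deps for the converter
cmake -B build && cmake --build build --config Release

# Convert the HF checkpoint to an f16 GGUF, then quantize f16 -> Q8_0.
python convert_hf_to_gguf.py /path/to/Qwen2.5-Coder-3B-Instruct \
    --outfile qwen2.5-coder-3b-instruct-f16.gguf --outtype f16
./build/bin/llama-quantize qwen2.5-coder-3b-instruct-f16.gguf \
    qwen2.5-coder-3b-instruct-q8_0.gguf Q8_0
```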
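After this commit the README stops at "Invoke the llama.cpp server or the CLI." without example commands. For completeness, the removed invocations would look roughly like this when pointed at the renamed repo; the repo id is taken from the new title and the `--hf-file` name is carried over from the old commands, so both are assumptions to verify against the repo's actual file list.

```bash
# Hedged sketch: the removed CLI and server invocations, retargeted at the
# renamed repo. Repo id and GGUF file name are assumptions.
llama-cli --hf-repo IntelligentEstate/Replicant_Operator_ed-Q2-iQ8_0.gguf \
    --hf-file qwen2.5-coder-3b-instruct-q8_0.gguf \
    -p "The meaning to life and the universe is"

# Or serve the model over HTTP with a 2048-token context window:
llama-server --hf-repo IntelligentEstate/Replicant_Operator_ed-Q2-iQ8_0.gguf \
    --hf-file qwen2.5-coder-3b-instruct-q8_0.gguf -c 2048
```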