apepkuss79 committed on
Commit 2ca492f · verified · 1 Parent(s): a115428

Upload README.md with huggingface_hub

Files changed (1): README.md (+16 -16)
README.md CHANGED
@@ -28,9 +28,7 @@ tags:
 
 ## Run with LlamaEdge
 
- - LlamaEdge version: coming soon
-
- <!-- - LlamaEdge version: [v0.14.3](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.14.3) -->
+ - LlamaEdge version: [v0.14.16](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.14.16)
 
 - Prompt template
 
@@ -67,22 +65,24 @@ tags:
   --ctx-size 32000
   ```
 
- <!-- ## Quantized GGUF Models
+ ## Quantized GGUF Models
 
 | Name | Quant method | Bits | Size | Use case |
 | ---- | ---- | ---- | ---- | ----- |
- | [QwQ-32B-Preview-Q2_K.gguf](https://huggingface.co/second-state/QwQ-32B-Preview-GGUF/blob/main/QwQ-32B-Preview-Q2_K.gguf) | Q2_K | 2 | 5.77 GB| smallest, significant quality loss - not recommended for most purposes |
- | [QwQ-32B-Preview-Q3_K_L.gguf](https://huggingface.co/second-state/QwQ-32B-Preview-GGUF/blob/main/QwQ-32B-Preview-Q3_K_L.gguf) | Q3_K_L | 3 | 7.92 GB| small, substantial quality loss |
- | [QwQ-32B-Preview-Q3_K_M.gguf](https://huggingface.co/second-state/QwQ-32B-Preview-GGUF/blob/main/QwQ-32B-Preview-Q3_K_M.gguf) | Q3_K_M | 3 | 7.34 GB| very small, high quality loss |
- | [QwQ-32B-Preview-Q3_K_S.gguf](https://huggingface.co/second-state/QwQ-32B-Preview-GGUF/blob/main/QwQ-32B-Preview-Q3_K_S.gguf) | Q3_K_S | 3 | 6.66 GB| very small, high quality loss |
- | [QwQ-32B-Preview-Q4_0.gguf](https://huggingface.co/second-state/QwQ-32B-Preview-GGUF/blob/main/QwQ-32B-Preview-Q4_0.gguf) | Q4_0 | 4 | 8.52 GB| legacy; small, very high quality loss - prefer using Q3_K_M |
- | [QwQ-32B-Preview-Q4_K_M.gguf](https://huggingface.co/second-state/QwQ-32B-Preview-GGUF/blob/main/QwQ-32B-Preview-Q4_K_M.gguf) | Q4_K_M | 4 | 8.99 GB| medium, balanced quality - recommended |
- | [QwQ-32B-Preview-Q4_K_S.gguf](https://huggingface.co/second-state/QwQ-32B-Preview-GGUF/blob/main/QwQ-32B-Preview-Q4_K_S.gguf) | Q4_K_S | 4 | 8.57 GB| small, greater quality loss |
- | [QwQ-32B-Preview-Q5_0.gguf](https://huggingface.co/second-state/QwQ-32B-Preview-GGUF/blob/main/QwQ-32B-Preview-Q5_0.gguf) | Q5_0 | 5 | 10.3 GB| legacy; medium, balanced quality - prefer using Q4_K_M |
+ | [QwQ-32B-Preview-Q2_K.gguf](https://huggingface.co/second-state/QwQ-32B-Preview-GGUF/blob/main/QwQ-32B-Preview-Q2_K.gguf) | Q2_K | 2 | 12.3 GB| smallest, significant quality loss - not recommended for most purposes |
+ | [QwQ-32B-Preview-Q3_K_L.gguf](https://huggingface.co/second-state/QwQ-32B-Preview-GGUF/blob/main/QwQ-32B-Preview-Q3_K_L.gguf) | Q3_K_L | 3 | 17.2 GB| small, substantial quality loss |
+ | [QwQ-32B-Preview-Q3_K_M.gguf](https://huggingface.co/second-state/QwQ-32B-Preview-GGUF/blob/main/QwQ-32B-Preview-Q3_K_M.gguf) | Q3_K_M | 3 | 15.9 GB| very small, high quality loss |
+ | [QwQ-32B-Preview-Q3_K_S.gguf](https://huggingface.co/second-state/QwQ-32B-Preview-GGUF/blob/main/QwQ-32B-Preview-Q3_K_S.gguf) | Q3_K_S | 3 | 14.4 GB| very small, high quality loss |
+ | [QwQ-32B-Preview-Q4_0.gguf](https://huggingface.co/second-state/QwQ-32B-Preview-GGUF/blob/main/QwQ-32B-Preview-Q4_0.gguf) | Q4_0 | 4 | 18.6 GB| legacy; small, very high quality loss - prefer using Q3_K_M |
+ | [QwQ-32B-Preview-Q4_K_M.gguf](https://huggingface.co/second-state/QwQ-32B-Preview-GGUF/blob/main/QwQ-32B-Preview-Q4_K_M.gguf) | Q4_K_M | 4 | 19.9 GB| medium, balanced quality - recommended |
+ | [QwQ-32B-Preview-Q4_K_S.gguf](https://huggingface.co/second-state/QwQ-32B-Preview-GGUF/blob/main/QwQ-32B-Preview-Q4_K_S.gguf) | Q4_K_S | 4 | 18.8 GB| small, greater quality loss |
+ | [QwQ-32B-Preview-Q5_0.gguf](https://huggingface.co/second-state/QwQ-32B-Preview-GGUF/blob/main/QwQ-32B-Preview-Q5_0.gguf) | Q5_0 | 5 | 22.6 GB| legacy; medium, balanced quality - prefer using Q4_K_M |
 | [QwQ-32B-Preview-Q5_K_M.gguf](https://huggingface.co/second-state/QwQ-32B-Preview-GGUF/blob/main/QwQ-32B-Preview-Q5_K_M.gguf) | Q5_K_M | 5 | 23.3 GB| large, very low quality loss - recommended |
- | [QwQ-32B-Preview-Q5_K_S.gguf](https://huggingface.co/second-state/QwQ-32B-Preview-GGUF/blob/main/QwQ-32B-Preview-Q5_K_S.gguf) | Q5_K_S | 5 | 10.3 GB| large, low quality loss - recommended |
- | [QwQ-32B-Preview-Q6_K.gguf](https://huggingface.co/second-state/QwQ-32B-Preview-GGUF/blob/main/QwQ-32B-Preview-Q6_K.gguf) | Q6_K | 6 | 12.1 GB| very large, extremely low quality loss |
- | [QwQ-32B-Preview-Q8_0.gguf](https://huggingface.co/second-state/QwQ-32B-Preview-GGUF/blob/main/QwQ-32B-Preview-Q8_0.gguf) | Q8_0 | 8 | 15.1 GB| very large, extremely low quality loss - not recommended |
- | [QwQ-32B-Preview-f16.gguf](https://huggingface.co/second-state/QwQ-32B-Preview-GGUF/blob/main/QwQ-32B-Preview-f16.gguf) | f16 | 16 | 29.5 GB| | -->
+ | [QwQ-32B-Preview-Q5_K_S.gguf](https://huggingface.co/second-state/QwQ-32B-Preview-GGUF/blob/main/QwQ-32B-Preview-Q5_K_S.gguf) | Q5_K_S | 5 | 22.6 GB| large, low quality loss - recommended |
+ | [QwQ-32B-Preview-Q6_K.gguf](https://huggingface.co/second-state/QwQ-32B-Preview-GGUF/blob/main/QwQ-32B-Preview-Q6_K.gguf) | Q6_K | 6 | 26.9 GB| very large, extremely low quality loss |
+ | [QwQ-32B-Preview-Q8_0.gguf](https://huggingface.co/second-state/QwQ-32B-Preview-GGUF/blob/main/QwQ-32B-Preview-Q8_0.gguf) | Q8_0 | 8 | 34.8 GB| very large, extremely low quality loss - not recommended |
+ | [QwQ-32B-Preview-f16-00001-of-00003.gguf](https://huggingface.co/second-state/QwQ-32B-Preview-GGUF/blob/main/QwQ-32B-Preview-f16-00001-of-00003.gguf) | f16 | 16 | 29.8 GB| |
+ | [QwQ-32B-Preview-f16-00002-of-00003.gguf](https://huggingface.co/second-state/QwQ-32B-Preview-GGUF/blob/main/QwQ-32B-Preview-f16-00002-of-00003.gguf) | f16 | 16 | 29.8 GB| |
+ | [QwQ-32B-Preview-f16-00003-of-00003.gguf](https://huggingface.co/second-state/QwQ-32B-Preview-GGUF/blob/main/QwQ-32B-Preview-f16-00003-of-00003.gguf) | f16 | 16 | 5.87 GB| |
 
 *Quantized with llama.cpp b4120*
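
To make the updated table actionable, here is a minimal sketch of fetching one of the quantized files and serving it with LlamaEdge. Only `--ctx-size 32000` comes from the diff itself; the choice of the Q4_K_M file, the `chatml` prompt-template value, the `llama-api-server.wasm` release asset, and the `QwQ-32B-Preview` model name are assumptions modeled on typical second-state model cards, not text from this commit.

```bash
# Assumptions: huggingface_hub (for huggingface-cli) and WasmEdge with the GGML
# plugin are already installed; file names and flag values below are illustrative.

# Fetch a mid-size quant from this repo.
huggingface-cli download second-state/QwQ-32B-Preview-GGUF \
  QwQ-32B-Preview-Q4_K_M.gguf --local-dir .

# Fetch the LlamaEdge API server (release asset name assumed).
curl -LO https://github.com/LlamaEdge/LlamaEdge/releases/latest/download/llama-api-server.wasm

# Serve the model; --ctx-size 32000 matches the value shown in the diff,
# while the prompt-template and model-name values are assumptions.
wasmedge --dir .:. \
  --nn-preload default:GGML:AUTO:QwQ-32B-Preview-Q4_K_M.gguf \
  llama-api-server.wasm \
  --prompt-template chatml \
  --ctx-size 32000 \
  --model-name QwQ-32B-Preview
```

Once running, the server exposes an OpenAI-compatible chat endpoint (port 8080 by default in recent LlamaEdge releases).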
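
The f16 weights in the new table are split across three shards. Many llama.cpp-based runtimes can load a split GGUF by pointing at the first shard, but if a single file is needed, llama.cpp's gguf-split utility can merge them. The sketch below is a hedged example: the binary name (`llama-gguf-split`) and its availability depend on how llama.cpp was built or installed.

```bash
# Merge the three f16 shards into one GGUF file.
# Assumption: llama.cpp's gguf-split tool is installed as `llama-gguf-split`.
llama-gguf-split --merge \
  QwQ-32B-Preview-f16-00001-of-00003.gguf \
  QwQ-32B-Preview-f16.gguf
```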