roleplaiapp committed on
Commit e6633b8 · verified · 1 Parent(s): 3deaae9

Upload README.md with huggingface_hub

Files changed (1): README.md +46 -0
README.md ADDED
---
license_link: https://huggingface.co/Qwen/QwQ-32B-Preview/blob/main/LICENSE
language:
- en
base_model: Qwen/Qwen2.5-32B-Instruct
tags:
- llama-cpp
- QwQ-32B-Preview
- gguf
- Q4_K_M
- 32b
- QwQ
- qwen-2
- Qwen
- code
- math
- chat
- roleplay
- text-generation
- safetensors
- nlp
library_name: transformers
pipeline_tag: text-generation
---

# roleplaiapp/QwQ-32B-Preview-Q4_K_M-GGUF

**Repo:** `roleplaiapp/QwQ-32B-Preview-Q4_K_M-GGUF`
**Original Model:** `QwQ-32B-Preview`
**Organization:** `Qwen`
**Quantized File:** `qwq-32b-preview-q4_k_m.gguf`
**Quantization:** `GGUF`
**Quantization Method:** `Q4_K_M`
**Use Imatrix:** `False`
**Split Model:** `False`

## Overview
This is a GGUF Q4_K_M quantized version of [QwQ-32B-Preview](https://huggingface.co/Qwen/QwQ-32B-Preview).

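Because the file is a llama.cpp-compatible GGUF, it can be fetched and run locally with standard tooling. A minimal sketch, assuming a llama.cpp build that provides the `llama-cli` binary and the `huggingface_hub` CLI; the context size and prompt here are arbitrary illustrative choices, not part of this repo:

```shell
# Download the quantized file from this repo
huggingface-cli download roleplaiapp/QwQ-32B-Preview-Q4_K_M-GGUF \
  qwq-32b-preview-q4_k_m.gguf --local-dir .

# Start an interactive chat session with llama.cpp
llama-cli -m qwq-32b-preview-q4_k_m.gguf -cnv -c 4096 \
  -p "You are a helpful assistant."
```

A Q4_K_M quant of a 32B model is roughly 20 GB on disk, so check available RAM/VRAM before loading.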
## Quantization By
I often have idle A100 GPUs while building, testing, and training the RP app, so I put them to use quantizing models.
I hope the community finds these quantizations useful.

Andrew Webby @ [RolePlai](https://roleplai.app/)