roleplaiapp committed on
Commit 00204af · verified · 1 Parent(s): f655797

Upload README.md with huggingface_hub

Files changed (1): README.md (+49 -0)
README.md ADDED
---
datasets:
- PowerInfer/QWQ-LONGCOT-500K
- PowerInfer/LONGCOT-Refine-500K
base_model:
- Qwen/Qwen2.5-3B-Instruct
pipeline_tag: text-generation
language:
- en
library_name: transformers
tags:
- llama-cpp
- SmallThinker-3B
- gguf
- Q5_0
- 3b
- SmallThinker
- qwen
- PowerInfer
- code
- math
- chat
- roleplay
- text-generation
- safetensors
- nlp
---

# roleplaiapp/SmallThinker-3B-Preview-Q5_0-GGUF

**Repo:** `roleplaiapp/SmallThinker-3B-Preview-Q5_0-GGUF`
**Original Model:** `SmallThinker-3B`
**Organization:** `PowerInfer`
**Quantized File:** `smallthinker-3b-preview-q5_0.gguf`
**Quantization:** `GGUF`
**Quantization Method:** `Q5_0`
**Use Imatrix:** `False`
**Split Model:** `False`

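The file named in **Quantized File** can be fetched directly from this repo. The sketch below is not part of the original card and uses `huggingface_hub`; only the repo id and filename come from the fields above, the rest is illustrative.

```python
# Minimal sketch: download the Q5_0 GGUF listed in this card via huggingface_hub.
# repo_id and filename are taken from the card; everything else is an assumption.
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="roleplaiapp/SmallThinker-3B-Preview-Q5_0-GGUF",
    filename="smallthinker-3b-preview-q5_0.gguf",
)
print(gguf_path)  # local cache path of the downloaded file
```
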
## Overview
This is a GGUF Q5_0 quantized version of [SmallThinker-3B](https://huggingface.co/PowerInfer/SmallThinker-3B-Preview).

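As a usage illustration (also not from the original card), the quantized file can be run with `llama-cpp-python`; the parameter values below (`n_ctx`, `n_gpu_layers`, `max_tokens`) are assumptions to adjust for your hardware, not recommendations from PowerInfer or RolePlai.

```python
# Minimal sketch: run the Q5_0 GGUF with llama-cpp-python.
# The model path assumes the file was downloaded as shown earlier; all numeric
# settings are illustrative defaults, not tuned recommendations.
from llama_cpp import Llama

llm = Llama(
    model_path="smallthinker-3b-preview-q5_0.gguf",  # e.g. the path returned by hf_hub_download
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if available; set 0 for CPU-only
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Think step by step: what is 17 * 23?"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```
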
## Quantization By
I often have idle A100 GPUs while building, testing, and training the RP app, so I put them to use quantizing models.
I hope the community finds these quantizations useful.

Andrew Webby @ [RolePlai](https://roleplai.app/)