mradermacher
committed on
auto-patch README.md
README.md
CHANGED
@@ -4,6 +4,7 @@ datasets:
 - amphora/QwQ-LongCoT-130K
 language:
 - en
+- multilingual
 library_name: transformers
 license: mit
 license_link: https://huggingface.co/microsoft/phi-4/resolve/main/LICENSE
@@ -16,6 +17,8 @@ tags:
 - chat
 - conversational
 - phi3
+- reasoning
+- CoT
 ---
 ## About
 
@@ -27,7 +30,7 @@ tags:
 static quants of https://huggingface.co/Pinkstack/SuperThoughts-CoT-14B-16k-o1-QwQ
 
 <!-- provided-files -->
-weighted/imatrix quants
+weighted/imatrix quants are available at https://huggingface.co/mradermacher/SuperThoughts-CoT-14B-16k-o1-QwQ-i1-GGUF
 ## Usage
 
 If you are unsure how to use GGUF files, refer to one of [TheBloke's