roleplaiapp/QwQ-32B-Preview-Q6_K-GGUF
Repo: roleplaiapp/QwQ-32B-Preview-Q6_K-GGUF
Original Model: QwQ-32B-Preview
Organization: Qwen
Quantized File: qwq-32b-preview-q6_k.gguf
Quantization: GGUF
Quantization Method: Q6_K
Use Imatrix: False
Split Model: False
Overview
This is a GGUF Q6_K quantized version of Qwen's QwQ-32B-Preview.
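One way to run the quantized file locally is llama.cpp through its Python bindings. The sketch below is a minimal, unofficial example: it assumes llama-cpp-python and huggingface_hub are installed, and it uses the repo id and filename listed in the metadata above. Adjust the context size and GPU offload for your hardware.

```python
# Minimal sketch: download the GGUF file and run it with llama-cpp-python.
# Assumes `pip install llama-cpp-python huggingface_hub`.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Repo id and filename come from the metadata above.
model_path = hf_hub_download(
    repo_id="roleplaiapp/QwQ-32B-Preview-Q6_K-GGUF",
    filename="qwq-32b-preview-q6_k.gguf",
)

llm = Llama(
    model_path=model_path,
    n_ctx=4096,        # context window; raise it if you have the memory
    n_gpu_layers=-1,   # offload all layers to GPU when one is available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Briefly explain Q6_K quantization."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```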
Quantization By
I often have idle A100 GPUs while building, testing, and training the RolePlai app, so I put them to use quantizing models. I hope the community finds these quantizations useful.
Andrew Webby @ RolePlai