---
library_name: transformers
pipeline_tag: text-generation
tags:
  - 2-bit
  - 32b
  - Q2_K
  - deepseek
  - distill
  - gguf
  - llama-cpp
  - qwen
  - text-generation
  - uncensored
---

# roleplaiapp/DeepSeek-R1-Distill-Qwen-32B-Uncensored-Q2_K-GGUF

- **Repo:** roleplaiapp/DeepSeek-R1-Distill-Qwen-32B-Uncensored-Q2_K-GGUF
- **Original Model:** DeepSeek-R1-Distill-Qwen-32B-Uncensored
- **Quantized File:** DeepSeek-R1-Distill-Qwen-32B-Uncensored.Q2_K.gguf
- **Quantization:** GGUF
- **Quantization Method:** Q2_K

## Overview

This is a GGUF Q2_K quantized version of DeepSeek-R1-Distill-Qwen-32B-Uncensored.
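
As a rough usage sketch (not an official recipe), the quantized file can be downloaded with `huggingface_hub` and loaded through the `llama-cpp-python` bindings. The context size, GPU offload setting, and prompt below are illustrative assumptions; tune them for your hardware:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the Q2_K GGUF file from this repo (cached locally by huggingface_hub).
model_path = hf_hub_download(
    repo_id="roleplaiapp/DeepSeek-R1-Distill-Qwen-32B-Uncensored-Q2_K-GGUF",
    filename="DeepSeek-R1-Distill-Qwen-32B-Uncensored.Q2_K.gguf",
)

# Load the model; n_ctx and n_gpu_layers are example values, not requirements.
llm = Llama(
    model_path=model_path,
    n_ctx=4096,       # context window size
    n_gpu_layers=-1,  # offload all layers to GPU if one is available
)

# Simple text-generation call.
out = llm("Write a short greeting.", max_tokens=128, temperature=0.7)
print(out["choices"][0]["text"])
```

The same file should also work with the llama.cpp CLI and other GGUF-compatible runtimes.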

## Quantization By

I often have idle GPUs while building/testing for the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful.

Andrew Webby @ RolePlai.