---
base_model: deepseek-ai/DeepSeek-R1-Distill-Llama-70B
library_name: transformers
pipeline_tag: text-generation
tags:
- llama-cpp
- DeepSeek-R1-Distill-Llama-70B
- gguf
- Q4_K_S
- 70b
- llama
- deepseek-r1
- deepseek-ai
- code
- math
- chat
- roleplay
- text-generation
- safetensors
- nlp
---

# roleplaiapp/DeepSeek-R1-Distill-Llama-70B-Q4_K_S-GGUF

**Repo:** `roleplaiapp/DeepSeek-R1-Distill-Llama-70B-Q4_K_S-GGUF`
**Original Model:** `DeepSeek-R1-Distill-Llama-70B`
**Organization:** `deepseek-ai`
**Quantized File:** `deepseek-r1-distill-llama-70b-q4_k_s.gguf`
**Quantization:** `GGUF`
**Quantization Method:** `Q4_K_S`
**Use Imatrix:** `False`
**Split Model:** `False`

## Overview
This is a GGUF Q4_K_S quantized version of [DeepSeek-R1-Distill-Llama-70B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-70B).

## Quantization By
I often have idle A100 GPUs while building, testing, and training the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful.

Andrew Webby @ [RolePlai](https://roleplai.app/)
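
## Usage
A minimal sketch of loading the quantized file with llama-cpp-python, assuming the package is installed and the GGUF file from this repo has been downloaded locally; the local path, context size, and prompt below are illustrative.

```python
# Minimal sketch: load the Q4_K_S GGUF with llama-cpp-python and generate text.
# Assumes `pip install llama-cpp-python` and that the quantized file has been
# downloaded from this repo; the local path and prompt are illustrative.
from llama_cpp import Llama

llm = Llama(
    model_path="deepseek-r1-distill-llama-70b-q4_k_s.gguf",  # quantized file from this repo
    n_ctx=4096,        # context window; adjust to available memory
    n_gpu_layers=-1,   # offload all layers to GPU if built with GPU support
)

output = llm(
    "Explain the difference between Q4_K_S and Q4_K_M quantization.",
    max_tokens=256,
)
print(output["choices"][0]["text"])
```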