roleplaiapp committed on
Commit dc9086c · verified · 1 Parent(s): f171e98

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +43 -0
README.md ADDED
---
base_model: deepseek-ai/DeepSeek-R1-Distill-Llama-8B
library_name: transformers
pipeline_tag: text-generation
tags:
- llama-cpp
- DeepSeek-R1-Distill-Llama-8B
- gguf
- Q3_K_L
- 8b
- llama
- DeepSeek-R1
- deepseek-ai
- code
- math
- chat
- roleplay
- text-generation
- safetensors
- nlp
---

# roleplaiapp/DeepSeek-R1-Distill-Llama-8B-Q3_K_L-GGUF

**Repo:** `roleplaiapp/DeepSeek-R1-Distill-Llama-8B-Q3_K_L-GGUF`
**Original Model:** `DeepSeek-R1-Distill-Llama-8B`
**Organization:** `deepseek-ai`
**Quantized File:** `deepseek-r1-distill-llama-8b-q3_k_l.gguf`
**Quantization:** `GGUF`
**Quantization Method:** `Q3_K_L`
**Use Imatrix:** `False`
**Split Model:** `False`

## Overview
This is a GGUF Q3_K_L quantized version of [DeepSeek-R1-Distill-Llama-8B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B).
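
A typical way to use a GGUF quant like this one is to download the file from the Hub and run it with llama.cpp. The exact binary name and flags depend on your llama.cpp build (`llama-cli` in recent releases, `main` in older ones), so treat this as a sketch rather than a tested recipe:

```shell
# Fetch the quantized file from the Hub (requires `pip install huggingface_hub`)
huggingface-cli download roleplaiapp/DeepSeek-R1-Distill-Llama-8B-Q3_K_L-GGUF \
  deepseek-r1-distill-llama-8b-q3_k_l.gguf --local-dir .

# Start an interactive chat with llama.cpp's CLI in conversation mode
llama-cli -m deepseek-r1-distill-llama-8b-q3_k_l.gguf -cnv \
  -p "You are a helpful assistant."
```

At Q3_K_L the weights fit comfortably in well under 8 GB, so the model can run CPU-only or be partially offloaded to a GPU with llama.cpp's `-ngl` option.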

## Quantization By
I often have idle A100 GPUs while building, testing, and training the RP app, so I put them to use quantizing models.
I hope the community finds these quantizations useful.

Andrew Webby @ [RolePlai](https://roleplai.app/)