---
library_name: transformers
pipeline_tag: text-generation
tags:
- 12b
- 4-bit
- Q4_K_S
- gguf
- llama-cpp
- mag
- mell
- text-generation
---
# roleplaiapp/MN-12B-Mag-Mell-R1-Q4_K_S-GGUF
- **Repo:** `roleplaiapp/MN-12B-Mag-Mell-R1-Q4_K_S-GGUF`
- **Original Model:** `MN-12B-Mag-Mell-R1`
- **Quantized File:** `MN-12B-Mag-Mell-R1.Q4_K_S.gguf`
- **Quantization:** `GGUF`
- **Quantization Method:** `Q4_K_S`
## Overview
This repository provides a Q4_K_S (4-bit) GGUF quantization of MN-12B-Mag-Mell-R1, intended for use with llama.cpp and compatible runtimes.
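
### Usage

A minimal sketch of loading the quantized file locally, assuming the `llama-cpp-python` bindings are installed (`pip install llama-cpp-python huggingface_hub`); any llama.cpp-compatible runtime works equally well. The repo ID and filename match those listed above, while the context size and generation settings are illustrative.

```python
from llama_cpp import Llama

# Download the Q4_K_S GGUF file from this repo and load it.
# n_ctx is an illustrative context size; adjust to your memory budget.
llm = Llama.from_pretrained(
    repo_id="roleplaiapp/MN-12B-Mag-Mell-R1-Q4_K_S-GGUF",
    filename="MN-12B-Mag-Mell-R1.Q4_K_S.gguf",
    n_ctx=4096,
)

# Simple chat-style generation; max_tokens is illustrative.
output = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a short greeting."}],
    max_tokens=128,
)
print(output["choices"][0]["message"]["content"])
```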
## Quantization By
I often have idle GPUs while building and testing the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful.
Andrew Webby @ [RolePlai](https://roleplai.app/).