---
library_name: transformers
language:
- en
- fr
- it
- pt
- hi
- es
- th
- de
base_model:
- meta-llama/Llama-3.3-70B-Instruct
tags:
- llama-cpp
- Llama-3.3-70B-Instruct
- gguf
- llama
- 70b
- Q6_K
- meta-llama
- code
- math
- chat
- roleplay
- text-generation
- safetensors
- nlp
pipeline_tag: text-generation
---
# Llama-3.3-70B-Instruct-Q6_K-GGUF

- **Repo:** `roleplaiapp/Llama-3.3-70B-Instruct-Q6_K-GGUF`
- **Original Model:** `Llama-3.3-70B-Instruct`
- **Organization:** `meta-llama`
- **Quantized File:** `llama-3.3-70b-instruct-q6_k.gguf`
- **Quantization Format:** `GGUF`
- **Quantization Method:** `Q6_K`
- **Uses Imatrix:** `False`
- **Split Model:** `True`
## Overview

This is a GGUF Q6_K-quantized version of [Llama-3.3-70B-Instruct](https://huggingface.co/meta-llama/Llama-3.3-70B-Instruct).
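
As a rough sketch of how this quantization might be used, the snippet below loads the model through the `llama-cpp-python` bindings. The shard filename is hypothetical (check this repo's file list for the actual names); since this is a split GGUF, you pass the path of the first shard and llama.cpp picks up the remaining shards from the same directory.

```python
# Minimal usage sketch with llama-cpp-python (pip install llama-cpp-python).
# Assumes the shards were downloaded first, e.g. with:
#   huggingface-cli download roleplaiapp/Llama-3.3-70B-Instruct-Q6_K-GGUF
from llama_cpp import Llama

llm = Llama(
    # Hypothetical first-shard name; for split GGUFs, llama.cpp loads the
    # remaining shards from the same directory automatically.
    model_path="llama-3.3-70b-instruct-q6_k-00001-of-00002.gguf",
    n_ctx=8192,       # context window; lower this to reduce memory use
    n_gpu_layers=-1,  # offload all layers to GPU; set 0 for CPU-only
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello in one short sentence."}]
)
print(out["choices"][0]["message"]["content"])
```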
## Quantized By

I often have idle A100 GPUs while building, testing, and training the RP app, so I put them to use quantizing models.
I hope the community finds these quantizations useful.

Andrew Webby @ [RolePlai](https://roleplai.app/)