roleplaiapp/Virtuoso-Lite-Q5_K_M-GGUF

Repo: roleplaiapp/Virtuoso-Lite-Q5_K_M-GGUF
Original Model: Virtuoso-Lite
Quantized File: Virtuoso-Lite.Q5_K_M.gguf
Quantization: GGUF
Quantization Method: Q5_K_M
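The quantized file can be fetched programmatically. A minimal sketch using the huggingface_hub package (not part of the original card; assumed installed via `pip install huggingface_hub`):

```python
from huggingface_hub import hf_hub_download

# Download the Q5_K_M GGUF file from this repo into the local
# Hugging Face cache and return its resolved local path.
model_path = hf_hub_download(
    repo_id="roleplaiapp/Virtuoso-Lite-Q5_K_M-GGUF",
    filename="Virtuoso-Lite.Q5_K_M.gguf",
)
print(model_path)
```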

Overview

This is a GGUF Q5_K_M quantized version of Virtuoso-Lite. Q5_K_M is a 5-bit k-quant that trades a small amount of output quality for a substantially smaller file than the original weights.
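GGUF files are meant for llama.cpp-compatible runtimes. A minimal sketch with the llama-cpp-python bindings (my assumption, not specified by the card; the context size and GPU offload settings are illustrative):

```python
from llama_cpp import Llama

# Load the quantized model. n_gpu_layers=-1 offloads all layers to the GPU
# if llama-cpp-python was built with GPU support; otherwise it runs on CPU.
llm = Llama(
    model_path="Virtuoso-Lite.Q5_K_M.gguf",
    n_ctx=4096,       # illustrative context window
    n_gpu_layers=-1,
)

out = llm("Q: What is the GGUF format? A:", max_tokens=64)
print(out["choices"][0]["text"])
```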

Quantization By

I often have idle GPUs while building and testing the RolePlai app, so I put them to use quantizing models. I hope the community finds these quantizations useful.

Andrew Webby @ RolePlai.

GGUF Details

Model size: 10.3B params
Architecture: llama
Precision: 5-bit
