roleplaiapp/Codestral-22B-v0.1-Q2_K-GGUF

Repo: roleplaiapp/Codestral-22B-v0.1-Q2_K-GGUF
Original Model: Codestral-22B-v0.1
Organization: mistralai
Quantized File: codestral-22b-v0.1-q2_k.gguf
Quantization: GGUF
Quantization Method: Q2_K
Use Imatrix: False
Split Model: False
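
As a minimal sketch of how the file listed above could be fetched (assuming the huggingface_hub Python package is installed; the repo id and filename are taken from the metadata, everything else is illustrative):

```python
# Sketch: download the Q2_K GGUF file from the Hub with huggingface_hub.
# Requires `pip install huggingface_hub`.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="roleplaiapp/Codestral-22B-v0.1-Q2_K-GGUF",
    filename="codestral-22b-v0.1-q2_k.gguf",
)
print(model_path)  # local path to the downloaded GGUF file
```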

Overview

This is a GGUF Q2_K quantized version of mistralai's Codestral-22B-v0.1.
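
For illustration only, a hypothetical way to run this quantization locally with the llama-cpp-python bindings (not part of this repo; the package, context size, and prompt below are assumptions, not a recommended configuration):

```python
# Sketch: load a locally downloaded copy of the Q2_K GGUF file with
# llama-cpp-python and generate a short completion.
# Requires `pip install llama-cpp-python` and the file fetched earlier.
from llama_cpp import Llama

llm = Llama(
    model_path="codestral-22b-v0.1-q2_k.gguf",  # path to the downloaded file
    n_ctx=4096,  # context window; adjust to available memory
)

out = llm("Write a Python function that reverses a string.", max_tokens=128)
print(out["choices"][0]["text"])
```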

Quantization By

I often have idle A100 GPUs while building, testing, and training the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful.

Andrew Webby @ RolePlai

