Mistral 7B v0.2 - AWQ GGUF
These files are in GGUF format.
- Model creator: Mistral AI
- Original model: Mistral-7B-Instruct-v0.2
This model was converted to GGUF with llama.cpp, using the AWQ (activation-aware weight quantization) method.
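For reference, a conversion along these lines typically has two steps: export the original Hugging Face weights to a GGUF file, then quantize it with llama.cpp. The sketch below is only an outline of that workflow, not the exact commands used for these files; the script and binary names (`convert.py`, `quantize`) and the `--awq-path` option vary across llama.cpp versions, and the AWQ scale file path is a placeholder.

```sh
# Sketch only: export the HF model to GGUF (FP16), applying pre-computed AWQ scales,
# then quantize the result down to Q2_K. Paths and flags depend on your llama.cpp version.
python convert.py /path/to/Mistral-7B-Instruct-v0.2 \
    --awq-path /path/to/mistral-7b-instruct-v0.2-awq-scales.pt \
    --outtype f16 --outfile Mistral-7B-Instruct-v0.2-f16.gguf

./quantize Mistral-7B-Instruct-v0.2-f16.gguf Mistral-7B-Instruct-v0.2-Q2_K.gguf Q2_K
```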
How to use this model with llama.cpp
./main -m Mistral-7B-Instruct-v0.2-Q2_K.gguf -n 128 --prompt "Once upon a time"
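Since the original model is the Instruct variant, prompts generally work better when wrapped in Mistral's `[INST] ... [/INST]` instruction template. A small example (the filename matches the one above; adjust it to whichever quantized file you downloaded):

```sh
# Instruction-style prompt for the Instruct variant
./main -m Mistral-7B-Instruct-v0.2-Q2_K.gguf -n 256 \
    -p "[INST] Summarize the plot of Hamlet in two sentences. [/INST]"
```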