llama.cpp
These files are in GGUF format.
The model was converted with llama.cpp in combination with the AWQ (Activation-aware Weight Quantization) method.
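A minimal sketch of that flow, assuming the standard llama.cpp `convert.py` and `quantize` tools (the file names are illustrative, and the AWQ scale step itself is documented in the PR referenced below):

```bash
# Convert the Hugging Face model to an f16 GGUF file
# (AWQ scales are applied during this step; see the PR below for the exact options)
python convert.py path/to/hf-model --outfile ggml-model-f16.gguf

# Quantize the f16 GGUF down to 4-bit (q4_0)
./quantize ggml-model-f16.gguf ggml-model-q4_0-awq.gguf q4_0
```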
To generate text with the quantized model, run the `main` example:

```bash
# Generate up to 128 tokens from the given prompt
./main -m ggml-model-q4_0-awq.gguf -n 128 --prompt "Once upon a time"
```
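As a usage note, the same file can also be served over HTTP with the bundled `server` example (the context size below is an arbitrary choice, not taken from the original card):

```bash
# Start the llama.cpp HTTP server with a 2048-token context window
./server -m ggml-model-q4_0-awq.gguf -c 2048
```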
For details on reproducing the AWQ conversion, please refer to the instructions in the PR.
2-bit