# (q4_2) GGML Open-Assistant SFT-6 LLaMa 30B 4-bit Quantized
4-bit version of Open-Assistant SFT-6 LLaMa 30B for llama.cpp, quantized with the q4_2 format.