IntelligentEstate/FromTheAshes-IQ4_NL-GGUF (Undergoing confirmation)

An importance-matrix (imatrix) quantization of a merge of Cybertron from FBLGIT and a Tsunami model. This model was converted to GGUF format from brgx53/3Blarenegv3-ECE-PRYMMAL-Martial using llama.cpp.
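For reference, an importance-matrix quantization with llama.cpp generally follows a workflow like the sketch below. The directory, file names, and calibration text are placeholders, not necessarily the exact commands used to produce this model:

# convert the source HF checkpoint to a full-precision GGUF
python convert_hf_to_gguf.py ./3Blarenegv3-ECE-PRYMMAL-Martial --outtype f16 --outfile fromtheashes-f16.gguf

# compute the importance matrix over a calibration text file
llama-imatrix -m fromtheashes-f16.gguf -f calibration.txt -o imatrix.dat

# quantize to IQ4_NL, guided by the importance matrix
llama-quantize --imatrix imatrix.dat fromtheashes-f16.gguf FromTheAshes-IQ4_NL.gguf IQ4_NL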

Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux)

brew install llama.cpp

Invoke the llama.cpp server or the CLI.
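For example (the GGUF file name below is a placeholder; substitute the actual file shipped in this repo):

# run a one-off prompt with the CLI
llama-cli --hf-repo IntelligentEstate/FromTheAshes-IQ4_NL-GGUF --hf-file fromtheashes-iq4_nl.gguf -p "The meaning to life and the universe is"

# or start an OpenAI-compatible HTTP server
llama-server --hf-repo IntelligentEstate/FromTheAshes-IQ4_NL-GGUF --hf-file fromtheashes-iq4_nl.gguf -c 2048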

Model details: GGUF, 7.62B params, qwen2 architecture, 4-bit (IQ4_NL) quantization.

