# dickheim/Gemma-2-9b-it-outcomes_orig-Q8_0-GGUF

This LoRA adapter was converted to GGUF format from dickheim/Gemma-2-9b-it-outcomes_orig via ggml.ai's GGUF-my-lora space. Refer to the original adapter repository for more details.
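For reference, a similar conversion can be reproduced locally with llama.cpp's convert_lora_to_gguf.py script. A minimal sketch; the local paths, the base-model directory, and the q8_0 output type are assumptions chosen to match this repository's name, not the exact invocation used by the space:

```bash
# Convert a PEFT LoRA adapter to GGUF using llama.cpp's conversion script.
# Adapter and base-model paths below are placeholders (assumptions).
git clone https://github.com/ggerganov/llama.cpp
pip install -r llama.cpp/requirements.txt

python llama.cpp/convert_lora_to_gguf.py ./Gemma-2-9b-it-outcomes_orig \
  --base ./gemma-2-9b-it \
  --outtype q8_0 \
  --outfile Gemma-2-9b-it-outcomes_orig-q8_0.gguf
```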

## Use with llama.cpp

```bash
# with cli
llama-cli -m base_model.gguf --lora Gemma-2-9b-it-outcomes_orig-q8_0.gguf (...other args)

# with server
llama-server -m base_model.gguf --lora Gemma-2-9b-it-outcomes_orig-q8_0.gguf (...other args)
```
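Once llama-server is running with the adapter loaded, it exposes an OpenAI-compatible HTTP API. A minimal request sketch, assuming the default host and port (127.0.0.1:8080) and a prompt chosen purely for illustration:

```bash
# Send a chat completion request to the running llama-server instance.
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "Summarize the expected outcomes in one sentence."}
    ],
    "temperature": 0.7
  }'
```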

To learn more about using LoRA adapters with the llama.cpp server, refer to the llama.cpp server documentation.
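The adapter's influence can also be reduced rather than applied at full strength. A sketch assuming llama.cpp's --lora-scaled flag and, for the runtime case, the server's /lora-adapters endpoint (availability of that endpoint depends on the llama.cpp build and is an assumption here):

```bash
# Load the adapter at half strength instead of the default scale of 1.0;
# --lora-scaled takes the adapter path followed by the scale factor.
llama-server -m base_model.gguf --lora-scaled Gemma-2-9b-it-outcomes_orig-q8_0.gguf 0.5

# If the build exposes the /lora-adapters endpoint, the scale can be
# adjusted at runtime without restarting the server (assumption).
curl http://127.0.0.1:8080/lora-adapters \
  -H "Content-Type: application/json" \
  -d '[{"id": 0, "scale": 0.25}]'
```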
