stablelm-zephyr-3B-localmentor-GGUF

Model creator: remyxai
Original model: stablelm-zephyr-3B_localmentor
GGUF quantization: llama.cpp commit fadde6713506d9e6c124f5680ab8c7abebe31837

Description

A fine-tune of stablelm-zephyr-3B using low-rank adapters (LoRA) on 25K conversational turns about tech and startups, drawn from over 800 podcast episodes.

Prompt Template

As specified in tokenizer_config.json, the prompt template follows the Zephyr format:

<|system|>
{system_prompt}</s>
<|user|>
{prompt}</s>
<|assistant|>
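The template above can be filled in programmatically. Below is a minimal sketch of a helper that builds a single-turn Zephyr-style prompt string; the function name is hypothetical and not part of the model release.

```python
# Hypothetical helper (not part of the model release): formats a single-turn
# prompt matching the Zephyr template shown above.
def build_zephyr_prompt(system_prompt: str, prompt: str) -> str:
    """Return a Zephyr-formatted prompt string for completion-style inference."""
    return (
        f"<|system|>\n{system_prompt}</s>\n"
        f"<|user|>\n{prompt}</s>\n"
        f"<|assistant|>\n"
    )

text = build_zephyr_prompt(
    "You are a helpful startup mentor.",
    "How should I validate my product idea?",
)
print(text)
```

The generated string is passed to the model as-is; generation then continues after the final `<|assistant|>` tag.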
Model Details

Model size: 2.8B params
Architecture: stablelm
Format: GGUF
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 16-bit
Inference Providers

This model is not currently available through any of the supported third-party inference providers, and the HF Inference API does not support llama.cpp models with the text-generation pipeline type.

Model Tree

mgonzs13/stablelm-zephyr-3B-localmentor-GGUF is quantized from remyxai/stablelm-zephyr-3B_localmentor.