Essay-generating branch of MHENN. A 4-bit quantized model (Q4_K_M) is available in the file `mhennlitQ4_K_M.gguf`.

Fine-tuned for 650 steps on an NVIDIA V100 in a Google Colab instance, using the netcat420/quiklit dataset.

Base model: https://huggingface.co/mistralai/Mistral-7B-v0.1