Essay-generating branch of MHENN. A 4-bit quantized model is available as the file "mhennlitQ4_K_M.gguf".
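
The quantized GGUF file can be run locally with a GGUF-compatible runtime. The sketch below uses llama-cpp-python; the local file path, context size, prompt, and sampling settings are assumptions for illustration, not values stated on this card.

```python
# Minimal sketch: running the 4-bit GGUF locally with llama-cpp-python.
# The model_path, n_ctx, prompt, and sampling settings are assumptions.
from llama_cpp import Llama

llm = Llama(
    model_path="mhennlitQ4_K_M.gguf",  # path to the downloaded GGUF file
    n_ctx=4096,                        # assumed context window
)

output = llm(
    "Write a short essay on the history of the printing press.",
    max_tokens=512,
    temperature=0.7,
)
print(output["choices"][0]["text"])
```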

Fine-tuned for 650 steps on an NVIDIA V100 in a Google Colab instance, using the netcat420/quiklit dataset.

Base model: https://huggingface.co/mistralai/Mistral-7B-v0.1
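
The full-precision weights can also be loaded directly from the Hub. A minimal sketch with transformers follows; the repo id netcat420/MHENNlit and the F32 dtype come from this card, while the prompt and generation settings are assumptions.

```python
# Minimal sketch: loading the full-precision fine-tune with transformers.
# Prompt and generation settings are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "netcat420/MHENNlit"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float32)

inputs = tokenizer("Write a short essay about autumn.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```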

Model size: 7.24B params (Safetensors) · Tensor type: F32
