Uploaded model

Using real-world user data from a previous farmer-assistant chatbot service and additional curated datasets (prioritizing sustainable, regenerative, organic farming practices), Gemma 2B and Mistral 7B LLMs were iteratively fine-tuned and tested against each other as well as against basic benchmarks, with the Gemma 2B fine-tune emerging victorious. LoRA adapters were saved for each model.

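Because this repo contains only the LoRA adapters, they must be attached to a Mistral 7B base model at load time. The snippet below is a minimal sketch of one way to do that with Hugging Face transformers and PEFT; the base-model ID and the example prompt are illustrative assumptions, not part of this card.

```python
# Sketch: load the LoRA adapters from this repo onto an assumed Mistral 7B base.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-v0.1"  # assumption: substitute the actual base checkpoint
adapter_id = "Solshine/LORA-Adapters-Mistral7B-NaturalFarmerV3"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    device_map="auto",  # requires `accelerate`
)

# Attach the LoRA adapter weights to the base model.
model = PeftModel.from_pretrained(base_model, adapter_id)

prompt = "What cover crops improve soil nitrogen without synthetic inputs?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

If a standalone checkpoint is preferred, the adapters can alternatively be folded into the base weights with PEFT's `merge_and_unload()`.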
V3 scored better in agriculture-focused preliminary testing than V1 or V2 of the Mistral series of fine-tunes on the selected dataset.

This Mistral model was trained with Unsloth and Hugging Face's TRL library.

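For reference, the snippet below sketches the common Unsloth + TRL supervised fine-tuning recipe that this description implies. The base checkpoint, dataset file, LoRA settings, and hyperparameters are placeholder assumptions rather than the author's actual configuration, and exact argument names can vary across Unsloth/TRL versions.

```python
# Hedged sketch of an Unsloth + TRL LoRA fine-tune; not the actual training script.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

max_seq_length = 2048

# Load a 4-bit Mistral 7B base and wrap it with LoRA adapters.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/mistral-7b-bnb-4bit",  # assumed base checkpoint
    max_seq_length=max_seq_length,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Placeholder dataset: a JSONL file with a "text" column of farming Q&A examples.
dataset = load_dataset("json", data_files="natural_farming_sft.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=max_seq_length,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        learning_rate=2e-4,
        num_train_epochs=1,
        output_dir="outputs",
    ),
)
trainer.train()

# Save only the LoRA adapter weights, as described above.
model.save_pretrained("LORA-Adapters-Mistral7B-NaturalFarmerV3")
```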
