Llama-3-8B-sft-lora

This model is a fine-tuned version of meta-llama/Meta-Llama-3-8B-Instruct, trained with torchtune's default dataset. It achieves the following results on the evaluation set:

  • Loss: 0.8701

Model repository: buvnswrn/Meta-Llama3-8B-Instruct-FineTuned (a LoRA adapter for meta-llama/Meta-Llama-3-8B-Instruct).