Cebu-llama-v0.1

  • Experimental Cebuano fine-tune: outputs are not guaranteed to be safe or accurate (not for production use)!
  • Purpose: observing the effects of fine-tuning on a dataset of roughly 10k lines of Cebuano (roughly formatted, "raw" chat)
  • Trained on Llama 2 7B Chat for 1 epoch
  • May still generate English, Tagalog, Taglish, or gibberish.
  • QLoRAs available in HF and GGML formats
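A minimal usage sketch for the HF-format release. Assumptions not confirmed by the card: that the adapter is loadable as a standard `transformers` checkpoint under the repo id `922-Narra/llama-2-7b-chat-cebuano-v0.1`, and that the standard Llama 2 chat prompt template applies (plausible since the base model is Llama 2 7B Chat):

```python
MODEL_ID = "922-Narra/llama-2-7b-chat-cebuano-v0.1"  # assumed repo id

def build_prompt(user_msg: str,
                 system_msg: str = "You are a helpful assistant.") -> str:
    """Format a single-turn prompt in the standard Llama 2 chat style."""
    return f"[INST] <<SYS>>\n{system_msg}\n<</SYS>>\n\n{user_msg} [/INST]"

def generate(user_msg: str, max_new_tokens: int = 128) -> str:
    """Load the model and generate a reply.

    Requires the `transformers` library plus the model weights
    (downloaded from the Hub or cached locally); imported lazily so
    prompt building above works without transformers installed.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
    inputs = tokenizer(build_prompt(user_msg), return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```

Given the experimental nature of the fine-tune, expect responses to mix Cebuano with English, Tagalog, or Taglish as noted above.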
Safetensors
Model size: 6.74B params
Tensor types: F32 · FP16
