datasets: code.evol.instruct.wiz.oss_python.json

```
==((====))==  Unsloth - 2x faster free finetuning | Num GPUs = 1
   \\   /|    Num examples = 937 | Num Epochs = 2
O^O/ \_/ \    Batch size per device = 2 | Gradient Accumulation steps = 256
\        /    Total batch size = 512 | Total steps = 2
 "-____-"     Number of trainable parameters = 201,850,880
[2/2 22:36, Epoch 1/2]
```

| Step | Training Loss |
|-----:|--------------:|
| 1    | 0.707400      |
| 2    | 0.717800      |
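The card does not include the training script itself. Below is a minimal sketch of an Unsloth + TRL setup that reproduces the logged hyperparameters (per-device batch size 2, gradient accumulation 256, 2 epochs). The LoRA settings, sequence length, and dataset text field are assumptions, and the `SFTTrainer` keyword arguments shown match older TRL releases (newer ones move them into `SFTConfig`).

```python
# Hypothetical reconstruction of the run: only the hyperparameters that
# appear in the log above are taken from the card; everything else is assumed.
from unsloth import FastLanguageModel
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/tinyllama-chat-bnb-4bit",  # base model named in the card
    max_seq_length=2048,   # assumption; not stated in the card
    load_in_4bit=True,     # the base checkpoint is a bnb 4-bit quant
)

# Attach LoRA adapters. Rank and target modules are assumptions; the card
# only reports 201,850,880 trainable parameters.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
)

# The dataset file is named in the card header; its schema is assumed to
# expose a single "text" field.
dataset = load_dataset(
    "json",
    data_files="code.evol.instruct.wiz.oss_python.json",
    split="train",
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,    # from the log
        gradient_accumulation_steps=256,  # from the log
        num_train_epochs=2,               # from the log
        fp16=True,                        # final weights are published in FP16
        logging_steps=1,
        output_dir="outputs",
    ),
)
trainer.train()
```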

Uploaded model

  • Developed by: Ramikan-BR
  • License: apache-2.0
  • Finetuned from model: unsloth/tinyllama-chat-bnb-4bit

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.

Safetensors checkpoint

  • Model size: 1.1B params
  • Tensor type: FP16
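Since the checkpoint is published as FP16 safetensors, it loads directly with transformers. Below is a minimal inference sketch; the prompt, chat-template usage, and generation settings are illustrative assumptions, not from the card.

```python
# Minimal inference sketch (assumed usage; the card shows no example code).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "Ramikan-BR/tinyllama-coder-py-v11"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.float16,  # matches the published FP16 tensor type
    device_map="auto",
)

# The base model is a chat checkpoint, so we assume its chat template applies.
messages = [{"role": "user",
             "content": "Write a Python function that reverses a string."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```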
