---
base_model: meta-llama/Llama-3.1-8B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
datasets:
- kobprof/skolegpt-instruct
- Mabeck/Danish-SlimOrca
---

# Uploaded model

- **Compute sponsored by:** Nvidia and Arrow ECS Denmark through Danish Data Science Community
- **Developed by:** ThatsGroes
- **License:** apache-2.0
- **Finetuned from model:** meta-llama/Llama-3.1-8B-Instruct

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

The LoRA adapter was fine-tuned in fp16 for 1 epoch on kobprof/skolegpt-instruct and Mabeck/Danish-SlimOrca with rank = alpha = 64.

Energy consumption logged by codecarbon during training (the total is the sum of the RAM, GPU, and CPU figures):

```
[codecarbon INFO @ 10:41:13] Energy consumed for RAM : 2.822621 kWh. RAM Power : 188.78840446472168 W
[codecarbon INFO @ 10:41:13] Energy consumed for all GPUs : 4.379013 kWh. Total GPU Power : 260.7733742516678 W
[codecarbon INFO @ 10:41:13] Energy consumed for all CPUs : 0.635721 kWh. Total CPU Power : 42.5 W
[codecarbon INFO @ 10:41:13] 7.837356 kWh of electricity used since the beginning.
```
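For reference, below is a minimal sketch of what the described setup could look like with Unsloth's `FastLanguageModel` and TRL's `SFTTrainer`. Only the base model, the two datasets, rank = alpha = 64, fp16, and the single epoch come from this card; everything else (sequence length, batch size, target modules, dataset preprocessing) is an assumption, and exact argument names vary across TRL versions.

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset, concatenate_datasets

# Load the base model; max_seq_length is an assumed value, not stated on this card.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="meta-llama/Llama-3.1-8B-Instruct",
    max_seq_length=2048,
    dtype=None,  # auto-detect; the card states the adapter was trained in fp16
)

# Attach a LoRA adapter with rank = alpha = 64, as stated on the card.
# The target_modules list is a typical choice for Llama models, assumed here.
model = FastLanguageModel.get_peft_model(
    model,
    r=64,
    lora_alpha=64,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Both datasets are assumed to have been mapped to a common "text" column
# beforehand; the actual preprocessing used for this adapter is not described.
dataset = concatenate_datasets([
    load_dataset("kobprof/skolegpt-instruct", split="train"),
    load_dataset("Mabeck/Danish-SlimOrca", split="train"),
])

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # assumed column name
    args=TrainingArguments(
        num_train_epochs=1,  # one epoch, as stated on the card
        fp16=True,
        per_device_train_batch_size=2,  # assumed
        gradient_accumulation_steps=4,  # assumed
        output_dir="outputs",
    ),
)
trainer.train()
```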
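Since the upload is a LoRA adapter rather than merged weights, it can be loaded on top of the base model with PEFT. A sketch follows; the adapter repository id is a placeholder, as the card text does not state this repo's id.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model in fp16, matching the precision the adapter was trained in.
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B-Instruct",
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")

# "ThatsGroes/<this-adapter-repo>" is a placeholder; substitute this repository's id.
model = PeftModel.from_pretrained(base, "ThatsGroes/<this-adapter-repo>")

# Generate with the Llama 3.1 chat template; a Danish prompt, matching the
# Danish instruction data the adapter was tuned on.
messages = [{"role": "user", "content": "Hvad er Danmarks hovedstad?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```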