---
base_model: unsloth/meta-llama-3.1-8b-bnb-4bit
language:
- en
license: apache-2.0
tags:
- unsloth
- trl
- sft
- theprint
- ReWiz
datasets:
- KingNish/reasoning-base-20k
- arcee-ai/EvolKit-20k
- cognitivecomputations/WizardLM_alpaca_evol_instruct_70k_unfiltered
---
Half of the training data was geared toward better reasoning (EvolKit-20k and reasoning-base-20k); the other half, the WizardLM dataset, was used to de-censor the model.
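
As a rough sketch (the exact prompt templates, preprocessing, and mixing ratios used for this run are not published in this card), the three source datasets can be pulled from the Hugging Face Hub with the `datasets` library:

```python
# Minimal sketch: load the three source datasets listed in the metadata above.
# How they were formatted and mixed for the actual ReWiz run is not documented
# here; this only inspects the raw sources.
from datasets import load_dataset

reasoning = load_dataset("KingNish/reasoning-base-20k", split="train")
evolkit = load_dataset("arcee-ai/EvolKit-20k", split="train")
wizardlm = load_dataset(
    "cognitivecomputations/WizardLM_alpaca_evol_instruct_70k_unfiltered",
    split="train",
)

for name, ds in [("reasoning", reasoning), ("evolkit", evolkit), ("wizardlm", wizardlm)]:
    print(f"{name}: {len(ds)} rows, columns: {ds.column_names}")
```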
# Uploaded model
- **Developed by:** theprint
- **License:** apache-2.0
- **Finetuned from model:** unsloth/meta-llama-3.1-8b-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
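
For reference, a fine-tune like this typically follows Unsloth's standard QLoRA recipe with TRL's `SFTTrainer`. The sketch below is an assumed reconstruction of that setup, not the actual training script: the LoRA rank, hyperparameters, `"text"` field, and the toy stand-in dataset are all illustrative, and newer TRL releases may expect these arguments via `SFTConfig` instead.

```python
# Hedged sketch of an Unsloth + TRL SFT run on the 4-bit Llama 3.1 8B base.
# All hyperparameters and the toy dataset below are assumptions for illustration.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import Dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/meta-llama-3.1-8b-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small set of weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Stand-in toy dataset; in the real run this would be the formatted mix of the
# three datasets listed above, rendered into a single "text" field.
train_ds = Dataset.from_list([
    {"text": "### Instruction:\nExplain why the sky is blue.\n"
             "### Response:\nBecause of Rayleigh scattering of sunlight."},
])

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=train_ds,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()
```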