---
license: llama3
---

Based on Meta-Llama-3-8B-Instruct and governed by the Meta Llama 3 License agreement: https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct

Base model: https://huggingface.co/failspy/Meta-Llama-3-8B-Instruct-abliterated-v3

SFT fine-tune of Meta Llama 3 8B Instruct Abliterated v3 by Failspy, using improved Dolphin and WizardLM datasets. It is intended to remove GPT-isms and to make the model follow instructions more precisely while paying closer attention to details. Since it is based on the abliterated version of Llama 3 8B Instruct, it should not refuse to answer in the first place, and this fine-tuning should make it comply even better.

We also have it up on our site https://awanllm.com for anyone to try!

Training:
- Full 8192 sequence length.
- Training duration was around 2.5 days on an RTX 4090.
- 1 epoch of training on a large dataset to minimize repetition sickness.
- 4-bit loading with QLoRA (rank 64, alpha 64), resulting in ~2% trainable weights.

Llama 3 Instruct format:
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{{ system_prompt }}<|eot_id|><|start_header_id|>user<|end_header_id|>

{{ user_message_1 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{{ model_answer_1 }}<|eot_id|><|start_header_id|>user<|end_header_id|>

{{ user_message_2 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```

Quants:

FP16: https://huggingface.co/AwanLLM/Awanllm-Llama-3-8B-Dolfin-v1.0

GGUF: https://huggingface.co/AwanLLM/Awanllm-Llama-3-8B-Dolfin-v1.0-GGUF
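
Example usage (a minimal sketch with the `transformers` library, loading the FP16 repo above; the system prompt, sampling settings, and `<|eot_id|>` stop-token handling are illustrative, not a prescribed configuration):

```python
# Minimal inference sketch: load the FP16 weights and build the Llama 3
# Instruct prompt via the tokenizer's chat template, which produces the
# <|start_header_id|>/<|eot_id|> format shown above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AwanLLM/Awanllm-Llama-3-8B-Dolfin-v1.0"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},  # example prompt
    {"role": "user", "content": "Explain QLoRA in two sentences."},
]

input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    input_ids,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,  # sampling settings are illustrative
    eos_token_id=tokenizer.convert_tokens_to_ids("<|eot_id|>"),
)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```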
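
For reference, here is a hedged sketch of what the 4-bit QLoRA setup described under Training could look like with `transformers` + `peft`. The quantization type, dropout, and target modules are assumptions, not the exact recipe used for this fine-tune:

```python
# Sketch of a QLoRA configuration matching the card's stated settings:
# 4-bit loading, rank 64 / alpha 64 adapters (full 8192 sequence length
# would be set in the SFT trainer). Details marked "assumed" are not from
# the card.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",            # assumed
    bnb_4bit_compute_dtype=torch.bfloat16,  # assumed
)

model = AutoModelForCausalLM.from_pretrained(
    "failspy/Meta-Llama-3-8B-Instruct-abliterated-v3",
    quantization_config=bnb_config,
    device_map="auto",
)

lora_config = LoraConfig(
    r=64,
    lora_alpha=64,
    lora_dropout=0.05,                    # assumed
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # the card reports roughly ~2% trainable weights
```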