# Model Card for fsaudm/Reflection-Llama-3.1-70B-Instruct-NF4
This is a quantized version of Reflection Llama 3.1 70B Instruct, quantized to 4-bit (NF4) using bitsandbytes and accelerate.
- Developed by: Farid Saud @ DSRS
- Base Model: meta-llama/Meta-Llama-3.1-70B-Instruct
There is (currently) considerable controversy around this model's legitimacy; use it with caution.
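For reference, a 4-bit NF4 load with bitsandbytes typically looks like the sketch below. The exact settings and the source checkpoint (taken from the model tree at the bottom of this card) are assumptions, not details confirmed by this card.

```python
# Minimal sketch of an NF4 (4-bit) quantized load with bitsandbytes.
# Source checkpoint and compute dtype are assumptions.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # NF4 quantization
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute dtype (assumption)
)

model = AutoModelForCausalLM.from_pretrained(
    "mattshumer/ref_70_e3",         # source checkpoint per the model tree
    quantization_config=bnb_config,
    device_map="auto",              # requires accelerate
)
```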
## Use this model

Use a pipeline as a high-level helper:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe = pipeline("text-generation", model="fsaudm/Reflection-Llama-3.1-70B-Instruct-NF4")
pipe(messages)
```
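Even at 4-bit, a 70B model usually needs multi-GPU or offloaded placement. A hedged variant using `device_map="auto"` (requires accelerate; `max_new_tokens` is an assumption):

```python
# Sketch: spread the weights across available devices and cap generation.
pipe = pipeline(
    "text-generation",
    model="fsaudm/Reflection-Llama-3.1-70B-Instruct-NF4",
    device_map="auto",
)
out = pipe(messages, max_new_tokens=256)
print(out[0]["generated_text"])
```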
Or load the model directly:

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("fsaudm/Reflection-Llama-3.1-70B-Instruct-NF4")
model = AutoModelForCausalLM.from_pretrained("fsaudm/Reflection-Llama-3.1-70B-Instruct-NF4")
```
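Continuing from the load above, a minimal generation sketch; the sampling values follow the tips at the end of this card, while `max_new_tokens` and the device placement are assumptions:

```python
# Build the prompt with the Llama 3.1 chat template, then sample.
inputs = tokenizer.apply_chat_template(
    [{"role": "user", "content": "what is 2+2?"}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(
    inputs, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95
)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```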
## System Prompt

The system prompt used for training this model is:

```
You are a world-class AI system, capable of complex reasoning and reflection. Reason through the query inside <thinking> tags, and then provide your final response inside <output> tags. If you detect that you made a mistake in your reasoning at any point, correct yourself inside <reflection> tags.
```
We recommend using this exact system prompt to get the best results from Reflection 70B. You may also want to experiment with combining this system prompt with your own custom instructions to customize the behavior of the model.
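Because the model wraps its answer in `<output>` tags, you may want to strip the reasoning before displaying the response. A minimal sketch (the helper name is hypothetical):

```python
# Sketch: extract the final answer from the <output> tags requested by the
# system prompt; fall back to the raw text if the tags are missing.
import re

def extract_output(text: str) -> str:
    match = re.search(r"<output>(.*?)</output>", text, flags=re.DOTALL)
    return match.group(1).strip() if match else text
```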
## Chat Format

As mentioned above, the model uses the standard Llama 3.1 chat format. Here's an example:

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

You are a world-class AI system, capable of complex reasoning and reflection. Reason through the query inside <thinking> tags, and then provide your final response inside <output> tags. If you detect that you made a mistake in your reasoning at any point, correct yourself inside <reflection> tags.<|eot_id|><|start_header_id|>user<|end_header_id|>

what is 2+2?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
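You normally don't need to build this string by hand: the tokenizer's chat template should render it for you. A sketch reproducing the example above, assuming the repo ships the standard Llama 3.1 template:

```python
# Sketch: render the chat format via the tokenizer instead of by hand.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("fsaudm/Reflection-Llama-3.1-70B-Instruct-NF4")
system_prompt = (
    "You are a world-class AI system, capable of complex reasoning and "
    "reflection. Reason through the query inside <thinking> tags, and then "
    "provide your final response inside <output> tags. If you detect that you "
    "made a mistake in your reasoning at any point, correct yourself inside "
    "<reflection> tags."
)
formatted = tokenizer.apply_chat_template(
    [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "what is 2+2?"},
    ],
    tokenize=False,
    add_generation_prompt=True,
)
print(formatted)
```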
Tips for Performance
- We are initially recommending a
temperature
of.7
and atop_p
of.95
. - For increased accuracy, append
Think carefully.
at the end of your messages.
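Putting the tips together with the pipeline from the first example (sketch; `max_new_tokens` is an assumption):

```python
# Sketch: recommended sampling settings plus the "Think carefully." suffix.
out = pipe(
    [{"role": "user", "content": "what is 2+2? Think carefully."}],
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    max_new_tokens=512,
)
```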
## Model tree for fsaudm/Reflection-Llama-3.1-70B-Instruct-NF4

- Base model: meta-llama/Llama-3.1-70B
- Finetuned: meta-llama/Llama-3.1-70B-Instruct
- Finetuned: mattshumer/ref_70_e3