---
license: other
tags:
  - math
  - alpaca
  - synthetic data
  - instruct
  - axolotl
  - finetune
  - gpt4
datasets:
  - TIGER-Lab/MathInstruct
  - microsoft/orca-math-word-problems-200k
language:
  - en
base_model: meta-math/MetaMath-Mistral-7B
---


# 🔢 EulerMath-Mistral-7B

This model is a full fine-tuned version of [meta-math/MetaMath-Mistral-7B](https://huggingface.co/meta-math/MetaMath-Mistral-7B) on the following datasets:

- [TIGER-Lab/MathInstruct](https://huggingface.co/datasets/TIGER-Lab/MathInstruct)
- [microsoft/orca-math-word-problems-200k](https://huggingface.co/datasets/microsoft/orca-math-word-problems-200k)

This model was fine-tuned on 8x RTX 3090 + 1x RTX A6000 using axolotl.

This model's training was sponsored by sablo.ai.

<details><summary>See axolotl config</summary>

axolotl version: `0.4.0`

```yaml
base_model: meta-math/MetaMath-Mistral-7B
model_type: MistralForCausalLM
tokenizer_type: LlamaTokenizer
is_mistral_derived_model: true

load_in_8bit: false
load_in_4bit: false
strict: false

chat_template: alpaca
datasets:
  - path: microsoft/orca-math-word-problems-200k
    type: alpaca_chat.load_qa
    conversation: alpaca

  - path: TIGER-Lab/MathInstruct
    type: alpaca
    conversation: alpaca

dataset_prepared_path: last_run_prepared
val_set_size: 0.005
#val_set_size: 0.0

output_dir: ./EulerMath-Mistral-7B-model

sequence_len: 8192
sample_packing: true
pad_to_sequence_len: true
eval_sample_packing: false

wandb_project: Euler
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
hub_model_id: Weyaxi/EulerMath-Mistral-7B

save_safetensors: true

gradient_accumulation_steps: 4
micro_batch_size: 2 # changed
num_epochs: 2
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.000005

train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_steps: 10
evals_per_epoch: 4 # changed
eval_table_size:
eval_table_max_new_tokens: 128
saves_per_epoch: 1 # changed
debug:

deepspeed: zero3_bf16.json
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
  bos_token: "<s>"
  eos_token: "</s>"
  unk_token: "<unk>"
```

</details>
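Training with this config can be reproduced through axolotl's CLI, e.g. `accelerate launch -m axolotl.cli.train eulermath.yml` (the config filename here is hypothetical). The `zero3_bf16.json` referenced above is a DeepSpeed ZeRO-3 config shipped with axolotl.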

## 💬 Prompt Template

You can use the following prompt template with the model:

**Alpaca**

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:
```
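If you build the prompt by hand rather than through the tokenizer, here is a minimal sketch (the `ALPACA_TEMPLATE` name and the example question are illustrative, not part of the model's API):

```python
# Alpaca template as shown above, filled with an arbitrary example question
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

prompt = ALPACA_TEMPLATE.format(instruction="What is 17 * 24?")
```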

This prompt template is also available as a chat template, which means you can format messages using the `tokenizer.apply_chat_template()` method:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Weyaxi/EulerMath-Mistral-7B")
model = AutoModelForCausalLM.from_pretrained("Weyaxi/EulerMath-Mistral-7B")

messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "Hello!"}
]
# apply_chat_template returns a tensor of input ids here, not a dict
gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt")
model.generate(gen_input, max_new_tokens=128)
```
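Continuing from the snippet above, a hedged end-to-end sketch (the word problem is an arbitrary example, not taken from the training data) that decodes only the newly generated tokens:

```python
question = [{"role": "user", "content": "A train travels 60 km in 45 minutes. What is its average speed in km/h?"}]
input_ids = tokenizer.apply_chat_template(question, return_tensors="pt")
output = model.generate(input_ids, max_new_tokens=256)
# Slice off the prompt so only the model's answer is printed
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```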

## 🔄 Quantized versions

Quantized versions of this model are not yet available; they will be published soon :)
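Until official quants are published, one common stopgap is loading the full-precision weights in 4-bit with bitsandbytes; a minimal sketch, assuming `bitsandbytes` is installed (this is not an official quantization of the model):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Quantize the checkpoint to 4-bit (NF4) on the fly at load time
quant_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained(
    "Weyaxi/EulerMath-Mistral-7B",
    quantization_config=quant_config,
    device_map="auto",
)
```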

## 🎯 Open LLM Leaderboard Evaluation Results

## 🤖 Additional information about training

This model was full fine-tuned for 2 epochs.

Total number of steps was 544.
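For context, with `micro_batch_size: 2`, `gradient_accumulation_steps: 4`, and data parallelism across all nine GPUs (an assumption based on the hardware listed above), the effective batch size works out to 2 × 4 × 9 = 72 packed sequences per optimizer step.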

[Loss graph]

## 🤝 Acknowledgments

Thanks to sablo.ai for sponsoring this model.

Thanks to all the dataset authors mentioned in the datasets section.

Thanks to the axolotl team for the repository I used to train this model.

Thanks to the entire open-source AI community.

Built with Axolotl

If you would like to support me:

☕ Buy Me a Coffee