Model description

This model is a fine-tuned version of Qwen/Qwen2.5-7B-Instruct on the Bespoke-Stratos-17k dataset. The dataset was created by distilling DeepSeek-R1 using the data pipeline of Berkeley NovaSky's Sky-T1, with some modifications; more details are available in the Bespoke-Stratos-17k dataset card. The model outperforms Qwen2.5-7B-Instruct on math reasoning benchmarks:

| Benchmark | Bespoke-Stratos-7B | Qwen2.5-7B-Instruct | DeepSeek-R1-Distill-Qwen-7B (Ours) | DeepSeek-R1-Distill-Qwen-7B (Reported) |
|---|---|---|---|---|
| AIME2024 | 20.0 | 10.0 | 43.3 | 55.5 |
| MATH500 | 82.0 | 74.2 | 89.4 | 92.8 |
| GPQA-Diamond | 37.8 | 33.3 | 44.9 | 49.1 |
| LiveCodeBench v2 Easy | 71.4 | 65.9 | 81.3 | - |
| LiveCodeBench v2 Medium | 25.5 | 18.9 | 42.2 | - |
| LiveCodeBench v2 Hard | 1.6 | 3.3 | 2.4 | - |
| LiveCodeBench v2 All | 36.1 | 31.9 | 46.6 | - |

Note that the authors of Sky-T1 reported little or no improvement when training 7B or 14B models with their data. We do see an improvement, though not at the scale of DeepSeek's distilled model. A likely reason is that we used 17k examples, while DeepSeek appears to have used about 800k.

Intended uses & limitations

This model is released under the Apache 2.0 License.
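
As a minimal usage sketch, the model can be loaded with the standard transformers chat-template API (the prompt and generation settings below are illustrative assumptions, not the exact settings used for the evaluations above):

```python
# Minimal inference sketch; assumes the Transformers/PyTorch versions listed
# under "Framework versions". Generation settings are illustrative, not tuned.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bespokelabs/Bespoke-Stratos-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # weights are stored in BF16
    device_map="auto",
)

messages = [
    {"role": "user", "content": "What is the sum of the first 100 positive integers?"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=2048)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```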

Training procedure

We trained the model for 7 hours on 8xH100 GPUs.

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 1
  • eval_batch_size: 8
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 8
  • gradient_accumulation_steps: 12
  • total_train_batch_size: 96
  • total_eval_batch_size: 64
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 3.0
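
As a rough illustration, these hyperparameters map onto a Hugging Face TrainingArguments configuration along the following lines. This is a sketch only: the actual training script, dataset preprocessing, and multi-GPU launch setup are not shown, the output path is hypothetical, and the bf16 flag is an assumption based on the model's BF16 weights.

```python
# Sketch of a TrainingArguments setup matching the listed hyperparameters
# (Transformers 4.46.x API). Not the actual Bespoke-Stratos-7B training script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bespoke-stratos-7b",   # hypothetical output path
    learning_rate=1e-5,
    per_device_train_batch_size=1,     # 1 per device x 8 GPUs x 12 accumulation = 96
    per_device_eval_batch_size=8,      # 8 per device x 8 GPUs = 64
    gradient_accumulation_steps=12,
    num_train_epochs=3.0,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    seed=42,
    bf16=True,                         # assumption: mixed-precision bfloat16 training
)
```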

Training results

Framework versions

  • Transformers 4.46.1
  • Pytorch 2.5.1+cu124
  • Datasets 3.1.0
  • Tokenizers 0.20.3
