
Quantization made by Richard Erkhov.

Github | Discord | Request more models

Qwen2-7B-SFT-Step-DPO - GGUF

Original model description:

License: apache-2.0

Step-DPO: Step-wise Preference Optimization for Long-chain Reasoning of LLMs

๐Ÿ–ฅ๏ธCode | ๐Ÿค—Data | ๐Ÿ“„Paper

This repo contains the Qwen2-7B-SFT-Step-DPO model. It is obtained by performing Step-DPO on Qwen2-7B-SFT.

Step-DPO is a simple, effective, and data-efficient method for boosting the mathematical reasoning ability of LLMs. Notably, when applied to Qwen2-72B-Instruct, Step-DPO achieves scores of 70.8% and 94.0% on the test sets of MATH and GSM8K, respectively, without bells and whistles, surpassing a series of closed-source models, including GPT-4-1106, Claude-3-Opus, and Gemini-1.5-Pro.
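Because this repository ships GGUF files, a common way to run the model locally is llama.cpp or its Python bindings. The snippet below is a minimal sketch, assuming llama-cpp-python is installed and that a quantized file named `Qwen2-7B-SFT-Step-DPO.Q4_K_M.gguf` (the exact file name is an assumption and depends on the quantization you pick) has already been downloaded.

```python
# Minimal sketch: run the GGUF model with llama-cpp-python.
# Assumptions: llama-cpp-python is installed (pip install llama-cpp-python) and a
# quantized file such as "Qwen2-7B-SFT-Step-DPO.Q4_K_M.gguf" (name assumed) is local.
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen2-7B-SFT-Step-DPO.Q4_K_M.gguf",  # assumed file name
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if available; set 0 for CPU only
)

# Step-DPO targets math reasoning, so a GSM8K-style prompt is a natural smoke test.
response = llm.create_chat_completion(
    messages=[
        {
            "role": "user",
            "content": "Natalia sold clips to 48 of her friends in April, and then "
                       "she sold half as many clips in May. How many clips did "
                       "Natalia sell altogether in April and May?",
        }
    ],
    max_tokens=512,
    temperature=0.0,  # greedy decoding for a reproducible answer
)
print(response["choices"][0]["message"]["content"])
```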

Contact

Please submit an issue in the GitHub repository or contact the author by email.

Format: GGUF
Model size: 7.62B params
Architecture: qwen2

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit. A download sketch follows below.
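To use one of the quantization levels above, download the matching .gguf file from this repository. The sketch below uses huggingface_hub; the repo id and file name are assumptions and should be checked against the repository's actual file listing.

```python
# Minimal sketch: fetch a specific quantization with huggingface_hub.
# The repo_id and filename below are assumptions; check the repository's
# "Files and versions" tab for the exact names.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="RichardErkhov/Qwen2-7B-SFT-Step-DPO-gguf",  # assumed repo id
    filename="Qwen2-7B-SFT-Step-DPO.Q4_K_M.gguf",        # assumed 4-bit file name
)
print("Downloaded to:", local_path)
```

Lower-bit files (2-bit, 3-bit) trade answer quality for smaller memory footprints; 4-bit and 5-bit variants are a common middle ground for a 7B-parameter model.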
