---
base_model: Qwen/Qwen2.5-14B-Instruct
tags:
- fluently-lm
- fluently-sets
- demo
- reasoning
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
- sft
license: apache-2.0
language:
- en
datasets:
- fluently-sets/reasoning-1-1k
pipeline_tag: text-generation
---

# Reasoning-1 1K Demo (Finetune of Qwen2.5-14B-IT on Reasoning-1-1k dataset)

***Q4_K_M GGUF quant available [here](https://huggingface.co/fluently-sets/reasoning-1-1k-demo-Q4_K_M-GGUF)***

This is an SFT finetune of Qwen2.5-14B-Instruct on the Reasoning-1-1k dataset. It is far from a perfect model; its main purpose is to demonstrate how the dataset can be used for training.

- **Base model**: [Qwen/Qwen2.5-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-14B-Instruct)
- **Model type**: [Qwen2ForCausalLM](https://huggingface.co/models?other=qwen2)
- **Number of parameters**: 14.8B
- **Precision**: FP16
- **Training method**: SFT
- **Training dataset**: [fluently-sets/reasoning-1-1k](https://huggingface.co/datasets/fluently-sets/reasoning-1-1k)
- **Languages**: English (mostly)

*Trained by Fluently Team ([@ehristoforu](https://huggingface.co/ehristoforu)) with [Unsloth AI](https://github.com/unslothai/unsloth) with love šŸ„°*
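
## Usage

A minimal inference sketch with 🤗 Transformers, loading the model in FP16 as listed above. The repo id `fluently-sets/reasoning-1-1k-demo` is an assumption inferred from the GGUF quant link; adjust it if the actual model id differs.

```python
# A minimal sketch, not an official snippet from the authors.
# Assumption: the model id is "fluently-sets/reasoning-1-1k-demo"
# (inferred from the GGUF repo name above).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "fluently-sets/reasoning-1-1k-demo"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # the card lists FP16 precision
    device_map="auto",
)

# Qwen2.5 models are chat models, so format the prompt with the chat template.
messages = [
    {"role": "user", "content": "A train travels 60 km in 45 minutes. What is its average speed in km/h?"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

For lower-memory local inference, the linked Q4_K_M GGUF quant can be run with llama.cpp-compatible tooling instead.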