Tags: Transformers, GGUF, English, llama, text-generation-inference, unsloth, Inference Endpoints, conversational

Uploaded model

  • Developed by: Quazim0t0
  • Finetuned from model: unsloth/phi-4-unsloth-bnb-4bit
  • GGUF format (see the loading sketch after this list)
  • Trained for 4-5 hours on an A800 with the MagPie-Reasoning-V2-CoT-DeepSeek-R1-Llama-70B and ServiceNow-AI/R1-Distill-SFT datasets.
  • About $5 of training... I'm actually amazed by the results.
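
For a quick local test of the q4_k_m quant, here is a minimal sketch using llama-cpp-python. The GGUF filename glob and the generation parameters are assumptions, so adjust them to match the files actually in the repository.

```python
# Minimal sketch: load the q4_k_m GGUF quant with llama-cpp-python and run one chat turn.
# The filename pattern below is an assumption; check the repo for the real .gguf name.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Quazim0t0/ThinkPhi.Turn1.1-q4_k_m-GGUF",
    filename="*q4_k_m.gguf",   # assumed glob; replace with the actual file name if needed
    n_ctx=4096,                # context window; raise if your RAM allows
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Briefly explain chain-of-thought reasoning."}],
    max_tokens=512,
)
print(response["choices"][0]["message"]["content"])
```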

If you are using this model with Open WebUI, here is a simple function to organize the model's responses: https://openwebui.com/f/quaz93/phithink/
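
If you are not running Open WebUI, a rough stand-alone equivalent is sketched below. It assumes the model emits its reasoning inside <think>...</think> tags (plausible given the R1-style CoT training data, but not confirmed here); the function and tag names are illustrative and are not the linked Open WebUI function.

```python
import re

# Assumption: the model wraps its chain-of-thought in <think>...</think> tags (R1-style).
# Adjust the tag names if the actual output differs.
THINK_RE = re.compile(r"<think>(.*?)</think>", re.DOTALL)

def split_reasoning(text: str) -> tuple[str, str]:
    """Separate the hidden reasoning from the final answer in a model response."""
    reasoning = "\n\n".join(m.strip() for m in THINK_RE.findall(text))
    answer = THINK_RE.sub("", text).strip()
    return reasoning, answer

reasoning, answer = split_reasoning(
    "<think>The user asked for 2+2, which is 4.</think>The answer is 4."
)
print("Reasoning:", reasoning)
print("Answer:", answer)
```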

Format: GGUF
Model size: 14.7B params
Architecture: llama
Quantization: 4-bit (q4_k_m)

Model tree for Quazim0t0/ThinkPhi.Turn1.1-q4_k_m-GGUF

  • Base model: microsoft/phi-4
  • This model is a quantized version of the base model.

Datasets used to train Quazim0t0/ThinkPhi.Turn1.1-q4_k_m-GGUF

  • MagPie-Reasoning-V2-CoT-DeepSeek-R1-Llama-70B
  • ServiceNow-AI/R1-Distill-SFT