huihui-ai/QwQ-32B-Coder-Fusion-8020

Overview

QwQ-32B-Coder-Fusion-8020 is a merged model that combines the strengths of two powerful Qwen-based models: huihui-ai/QwQ-32B-Preview-abliterated and huihui-ai/Qwen2.5-Coder-32B-Instruct-abliterated.
The weights are blended in an 8:2 ratio: 80% from QwQ-32B-Preview-abliterated and 20% from Qwen2.5-Coder-32B-Instruct-abliterated. Although it is a simple linear mix, the model is usable and produces no gibberish. This is an experiment: the 9:1, 8:2, and 7:3 ratios were tested separately to see how much the blend ratio affects the model.
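The 8:2 blend described above can be sketched as a simple linear interpolation of the two models' parameters. This is a minimal, hedged illustration, not the exact merge script used by the author: plain Python lists stand in for the real model tensors, and an actual merge would operate on the two models' state dicts (e.g., loaded with PyTorch or safetensors).

```python
def merge_weights(a, b, alpha=0.8):
    """Linearly blend two parameter dicts: alpha * a + (1 - alpha) * b.

    For an 8:2 fusion, alpha = 0.8 gives 80% of model `a`
    (QwQ-32B-Preview-abliterated) and 20% of model `b`
    (Qwen2.5-Coder-32B-Instruct-abliterated).
    Lists of floats stand in here for real weight tensors.
    """
    merged = {}
    for name in a:
        merged[name] = [alpha * x + (1 - alpha) * y
                        for x, y in zip(a[name], b[name])]
    return merged

# Toy example: one "layer" with two parameters per model.
qwq = {"layer.weight": [1.0, 2.0]}
coder = {"layer.weight": [3.0, 6.0]}
print(merge_weights(qwq, coder))  # values close to [1.4, 2.8]
```

Changing `alpha` to 0.9 or 0.7 reproduces the 9:1 and 7:3 variants mentioned above.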

Model Details

ollama

You can use huihui_ai/qwq-fusion:32b-8020 directly:

ollama run huihui_ai/qwq-fusion:32b-8020

Other proportions can be obtained by visiting huihui_ai/qwq-fusion.
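Besides the interactive `ollama run` command, the model can also be queried programmatically through Ollama's local REST API. The sketch below is a minimal illustration assuming a local Ollama server on its default port (11434) with the model already pulled; the prompt is hypothetical.

```python
import json
import urllib.request

def build_generate_request(model, prompt):
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False})

def generate(prompt, model="huihui_ai/qwq-fusion:32b-8020"):
    """Send a single non-streaming generation request to a local Ollama server."""
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=build_generate_request(model, prompt).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(generate("Write a Python function that reverses a string."))
```

Swapping the `model` tag (e.g., to another ratio from huihui_ai/qwq-fusion) selects a different blend.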

