# Lytta 2.5 32B Instruct
Lytta 2.5 32B Instruct is a normalized, denoised Fourier interpolation of the following models:
```yaml
output_base_model: "maldv/Qwentile2.5-32B-Instruct"
finetune_merge:
  - { "model": "prnshv/ORANSight_Qwen_32B_Instruct", "base": "Qwen/Qwen2.5-32B-Instruct", "alpha": 0.3 }
  - { "model": "crestf411/Q2.5-32B-Slush", "base": "Qwen/Qwen2.5-32B-Instruct", "alpha": 0.5 }
  - { "model": "allura-org/Qwen2.5-32b-RP-Ink", "base": "Qwen/Qwen2.5-32B-Instruct", "alpha": 0.3 }
  - { "model": "Sao10K/32B-Qwen2.5-Kunou-v1", "base": "Qwen/Qwen2.5-32B-Instruct", "alpha": 1.0 }
  - { "model": "huihui-ai/QwQ-32B-Preview-abliterated", "base": "Qwen/Qwen2.5-32B", "alpha": 0.75 }
  - { "model": "Saxo/Linkbricks-Horizon-AI-Japanese-Base-32B", "base": "Qwen/Qwen2.5-32B", "alpha": 0.5 }
```
In other words, all of these models get warped and interpolated in signal space, and then jammed back on Qwentile.
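Roughly, the procedure looks like the sketch below. This is an illustrative PyTorch outline of the idea only, not the actual merge code: the function names (`denoised_fourier_delta`, `merge_tensor`), the `keep` fraction, and the per-tensor alpha normalization are assumptions made for the example.

```python
# Illustrative sketch (not the real merge code): for each finetune, take its
# delta from its base, move it into frequency space, drop low-magnitude (noisy)
# coefficients, renormalize, blend the deltas by alpha, and add the result back
# onto the output base model (Qwentile).
import torch

def denoised_fourier_delta(finetune_w, base_w, keep=0.9):
    """FFT the weight delta, zero the weakest coefficients, return the cleaned delta."""
    delta = (finetune_w - base_w).to(torch.float32)
    spec = torch.fft.fft(delta.flatten())
    mags = spec.abs()
    cutoff = torch.quantile(mags, 1.0 - keep)
    spec = spec * (mags >= cutoff)          # keep only the strongest coefficients
    cleaned = torch.fft.ifft(spec).real.reshape(delta.shape)
    # normalize so the denoised delta keeps the original delta's scale
    if cleaned.norm() > 0:
        cleaned = cleaned * (delta.norm() / cleaned.norm())
    return cleaned

def merge_tensor(output_base_w, entries):
    """entries: list of (finetune_w, base_w, alpha) for one parameter tensor."""
    merged = output_base_w.to(torch.float32).clone()
    total_alpha = sum(alpha for _, _, alpha in entries)
    for finetune_w, base_w, alpha in entries:
        merged += (alpha / total_alpha) * denoised_fourier_delta(finetune_w, base_w)
    return merged.to(output_base_w.dtype)
```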
## What is this?
I had a request to make Qwentile have more thought, but I think in doing so I might have made it... unhinged? ¯\_(ツ)_/¯
Better? Worse? Judge for yourself. While it seems to have lost some of its instruction-following ability, its writing quality and creativity punch well above its weight.
## Citation
If you find our work helpful, feel free to cite it.
```bibtex
@misc{lytta2.5-32b-instruct,
  title = {Lytta 2.5 32B Instruct},
  url = {https://huggingface.co/maldv/Lytta2.5-32B-Instruct},
  author = {Praxis Maldevide},
  month = {January},
  year = {2025}
}
```