|
--- |
|
license: apache-2.0 |
|
base_model: |
|
- Qwen/Qwen2.5-32B-Instruct |
|
tags: |
|
- roleplay |
|
- conversational |
|
language: |
|
- en |
|
--- |
|
# Qwen 2.5 32b RP Ink |
|
![image/png](https://cdn-uploads.huggingface.co/production/uploads/634262af8d8089ebaefd410e/1_Zt_OvEW183lmrgidQw8.png)
|
|
|
A roleplay-focused LoRA finetune of Qwen 2.5 32b Instruct. Methodology and hyperparams inspired by [SorcererLM](https://huggingface.co/rAIfle/SorcererLM-8x22b-bf16) and [Slush](https://huggingface.co/crestf411/Q2.5-32B-Slush). |
|
Yet another model in the Ink series, following in the footsteps of [the Nemo one](https://huggingface.co/allura-org/MN-12b-RP-Ink).
|
|
|
## Testimonials |
|
> whatever I tested was crack [...] It's got some refreshingly good prose, that's for sure |
|
|
|
\- TheLonelyDevil |
|
|
|
> The NTR is fantastic with this tune, lots of good gooning to be had. [...] Description and scene setting prose flows smoothly in comparison to larger models. |
|
|
|
\- TonyTheDeadly |
|
|
|
> This 32B handles complicated scenarios well, compared to a lot of 70Bs I've tried. Characters are portrayed accurately. |
|
|
|
\- Severian |
|
|
|
> From the very limited testing I did, I quite like this. [...] I really like the way it writes. |
|
> Granted, I'm completely shitfaced right now, but I'm pretty sure it's good. |
|
|
|
\- ALK |
|
|
|
> [This model portrays] my character card almost exactly the way that I write them. It's a bit of a dream to get that with many of the current LLM. |
|
|
|
\- ShotMisser64 |
|
|
|
## Dataset |
|
The worst mix of data you've ever seen. Like, seriously, you do not want to see the things that went into this model. It's bad. |
|
|
|
"this is like washing down an adderall with a bottle of methylated rotgut" - inflatebot |
|
|
|
## Quants |
|
- [Imatrix GGUFs (thanks, bart!)](https://huggingface.co/bartowski/Qwen2.5-32b-RP-Ink-GGUF) |
|
|
|
## Recommended Settings |
|
Chat template: ChatML |
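ChatML wraps each turn in `<|im_start|>`/`<|im_end|>` markers, with the role name on the header line:

```
<|im_start|>system
You are a helpful roleplay partner.<|im_end|>
<|im_start|>user
Hello!<|im_end|>
<|im_start|>assistant
```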
|
Recommended samplers (not the be-all-end-all, try some on your own!): |
|
- Temp 0.85 / Top P 0.8 / Top A 0.3 / Rep Pen 1.03 |
|
- Your samplers can go here! :3 |
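Temp, Top P, and Rep Pen are available in basically every frontend, but Top A isn't exposed by all backends. If yours lacks it, the filter is simple to sketch yourself (a minimal illustration, assuming the usual Top A definition: drop tokens whose probability falls below `top_a * p_max²`):

```python
def top_a_filter(probs, top_a=0.3):
    """Top A sampling: zero out tokens with probability below top_a * p_max**2.

    `probs` is a plain list of token probabilities; surviving tokens keep
    their original probability (renormalize before sampling from them).
    """
    p_max = max(probs)
    threshold = top_a * p_max ** 2
    return [p if p >= threshold else 0.0 for p in probs]


# A peaked distribution prunes the tail; a flat one passes through untouched.
peaked = top_a_filter([0.6, 0.3, 0.05, 0.05])
flat = top_a_filter([0.25, 0.25, 0.25, 0.25])
```

The nice property (and why it pairs well with a moderate temp) is that the threshold scales with the square of the top probability: confident steps prune aggressively, uncertain steps leave the distribution alone.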
|
|
|
## Hyperparams |
|
### General |
|
- Epochs = 1 |
|
- LR = 6e-5 |
|
- LR Scheduler = Cosine |
|
- Optimizer = Paged AdamW 8bit |
|
- Effective batch size = 16 |
|
### LoRA |
|
- Rank = 16 |
|
- Alpha = 32 |
|
- Dropout = 0.25 (Inspiration: [Slush](https://huggingface.co/crestf411/Q2.5-32B-Slush)) |
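For reference, the tables above map onto a PEFT/TRL-style setup roughly like this. This is a sketch, not the actual training script: target modules and the batch-size split are illustrative assumptions (effective batch 16 could be any `per_device × grad_accum × gpus` combination).

```python
from peft import LoraConfig
from transformers import TrainingArguments

# LoRA table: rank 16, alpha 32, dropout 0.25
lora_cfg = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.25,
    task_type="CAUSAL_LM",
)

# General table: 1 epoch, LR 6e-5, cosine schedule, paged 8-bit AdamW.
# 2 x 8 = effective batch 16 (illustrative split, single GPU assumed).
train_args = TrainingArguments(
    num_train_epochs=1,
    learning_rate=6e-5,
    lr_scheduler_type="cosine",
    optim="paged_adamw_8bit",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
)
```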
|
|
|
## Credits |
|
Humongous thanks to the people who created the data. I would credit you all, but that would be cheating ;) |
|
Big thanks to all Allura members, for testing and emotional support ilya /platonic |
|
especially to inflatebot who made the model card's image :3 |
|
Another big thanks to all the members of the ArliAI Discord server for testing! All of the people featured in the testimonials are from there :3 |