|
--- |
|
library_name: transformers |
|
license: other |
|
license_name: qwen |
|
license_link: https://huggingface.co/Qwen/Qwen2.5-32B-Instruct/blob/main/LICENSE
|
base_model: Sao10K/32B-Qwen2.5-Kunou-v1 |
|
tags: |
|
- generated_from_trainer |
|
- llama-cpp |
|
- gguf-my-repo |
|
model-index: |
|
- name: 32B-Qwen2.5-Kunou-v1 |
|
results: [] |
|
--- |
|
|
|
# Triangle104/32B-Qwen2.5-Kunou-v1-Q5_K_S-GGUF |
|
This model was converted to GGUF format from [`Sao10K/32B-Qwen2.5-Kunou-v1`](https://huggingface.co/Sao10K/32B-Qwen2.5-Kunou-v1) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
|
Refer to the [original model card](https://huggingface.co/Sao10K/32B-Qwen2.5-Kunou-v1) for more details on the model.
|
|
|
--- |
|
## Model details
|
I do not really have anything planned for this model other than it being a generalist and roleplay model. It was just something made and planned in minutes.

Same with the 14B and 72B versions.
|
Kunou's the name of an OC I worked on for a couple of years, for a... fanfic. mmm... |
|
|
|
A kind-of successor to L3-70B-Euryale-v2.2 in all but name? I'm keeping the Stheno/Euryale lineage on the Llama series for now.

I had a version made on top of Nemotron, a supposed Euryale 2.4, but that flopped hard; it was not my cup of tea.

This version basically uses a better, more cleaned-up dataset than the one used on Euryale and Stheno.
|
|
|
Recommended Model Settings | Look, I just use these; they work fine enough. I don't even know how DRY or other meme samplers work. Your system prompt matters more anyway.
|
|
|
- Prompt Format: ChatML
- Temperature: 1.1
- min_p: 0.1
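
In llama.cpp terms (see the usage section below), these settings correspond roughly to the flags in the sketch that follows. This is a minimal, hypothetical invocation: `--chat-template chatml` and conversation mode (`-cnv`) exist in recent llama.cpp builds, but flag availability varies by version.

```bash
# Hypothetical llama-cli invocation applying the recommended settings above.
# --chat-template / -cnv availability depends on your llama.cpp version.
llama-cli --hf-repo Triangle104/32B-Qwen2.5-Kunou-v1-Q5_K_S-GGUF \
  --hf-file 32b-qwen2.5-kunou-v1-q5_k_s.gguf \
  --chat-template chatml -cnv \
  --temp 1.1 --min-p 0.1
```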
|
|
|
Future-ish plans: |
|
- Complete this model series. |
|
- Further refine the datasets used for quality: more secondary chats, more creative-related domains. (Inspired by Drummer)

- Work on my other incomplete projects. About half a dozen have been on the backburner for a while now.
|
|
|
Special thanks to my wallet for funding this, my juniors who share a single braincell between them, and my current national service. |
|
Stay safe. There have been more emergency calls, more incidents this holiday season. |
|
|
|
Also sorry for the inactivity. Life was in the way. It still is, just less so, for now. Burnout is a thing, huh? |
|
|
|
https://sao10k.carrd.co/ for contact. |
|
|
|
--- |
|
## Use with llama.cpp |
|
Install llama.cpp through brew (works on Mac and Linux):
|
|
|
```bash
brew install llama.cpp
```
|
Invoke the llama.cpp server or the CLI. |
|
|
|
### CLI: |
|
```bash
llama-cli --hf-repo Triangle104/32B-Qwen2.5-Kunou-v1-Q5_K_S-GGUF --hf-file 32b-qwen2.5-kunou-v1-q5_k_s.gguf -p "The meaning to life and the universe is"
```
|
|
|
### Server: |
|
```bash
llama-server --hf-repo Triangle104/32B-Qwen2.5-Kunou-v1-Q5_K_S-GGUF --hf-file 32b-qwen2.5-kunou-v1-q5_k_s.gguf -c 2048
```
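
Once the server is up, you can query it over HTTP. A minimal sketch, assuming the default host/port of 127.0.0.1:8080 and llama.cpp's OpenAI-compatible `/v1/chat/completions` endpoint; the `min_p` field is a llama.cpp extension and may not be honored by every server version.

```bash
# Query the running llama-server (assumes default host/port 127.0.0.1:8080).
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "Write a short scene-setting paragraph."}
        ],
        "temperature": 1.1,
        "min_p": 0.1
      }'
```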
|
|
|
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
|
|
|
Step 1: Clone llama.cpp from GitHub. |
|
```bash
git clone https://github.com/ggerganov/llama.cpp
```
|
|
|
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

```bash
cd llama.cpp && LLAMA_CURL=1 make
```
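
For example, a CUDA-enabled build on Linux might look like the sketch below. `LLAMA_CUDA=1` follows the flag named above, though newer llama.cpp releases have moved to CMake and renamed some build options, so check the repo's build docs for your version.

```bash
# Build with CURL support and CUDA offloading (flag names per the note above;
# newer llama.cpp versions use CMake instead of make).
cd llama.cpp && LLAMA_CURL=1 LLAMA_CUDA=1 make
```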
|
|
|
Step 3: Run inference through the main binary. |
|
```bash
./llama-cli --hf-repo Triangle104/32B-Qwen2.5-Kunou-v1-Q5_K_S-GGUF --hf-file 32b-qwen2.5-kunou-v1-q5_k_s.gguf -p "The meaning to life and the universe is"
```
|
or |
|
```bash
./llama-server --hf-repo Triangle104/32B-Qwen2.5-Kunou-v1-Q5_K_S-GGUF --hf-file 32b-qwen2.5-kunou-v1-q5_k_s.gguf -c 2048
```
|
|