72B-Qwen2.5-Kunou-v1 - EXL2 2.25bpw
This is a 2.25bpw EXL2 quant of Sao10K/72B-Qwen2.5-Kunou-v1.
Details about the model can be found at the above model page.
EXL2 Version
These quants were made with exllamav2 version 0.2.4. Quants made on this version of EXL2 may not work on older versions of the exllamav2 library.
If you have problems loading these models, please update Text Generation WebUI to the latest version.
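Since quants made with a newer exllamav2 can fail to load on older library versions, it can help to check the installed version before loading. A minimal sketch (the helper names are illustrative, not part of exllamav2):

```python
# Check whether an installed exllamav2 version meets the minimum
# required by these quants (0.2.4, per the note above).
MIN_EXL2_VERSION = "0.2.4"

def parse_version(version: str) -> tuple:
    """Turn a dotted version string like '0.2.4' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))

def quant_compatible(installed: str, minimum: str = MIN_EXL2_VERSION) -> bool:
    """True if the installed exllamav2 version is at least the minimum."""
    return parse_version(installed) >= parse_version(minimum)
```

In practice you would pass in the version reported by your exllamav2 install (e.g. via `importlib.metadata.version("exllamav2")`) and update Text Generation WebUI if the check fails.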
Perplexity Scoring
Below are the perplexity scores for the EXL2 models. A lower score is better.
| Quant Level (bpw) | Perplexity Score |
|---|---|
| 4.5 | 5.0995 |
| 4.0 | 5.1480 |
| 3.5 | 5.3055 |
| 3.0 | 5.6398 |
| 2.75 | 6.0205 |
| 2.5 | 6.5372 |
| 2.25 | 7.2350 |
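For reference, perplexity is the exponential of the mean per-token negative log-likelihood, which is why lower is better: it corresponds to the model being less "surprised" by the evaluation text. A minimal sketch of the computation (the function name is illustrative; the scores above were produced by the exllamav2 evaluation tooling, not this snippet):

```python
import math

def perplexity(nlls: list) -> float:
    """Perplexity = exp(mean per-token negative log-likelihood)."""
    return math.exp(sum(nlls) / len(nlls))

# Sanity check: if every token has NLL = ln(2) (a uniform 50/50 guess),
# perplexity is exactly 2 -- the model is as "surprised" as a coin flip.
print(perplexity([math.log(2)] * 4))
```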
Model tree for Dracones/72B-Qwen2.5-Kunou-v1_exl2_2.25bpw
- Base model: Qwen/Qwen2.5-72B
- Finetuned: Qwen/Qwen2.5-72B-Instruct
- Finetuned: Sao10K/72B-Qwen2.5-Kunou-v1