|
--- |
|
base_model: [] |
|
library_name: transformers |
|
tags: |
|
- mergekit |
|
- merge |
|
- llama |
|
- not-for-all-audiences |
|
license: other |
|
--- |
|
# Silver-Sun-v2-11B |
|
|
|
![image/png](https://cdn-uploads.huggingface.co/production/uploads/65d936ad52eca001fdcd3245/9DobeVeyL98G7QUufEeQg.png)
|
|
|
> This is an updated version of Silver-Sun-11B. The change is that the Solstice-FKL-v2-10.7B merge now uses Sao10K/Fimbulvetr-11B-v2 instead of v1.

> Additionally, the config of the original Silver-Sun was wrong; it has been fixed in this version.

> As expected, this is a HIGHLY uncensored model. It should perform even better than v1 thanks to the updated Fimbulvetr and the fixed config.
|
|
|
**Works with Alpaca and, from my tests, also with ChatML. However, Alpaca may be the better option. Try both and use whichever works better for you.**
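
For reference, a typical Alpaca-style prompt looks like the following. This is the standard community template rather than anything bundled with this model, and the `{instruction}` placeholder is yours to fill:

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:
```
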
|
**Due to a quirk with Solar, for the best quality either launch at 4K context, or launch at 8K (and possibly beyond; I have not tested higher) with 4K of context pre-loaded in the prompt.**
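
If you run the GGUF quants (linked below) locally, the context window is set at load time. A minimal sketch using llama-cpp-python, assuming a downloaded quant; the filename is hypothetical:

```python
from llama_cpp import Llama

# Load a local GGUF quant; the filename below is hypothetical.
llm = Llama(
    model_path="Silver-Sun-v2-11B.Q4_K_M.gguf",
    n_ctx=4096,  # launch at 4K context, per the note above
)

# Alpaca-style prompt, matching the recommended format.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite the opening line of a storm-at-sea scene.\n\n"
    "### Response:\n"
)
out = llm(prompt, max_tokens=128)
print(out["choices"][0]["text"])
```
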
|
|
|
> This model is intended for fictional storytelling and writing, with a focus on NSFW capability and a lack of censorship for role-play purposes.
|
|
|
[GGUF / IQ / Imatrix](https://huggingface.co/ABX-AI/Silver-Sun-v2-11B-GGUF-IQ-Imatrix) |
|
|
|
## Merge Details |
|
|
|
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). |
|
|
|
### Merge Method |
|
|
|
This model was merged using the SLERP merge method. |
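
For intuition, SLERP interpolates each pair of weight tensors along the arc of a hypersphere rather than along a straight line, which preserves the magnitude characteristics of both parents better than plain averaging. A simplified sketch of the idea (not mergekit's exact implementation):

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors of the same shape."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    # Angle between the two weight vectors.
    omega = torch.acos(torch.clamp(torch.dot(a_unit, b_unit), -1.0, 1.0))
    so = torch.sin(omega)
    if so.abs() < eps:
        # Nearly parallel vectors: fall back to plain linear interpolation.
        return ((1.0 - t) * a_flat + t * b_flat).reshape(a.shape)
    out = (torch.sin((1.0 - t) * omega) / so) * a_flat + (torch.sin(t * omega) / so) * b_flat
    return out.reshape(a.shape)
```

In the configuration below, `t` is this interpolation factor: 0 keeps the base model's tensor, 1 takes the other model's, with per-layer gradients for the attention and MLP blocks.
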
|
|
|
### Models Merged |
|
|
|
The following models were included in the merge: |
|
* [Himitsui/Kaiju-11B](https://huggingface.co/Himitsui/Kaiju-11B) |
|
* ABX-AI/Solstice-FKL-v2-10.7B |
|
>[!NOTE]
>A mixture of [Sao10K/Solstice-11B-v1](https://huggingface.co/Sao10K/Solstice-11B-v1) and
>ABX-AI/Fimbulvetr-Kuro-Lotus-v2-10.7B, an updated version of saishf/Fimbulvetr-Kuro-Lotus-10.7B that uses Fimbulvetr v2.
|
|
|
### OpenLLM Eval Results |
|
|
|
[Detailed Results + Failed GSM8K](https://huggingface.co/datasets/open-llm-leaderboard/details_ABX-AI__Silver-Sun-v2-11B) |
|
|
|
|
|
>[!NOTE] |
|
>I had to remove GSM8K from the results and manually re-average the rest. GSM8K failed due to a formatting issue that does not come up in practical usage.

>With the GSM8K score removed, the average is VERY close to that of upstage/SOLAR-10.7B-v1.0 (74.20), which would make sense.
|
>Feel free to ignore the actual average and use the other scores individually for reference. |
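
As a quick sanity check, re-averaging the five remaining benchmark scores reproduces the figure in the table below:

```python
# Re-average the five remaining OpenLLM scores after dropping GSM8K.
scores = [69.88, 87.81, 66.74, 62.49, 83.27]  # ARC, HellaSwag, MMLU, TruthfulQA, Winogrande
print(round(sum(scores) / len(scores), 2))  # -> 74.04
```
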
|
|
|
| Metric |Value| |
|
|---------------------------------|----:| |
|
|Avg. |74.04| |
|
|AI2 Reasoning Challenge (25-Shot)|69.88| |
|
|HellaSwag (10-Shot) |87.81| |
|
|MMLU (5-Shot) |66.74| |
|
|TruthfulQA (0-shot) |62.49| |
|
|Winogrande (5-shot) |83.27| |
|
|
|
|
|
### Configuration |
|
|
|
The following YAML configuration was used to produce this model: |
|
|
|
```yaml
slices:
  - sources:
      - model: ./MODELS/Solstice-FKL-v2-10.7B
        layer_range: [0, 48]
      - model: Himitsui/Kaiju-11B
        layer_range: [0, 48]
merge_method: slerp
base_model: ./MODELS/Solstice-FKL-v2-10.7B
parameters:
  t:
    - filter: self_attn
      value: [0.6, 0.7, 0.8, 0.9, 1]
    - filter: mlp
      value: [0.4, 0.3, 0.2, 0.1, 0]
    - value: 0.5
dtype: bfloat16
```