---
base_model: []
library_name: transformers
tags:
  - mergekit
  - merge
  - llama
  - not-for-all-audiences
license: other
---

# Silver-Sun-v2-11B


This is an updated version of Silver-Sun-11B. The change is that the Solstice-FKL-v2-10.7B merge now uses Sao10K/Fimbulvetr-11B-v2 instead of v1. Additionally, the config of the original Silver-Sun was wrong, and it has been fixed here. As expected, this is a HIGHLY uncensored model. It should perform even better than v1 thanks to the updated Fimbulvetr and the corrected config.

Works with Alpaca and, from my tests, ChatML as well, though Alpaca may be the better option; try both and use whichever works better for you. Due to a quirk with Solar, for the best quality either launch at 4K context, or launch at 8K (and possibly beyond; I have not tested it that high) with 4K of context pre-loaded in the prompt.
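As a reference point, here is a minimal llama-cpp-python sketch that loads a GGUF quant at 4K context and prompts it in the standard Alpaca format. The quant filename, instruction text, and sampling settings are placeholders, not shipped defaults:

```python
from llama_cpp import Llama

# Hypothetical quant filename; substitute whichever GGUF / IQ quant you downloaded.
llm = Llama(model_path="Silver-Sun-v2-11B.Q4_K_M.gguf", n_ctx=4096)

# Standard Alpaca prompt template.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite the opening scene of a rainy noir story.\n\n"
    "### Response:\n"
)

out = llm(prompt, max_tokens=256, temperature=0.8, stop=["### Instruction:"])
print(out["choices"][0]["text"])
```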

This model is intended for fictional storytelling and writing, focusing on NSFW capabilities and lack of censorship for RP reasons.

## GGUF / IQ / Imatrix

## Merge Details

This is a merge of pre-trained language models created using mergekit.

### Merge Method

This model was merged using the SLERP merge method.
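For intuition, SLERP interpolates along the arc between two weight tensors rather than the straight line a plain weighted average takes, which better preserves the geometry of the weights. Here is a minimal NumPy sketch of the idea (a simplified illustration, not mergekit's exact implementation):

```python
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical interpolation between two flattened weight tensors.

    t=0 returns a, t=1 returns b; intermediate t follows the arc between them.
    """
    # Angle between the tensors, measured on their unit-norm copies.
    a_n = a / (np.linalg.norm(a) + eps)
    b_n = b / (np.linalg.norm(b) + eps)
    omega = np.arccos(np.clip(np.dot(a_n, b_n), -1.0, 1.0))
    if omega < eps:
        # Nearly parallel tensors: plain linear interpolation is numerically safer.
        return (1.0 - t) * a + t * b
    so = np.sin(omega)
    return (np.sin((1.0 - t) * omega) / so) * a + (np.sin(t * omega) / so) * b
```

In the configuration below, `t` is ramped across layer groups separately for `self_attn` and `mlp` tensors, with a flat 0.5 everywhere else.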

### Models Merged

The following models were included in the merge:

- ./MODELS/Solstice-FKL-v2-10.7B (local merge, used as the base model)
- Himitsui/Kaiju-11B

## OpenLLM Eval Results

Detailed Results + Failed GSM8K

I had to remove GSM8K from the results and manually re-average the rest. GSM8K failed due to a formatting issue that does not show up in practical usage. With GSM8K removed, the average is VERY close to upstage/SOLAR-10.7B-v1.0 (74.20), which makes sense. Feel free to ignore the average and use the other scores individually for reference.

| Metric                             | Value |
|------------------------------------|-------|
| Avg.                               | 74.04 |
| AI2 Reasoning Challenge (25-Shot)  | 69.88 |
| HellaSwag (10-Shot)                | 87.81 |
| MMLU (5-Shot)                      | 66.74 |
| TruthfulQA (0-shot)                | 62.49 |
| Winogrande (5-shot)                | 83.27 |
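The re-averaged figure is simply the arithmetic mean of the five remaining benchmarks: (69.88 + 87.81 + 66.74 + 62.49 + 83.27) / 5 ≈ 74.04.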

## Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: ./MODELS/Solstice-FKL-v2-10.7B
        layer_range: [0, 48]
      - model: Himitsui/Kaiju-11B
        layer_range: [0, 48]
merge_method: slerp
base_model: ./MODELS/Solstice-FKL-v2-10.7B
parameters:
  t:
    - filter: self_attn
      value: [0.6, 0.7, 0.8, 0.9, 1]
    - filter: mlp
      value: [0.4, 0.3, 0.2, 0.1, 0]
    - value: 0.5
dtype: bfloat16
```
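To reproduce the merge (assuming local copies of the ingredient models at the paths above), this file can be passed straight to mergekit's CLI, e.g. `mergekit-yaml config.yml ./output-dir`, where the config and output names here are placeholders.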