---
base_model: []
library_name: transformers
tags:
  - mergekit
  - merge
  - not-for-all-audiences
---

# Eclectic-Maid-7B-v2

I believe this model is a significant improvement over the original. The recipes below are not exact, but they at least give an idea of which models are involved. This is a merge of pre-trained language models created using mergekit.
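Since the card declares `library_name: transformers`, the merged model should load with the usual `transformers` pattern. A minimal sketch, assuming the weights are published under the repo id `ND911/Eclectic-Maid-7B-v2` (inferred from the card's location, not confirmed here):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "ND911/Eclectic-Maid-7B-v2"  # assumed repo id; adjust if the weights live elsewhere

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype="auto", device_map="auto")

prompt = "Write a short scene set in a seaside inn."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```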

## Merge Details

See below.

### Merge Method

The final model was assembled with the passthrough merge method, stacking layer ranges of an intermediate merge; the intermediates themselves were built with the `dare_ties` method (see the configurations below).
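For intuition, passthrough simply concatenates the listed layer ranges into a deeper network. A quick sketch of the resulting depth, assuming mergekit's half-open `layer_range` convention (end index exclusive):

```python
# Layer ranges from the passthrough config below (half-open: [start, end)).
slices = [(0, 16), (8, 24), (17, 32)]

total_layers = sum(end - start for start, end in slices)
print(total_layers)  # 47 layers, versus 32 in a stock Mistral-7B stack
```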

### Models Merged

See below.

### Configuration

The following YAML configurations were used to produce this model. There are three stages: the passthrough layer stack for the final model, followed by the two `dare_ties` merges that built the intermediate models.

Final passthrough layer stack:

```yaml
slices:
  - sources:
    - model: "Maid-Reborn-v22-10B"
      layer_range: [0, 16]
  - sources:
    - model: "Maid-Reborn-v22-10B"
      layer_range: [8, 24]
  - sources:
    - model: "Maid-Reborn-v22-10B"
      layer_range: [17, 32]
merge_method: passthrough
dtype: float16
```
Intermediate `dare_ties` merge on `mistralai/Mistral-7B-v0.1`:

```yaml
models:
  - model: mistralai/Mistral-7B-v0.1
    # No parameters necessary for base model
  - model: cognitivecomputations/WestLake-7B-v2-laser
    parameters:
      density: 0.58
      weight: [0.3877, 0.1636, 0.186, 0.0502]
  - model: senseable/garten2-7b
    parameters:
      density: 0.58
      weight: [0.234, 0.2423, 0.2148, 0.2775]
  - model: berkeley-nest/Starling-LM-7B-alpha
    parameters:
      density: 0.58
      weight: [0.1593, 0.1573, 0.1693, 0.3413]
  - model: mlabonne/AlphaMonarch-7B
    parameters:
      density: 0.58
      weight: [0.219, 0.4368, 0.4299, 0.331]
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
  int8_mask: true
dtype: bfloat16
name: Maid-Reborn-v22
```
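For intuition on the `dare_ties` stages: DARE keeps a random `density` fraction of each fine-tune's delta from the base model and rescales the survivors by `1/density` before the weighted sum. A toy per-tensor sketch in PyTorch (the TIES sign-election step is omitted; this is not mergekit's actual implementation):

```python
import torch

def dare_merge(base: torch.Tensor,
               finetunes: list[tuple[torch.Tensor, float]],
               density: float) -> torch.Tensor:
    """Toy DARE merge of one tensor; finetunes holds (weights, merge_weight) pairs."""
    merged = base.clone()
    for weights, merge_weight in finetunes:
        delta = weights - base                             # task vector vs. the base
        mask = torch.bernoulli(torch.full_like(delta, density))
        merged += merge_weight * (delta * mask) / density  # drop, rescale, accumulate
    return merged
```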
Intermediate `dare_ties` merge on `mistralai/Mistral-7B-Instruct-v0.2`:

```yaml
models:
  - model: mistralai/Mistral-7B-Instruct-v0.2
    # No parameters necessary for base model
  - model: xDAN-AI/xDAN-L1-Chat-RL-v1
    parameters:
      weight: 0.4
      density: 0.8
  - model: Undi95/BigL-7B
    parameters:
      weight: 0.3
      density: 0.8
  - model: SanjiWatsuki/Loyal-Toppy-Bruins-Maid-7B-DARE
    parameters:
      weight: 0.2
      density: 0.4
  - model: NeverSleep/Noromaid-7B-0.4-DPO
    parameters:
      weight: 0.2
      density: 0.4
  - model: NSFW_DPO_Noromaid-7B-v2
    parameters:
      weight: 0.2
      density: 0.4
merge_method: dare_ties
base_model: mistralai/Mistral-7B-Instruct-v0.2
parameters:
  int8_mask: true
dtype: bfloat16
```
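Each stage is typically run with mergekit's `mergekit-yaml` CLI (`mergekit-yaml config.yml ./output-dir`), feeding every intermediate output directory into the next config. A minimal Python sketch of the same step, assuming mergekit's documented `MergeConfiguration`/`run_merge` entry points (the config path is a placeholder):

```python
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load one of the stage configs above (placeholder filename).
with open("maid-reborn-v22.yml", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Write the merged weights; repeat per stage, pointing later configs at earlier outputs.
run_merge(merge_config, "./Maid-Reborn-v22", options=MergeOptions(copy_tokenizer=True))
```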