CultriX/Qwenfinity-2.5-14B

This is a merge of pre-trained language models created using mergekit.
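
The merged model loads like any other Hugging Face causal language model. A minimal usage sketch with transformers (the prompt and generation settings are illustrative assumptions, not recommendations from the model author):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CultriX/Qwenfinity-2.5-14B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype declared in the merge config
    device_map="auto",
)

messages = [{"role": "user", "content": "Explain the TIES merge method in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```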

Merge Details

Merge Method

This model was merged with the DARE TIES merge method, using CultriX/SeQwence-14Bv1 as the base model.
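
DARE TIES combines two ideas: DARE keeps only a random fraction of each donor model's delta from the base (the density parameter in the configuration below) and rescales the surviving entries, while TIES resolves sign conflicts between donors by electing a dominant sign per parameter before the weighted deltas are summed back onto the base. A toy NumPy sketch of that idea on flat parameter vectors (an illustration only, not mergekit's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def dare_ties_merge(base, donors, weights, densities):
    """Toy DARE-TIES on 1-D parameter vectors (illustrative only)."""
    deltas = []
    for donor, w, d in zip(donors, weights, densities):
        delta = donor - base                    # task vector: donor minus base
        keep = rng.random(delta.shape) < d      # DARE: keep ~density of the entries
        delta = np.where(keep, delta / d, 0.0)  # rescale survivors to preserve expectation
        deltas.append(w * delta)
    deltas = np.stack(deltas)
    sign = np.sign(deltas.sum(axis=0))          # TIES: elect the dominant sign per parameter
    agree = np.sign(deltas) == sign             # drop contributions that fight the elected sign
    merged = np.where(agree, deltas, 0.0).sum(axis=0)
    norm = np.where(agree, np.array(weights)[:, None], 0.0).sum(axis=0)
    return base + merged / np.maximum(norm, 1e-8)  # weight normalization, as with `normalize: true`

base = rng.normal(size=8)
donors = [base + rng.normal(scale=0.1, size=8) for _ in range(3)]
print(dare_ties_merge(base, donors, weights=[0.18, 0.18, 0.12], densities=[0.6, 0.6, 0.55]))
```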

Models Merged

The following models were included in the merge:

- CultriX/Qwen2.5-14B-Wernickev3
- allknowingroger/QwenSlerp6-14B
- CultriX/Qwen2.5-14B-Unity
- qingy2019/Qwen2.5-Math-14B-Instruct
- sometimesanotion/Qwen2.5-14B-Vimarckoso
- CultriX/Qwen2.5-14B-Emergedv3
- CultriX/SeQwence-14B-EvolMerge

Configuration

The following YAML configuration was used to produce this model:

models:
  - model: CultriX/SeQwence-14Bv1
    parameters:
      weight: 0.18
      density: 0.6
  - model: CultriX/Qwen2.5-14B-Wernickev3
    parameters:
      weight: 0.18
      density: 0.6
  - model: allknowingroger/QwenSlerp6-14B
    parameters:
      weight: 0.18
      density: 0.6
  - model: CultriX/Qwen2.5-14B-Unity
    parameters:
      weight: 0.12
      density: 0.55
  - model: qingy2019/Qwen2.5-Math-14B-Instruct
    parameters:
      weight: 0.1
      density: 0.55
  - model: sometimesanotion/Qwen2.5-14B-Vimarckoso
    parameters:
      weight: 0.1
      density: 0.55
  - model: CultriX/Qwen2.5-14B-Emergedv3
    parameters:
      weight: 0.1
      density: 0.55
  - model: CultriX/SeQwence-14B-EvolMerge
    parameters:
      weight: 0.1
      density: 0.55
merge_method: dare_ties
base_model: CultriX/SeQwence-14Bv1
parameters:
  normalize: true
  int8_mask: true
dtype: bfloat16
adaptive_merge_parameters:
  task_weights:
    IFEval: 1.6
    BBH: 1.8
    MATH: 1.6
    GPQA: 1.7
    MUSR: 1.7
    MMLU-PRO: 1.6
  smoothing_factor: 0.23
gradient_clipping: 0.85
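
The per-model weights above sum to 1.06 rather than 1.0; with normalize: true, mergekit rescales the contributions during the merge. To reproduce the merge, this YAML can be passed to mergekit. A rough sketch using mergekit's Python API, assuming the configuration is saved as config.yaml (option names follow mergekit's documented usage and may differ between versions):

```python
# Sketch: re-run the merge from config.yaml with mergekit's Python API.
# Paths and options here are assumptions, not the author's exact invocation.
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as f:  # the YAML shown above
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    out_path="./Qwenfinity-2.5-14B",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU if one is available
        copy_tokenizer=True,             # copy the base model's tokenizer to the output
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```

The equivalent command-line entry point is mergekit-yaml config.yaml ./Qwenfinity-2.5-14B.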