# qwen14-experimental-alt-v2

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).

## Merge Details

### Merge Method

This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with [ToastyPigeon/Qwen2.5-14B-Instruct-1M-Unalign](https://huggingface.co/ToastyPigeon/Qwen2.5-14B-Instruct-1M-Unalign) as the base.
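For intuition, the sketch below illustrates the TIES recipe on a single tensor: trim each task vector (fine-tune minus base) to its top-`density` fraction by magnitude, elect a per-parameter sign from the weighted sum, and average only the values that agree with that sign. This is a minimal illustration, not mergekit's exact implementation; the `ties_merge` helper and the toy arrays are hypothetical.

```python
import numpy as np

def ties_merge(base, finetuned, densities, weights):
    # Illustrative TIES merge for one tensor (not mergekit's exact code).
    # 1) Task vectors: what each fine-tune changed relative to the base.
    deltas = [ft - base for ft in finetuned]

    # 2) Trim: zero out all but the top-`density` fraction of each delta
    #    by absolute magnitude.
    trimmed = []
    for delta, density in zip(deltas, densities):
        k = max(1, int(round(density * delta.size)))
        threshold = np.partition(np.abs(delta).ravel(), -k)[-k]
        trimmed.append(np.where(np.abs(delta) >= threshold, delta, 0.0))

    # 3) Elect a sign per parameter from the weighted sum of trimmed deltas.
    stacked = np.stack([w * t for w, t in zip(weights, trimmed)])
    elected = np.sign(stacked.sum(axis=0))

    # 4) Disjoint merge: average only values agreeing with the elected sign,
    #    then add the result back onto the base weights.
    agrees = (np.sign(stacked) == elected) & (stacked != 0)
    summed = np.where(agrees, stacked, 0.0).sum(axis=0)
    count = np.maximum(agrees.sum(axis=0), 1)  # avoid division by zero
    return base + summed / count

# Toy usage: two hypothetical fine-tunes of an 8-parameter "layer".
rng = np.random.default_rng(0)
base = np.zeros(8)
fts = [base + rng.standard_normal(8), base + rng.standard_normal(8)]
merged = ties_merge(base, fts, densities=[0.8, 0.3], weights=[0.4, 0.2])
```

In the configuration below, `density` controls how much of each task vector survives trimming and `weight` scales that model's contribution during sign election and averaging.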

### Models Merged

The following models were included in the merge:

* output/tq14b-adventure-alt
* output/tq14b-fujin-alt
* output/tq14b-rp-alt
* output/tq14b-roselily-alt
* output/tq14b-gutenberg-alt

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: ToastyPigeon/Qwen2.5-14B-Instruct-1M-Unalign
merge_method: ties
slices:
- sources:
  - layer_range: [0, 48]
    model: output/tq14b-rp-alt
    parameters:
      density: 0.8
      weight: 0.4
  - layer_range: [0, 48]
    model: output/tq14b-roselily-alt
    parameters:
      density: 0.3
      weight: 0.2
  - layer_range: [0, 48]
    model: output/tq14b-fujin-alt
    parameters:
      density: 0.5
      weight: 0.2
  - layer_range: [0, 48]
    model: output/tq14b-gutenberg-alt
    parameters:
      density: 0.7
      weight: 0.2
  - layer_range: [0, 48]
    model: output/tq14b-adventure-alt
    parameters:
      density: 0.5
      weight: 0.1
  - layer_range: [0, 48]
    model: ToastyPigeon/Qwen2.5-14B-Instruct-1M-Unalign
```
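
To reproduce a merge like this, the usual workflow is to save the YAML above as, say, `merge-config.yaml` and run it through mergekit's CLI (roughly `mergekit-yaml merge-config.yaml ./output-directory`). Note that the `output/tq14b-*` entries are local checkpoint paths, so rerunning the merge requires access to those fine-tuned models.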