# wizard-elem-to-32k-7B

This is a merge of pre-trained language models created using mergekit.

In theory, context length has been extended to 32K tokens. In practice, output quality degrades above 8K context.

Tested with ChatML instruct prompts, temperature 1.0, and minP 0.01, but feel free to experiment.
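For reference, a minimal sketch of loading the model with those settings via Hugging Face transformers (the `min_p` sampling parameter requires a recent version, roughly >= 4.39); the prompt text is illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "grimjim/wizard-elem-to-32k-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # requires accelerate
)

# ChatML prompt format, as reported tested in this card.
prompt = (
    "<|im_start|>user\n"
    "Summarize the idea behind task arithmetic merges.<|im_end|>\n"
    "<|im_start|>assistant\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=1.0,  # sampler settings reported in this card
    min_p=0.01,
)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```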

## Merge Details

### Merge Method

This model was merged using the task arithmetic merge method, with grimjim/Mistral-7B-Instruct-demi-merge-v0.2-7B as the base.
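For background, task arithmetic builds the merged weights by adding weighted task vectors (parameter deltas relative to the base) back onto the base model:

$$\theta_{\text{merged}} = \theta_{\text{base}} + \sum_i w_i \left(\theta_i - \theta_{\text{base}}\right)$$

Here the single donor model is applied at weight 1.00; the base model's own task vector is zero by construction, so its entry in the slice contributes nothing beyond supplying the base weights.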

### Models Merged

The following models were included in the merge:

* lucyknada/microsoft_WizardLM-2-7B

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: grimjim/Mistral-7B-Instruct-demi-merge-v0.2-7B
dtype: bfloat16
merge_method: task_arithmetic
slices:
- sources:
  - layer_range: [0, 32]
    model: grimjim/Mistral-7B-Instruct-demi-merge-v0.2-7B
  - layer_range: [0, 32]
    model: lucyknada/microsoft_WizardLM-2-7B
    parameters:
      weight: 1.00
```
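To reproduce the merge, this config can be fed to mergekit, either via the `mergekit-yaml` CLI or its Python entry point. A sketch of the latter, assuming the YAML above is saved as `config.yml` (the output path is a placeholder):

```python
# Sketch: reproduce this merge via mergekit's Python API
# (roughly equivalent to `mergekit-yaml config.yml ./wizard-elem-to-32k-7B`).
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yml", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./wizard-elem-to-32k-7B",
    options=MergeOptions(
        cuda=True,            # merge on GPU if available
        copy_tokenizer=True,  # carry the tokenizer into the output
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```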