
Quantization made by Richard Erkhov.

• Github
• Discord
• Request more models

HX-Mistral-3B_v0.1 - bnb 4bits
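For reference, a minimal loading sketch with transformers and bitsandbytes. The repo id below is a placeholder for wherever these 4-bit weights are hosted, not a confirmed path; weights saved in bitsandbytes 4-bit format are picked up with their stored quantization config, so no extra quantization arguments should be needed:

from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "RichardErkhov/HX-Mistral-3B_v0.1-4bits"  # placeholder repo id, adjust as needed

tokenizer = AutoTokenizer.from_pretrained(repo_id)
# bitsandbytes must be installed and a CUDA device available for 4-bit weights
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

prompt = "[INST] What is a model merge? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))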

Original model description:

base_model:
  • mistralai/Mistral-7B-Instruct-v0.2
library_name: transformers
tags:
  • mergekit
  • merge

merge

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the linear merge method.
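Conceptually, a linear merge is a weighted average of matching parameter tensors. The sketch below is illustrative only (not the mergekit implementation), assuming two state dicts with identical keys and shapes and the 0.5/0.5 weights used in the configuration further down:

import torch

def linear_merge(state_dict_a, state_dict_b, weight_a=0.5, weight_b=0.5):
    """Return the element-wise weighted average of two state dicts."""
    merged = {}
    for name, tensor_a in state_dict_a.items():
        tensor_b = state_dict_b[name]
        merged[name] = weight_a * tensor_a + weight_b * tensor_b
    return merged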

Models Merged

The following models were included in the merge:

  • mistralai/Mistral-7B-Instruct-v0.2

Configuration

The following YAML configuration was used to produce this model:

dtype: float16
merge_method: linear
slices:
 - sources:
      - layer_range: [0, 16] # Assuming the first half of the model is more general and can be reduced more
        model: mistralai/Mistral-7B-Instruct-v0.2
        parameters:
          weight: 0.5 # Reduce the weight of the first half to make room for the second half
      - layer_range: [16, 32] # Assuming the second half of the model is more specialized and can be reduced less
        model: mistralai/Mistral-7B-Instruct-v0.2
        parameters:
          weight: 0.5 # Maintain the weight of the second half
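To reproduce the merge, this configuration can be saved to a file and passed to mergekit's command-line entry point. The file and output directory names below are arbitrary, and flags may differ across mergekit versions, so treat this as a sketch rather than an exact recipe:

# install mergekit (available on PyPI; installing from the GitHub repo also works)
pip install mergekit
# run the merge described by the YAML above
mergekit-yaml merge_config.yml ./merged-model --cuda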