Llama-3.X-Workout-70B


This is a merge of pre-trained language models created using mergekit.

Doomer but probably Gooder. The rest of the numbers are guessed.

This will probably be the last bad mix I make for a while. Going to touch grass in another country.

Merge Details

Merge Method

This model was merged using the TIES merge method, with SicariusSicariiStuff/Negative_LLAMA_70B as the base model.
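For intuition, here is a minimal NumPy sketch of the TIES idea: trim each model's delta from the base by its density, elect a per-parameter sign, then combine only the sign-consistent, weight-scaled updates. This is purely illustrative, not mergekit's actual implementation; the function and toy tensors are made up for the example.

import numpy as np

def ties_merge(base, finetuned, densities, weights):
    """Illustrative TIES-style merge of several fine-tuned tensors onto a base tensor."""
    deltas = []
    for ft, density, weight in zip(finetuned, densities, weights):
        delta = ft - base                       # task vector relative to the base
        k = int(np.ceil(density * delta.size))  # keep only the top-`density` fraction by magnitude
        cutoff = np.sort(np.abs(delta).ravel())[-k]
        trimmed = np.where(np.abs(delta) >= cutoff, delta, 0.0)
        deltas.append(weight * trimmed)         # scale by the per-model weight

    stacked = np.stack(deltas)
    elected = np.sign(stacked.sum(axis=0))      # elect a sign per parameter
    agree = np.sign(stacked) == elected         # keep only sign-consistent updates
    merged_delta = np.where(agree, stacked, 0.0).sum(axis=0)
    counts = np.maximum(agree.sum(axis=0), 1)   # avoid division by zero
    return base + merged_delta / counts         # disjoint mean, added back onto the base

# Toy example with three "models" derived from one base tensor.
rng = np.random.default_rng(0)
base = rng.normal(size=(4, 4))
fts = [base + rng.normal(scale=0.1, size=base.shape) for _ in range(3)]
print(ties_merge(base, fts, densities=[0.2, 0.5, 0.75], weights=[0.5, 0.25, 0.5]))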

Models Merged

The following models were included in the merge:

- Blackroot/Mirai-3.0-70B
- nbeerbower/Llama-3.1-Nemotron-lorablated-70B
- Doctor-Shotgun/L3.3-70B-Magnum-v4-SE
- EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
- TheDrummer/Anubis-70B-v1
- Sao10K/L3.3-70B-Euryale-v2.3
- Sao10K/70B-L3.3-Cirrus-x1
- nitky/Llama-3.3-SuperSwallowX-70B-Instruct-v0.1
- Undi95/Sushi-v1.4
- pankajmathur/orca_mini_v9_3_70B

Configuration

The following YAML configuration was used to produce this model:

models:
  - model: Blackroot/Mirai-3.0-70B
    parameters:
      density: 0.2
      weight: 0.5
  - model: nbeerbower/Llama-3.1-Nemotron-lorablated-70B
    parameters:
      density: 1
      weight: 0.25
  - model: Doctor-Shotgun/L3.3-70B-Magnum-v4-SE
    parameters:
      density: 0.3
      weight: 0.5
  - model: EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
    parameters:
      density: 0.75
      weight: 0.5
  - model: TheDrummer/Anubis-70B-v1
    parameters:
      density: 0.351
      weight: 0.751
  - model: Sao10K/L3.3-70B-Euryale-v2.3
    parameters:
      density: 0.420
      weight: 0.679
  - model: Sao10K/70B-L3.3-Cirrus-x1
    parameters:
      density: 0.43
      weight: 0.3
  - model: nitky/Llama-3.3-SuperSwallowX-70B-Instruct-v0.1
    parameters:
      density: 0.25
      weight: 0.2
  - model: Undi95/Sushi-v1.4
    parameters:
      density: 0.1457
      weight: 0.69
  - model: pankajmathur/orca_mini_v9_3_70B
    parameters:
      density: 0.2
      weight: 0.2

merge_method: ties
base_model: SicariusSicariiStuff/Negative_LLAMA_70B
parameters:
  normalize: true
dtype: bfloat16
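
To reproduce the merge, a config like the one above can be fed to mergekit. The sketch below assumes mergekit's documented Python entry point (MergeConfiguration.model_validate plus run_merge); the config path, output directory, and options are placeholders, so check them against your installed mergekit version:

import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Assumes the YAML above has been saved as config.yaml.
with open("config.yaml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    "./Llama-3.X-Workout-70B",   # output directory for the merged weights (placeholder)
    options=MergeOptions(
        cuda=True,               # run the merge on GPU if available
        copy_tokenizer=True,     # copy the base model's tokenizer into the output
        lazy_unpickle=True,      # reduce peak memory while loading shards
    ),
)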
