---
base_model:
  - nbeerbower/Llama-3.1-Nemotron-lorablated-70B
  - SicariusSicariiStuff/Negative_LLAMA_70B
  - TheDrummer/Anubis-70B-v1
  - EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
  - deepseek-ai/DeepSeek-R1-Distill-Llama-70B
  - Sao10K/L3.3-70B-Euryale-v2.3
library_name: transformers
tags:
  - mergekit
  - merge
model-index:
  - name: L3.3-Nevoria-R1-70b
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: IFEval (0-Shot)
          type: wis-k/instruction-following-eval
          split: train
          args:
            num_few_shot: 0
        metrics:
          - type: inst_level_strict_acc and prompt_level_strict_acc
            value: 60.24
            name: averaged accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Steelskull%2FL3.3-Nevoria-R1-70b
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: BBH (3-Shot)
          type: SaylorTwift/bbh
          split: test
          args:
            num_few_shot: 3
        metrics:
          - type: acc_norm
            value: 56.17
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Steelskull%2FL3.3-Nevoria-R1-70b
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MATH Lvl 5 (4-Shot)
          type: lighteval/MATH-Hard
          split: test
          args:
            num_few_shot: 4
        metrics:
          - type: exact_match
            value: 46.68
            name: exact match
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Steelskull%2FL3.3-Nevoria-R1-70b
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GPQA (0-shot)
          type: Idavidrein/gpqa
          split: train
          args:
            num_few_shot: 0
        metrics:
          - type: acc_norm
            value: 29.19
            name: acc_norm
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Steelskull%2FL3.3-Nevoria-R1-70b
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MuSR (0-shot)
          type: TAUR-Lab/MuSR
          args:
            num_few_shot: 0
        metrics:
          - type: acc_norm
            value: 20.19
            name: acc_norm
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Steelskull%2FL3.3-Nevoria-R1-70b
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU-PRO (5-shot)
          type: TIGER-Lab/MMLU-Pro
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 49.59
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Steelskull%2FL3.3-Nevoria-R1-70b
          name: Open LLM Leaderboard
---

L3.3-Nevoria-R1-70b

Model Information

L3.3 = Llama 3.3 | R1 = DeepSeek-R1 | 70B parameters

Model Composition

This model builds upon the original Nevoria foundation, incorporating the DeepSeek-R1 reasoning architecture to enhance dialogue interaction and scene comprehension. While maintaining Nevoria's core strengths in storytelling and scene description (derived from EVA, EURYALE, and Anubis), this iteration aims to improve prompt adherence and creative reasoning. The model also retains the balanced perspective introduced by the Negative_LLAMA and Nemotron components. In addition, it follows the character card almost to a fault: it picks up on minor details and runs with them, and users have reported it calling them out, in character, for misspelling a word.

Note: Nevoria-R1 represents a significant architectural change. Rather than a direct successor to Nevoria, it should be treated as a distinct model with its own characteristics.

The choice of the lorablated model as the merge base was intentional, creating unique weight interactions similar to those in the original Astoria and Astoria V2 models. This "weight twisting" effect, achieved by subtracting the lorablated base model during the merge, produces an interesting balance in the model's behavior. While unconventional compared to applying components sequentially, this approach was chosen for its distinctive response characteristics.
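
For reference, a mergekit recipe built around a lorablated base typically looks like the sketch below. This is an illustrative configuration only, not the released recipe: the merge method (della_linear), the weights, densities, and della parameters are assumptions, and only the model names come from this card's base_model list. In task-vector merge methods, each component's delta is computed relative to base_model, so placing the lorablated model there is what produces the subtraction described above.

```yaml
# Illustrative mergekit config (assumed values), not the actual Nevoria-R1 recipe.
# Deltas for each component are taken relative to the lorablated base, effectively
# subtracting its behavior from the merged result ("weight twisting").
merge_method: della_linear          # assumed; any task-vector method behaves similarly
base_model: nbeerbower/Llama-3.1-Nemotron-lorablated-70B
models:
  - model: EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
    parameters:
      weight: 0.20                  # weights and densities are placeholders
      density: 0.7
  - model: Sao10K/L3.3-70B-Euryale-v2.3
    parameters:
      weight: 0.20
      density: 0.7
  - model: TheDrummer/Anubis-70B-v1
    parameters:
      weight: 0.20
      density: 0.7
  - model: SicariusSicariiStuff/Negative_LLAMA_70B
    parameters:
      weight: 0.20
      density: 0.7
  - model: deepseek-ai/DeepSeek-R1-Distill-Llama-70B
    parameters:
      weight: 0.20
      density: 0.7
parameters:
  epsilon: 0.1                      # della pruning parameter (placeholder)
  lambda: 1.0                       # della scaling parameter (placeholder)
dtype: bfloat16
```

A config like this is typically run with mergekit's `mergekit-yaml` entry point (e.g. `mergekit-yaml config.yaml ./merged-model`).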

Open LLM-Benchmark Results:

Average Score: 43.68% (View Full Leaderboard →)
IFEval 60.24%
BBH 56.17%
MATH 46.68%
GPQA 29.19%
MUSR 20.19%
MMLU-Pro 49.59%

Recommended Templates & Prompts

Quantized Versions

GGUF Quantizations

EXL2 Quantizations

Support the Project:

Support on Ko-fi