---
library_name: transformers
tags:
  - mergekit
  - merge
base_model:
  - bunnycore/Phi-4-Model-Stock
  - bunnycore/Phi-4-rp-v1-lora
model-index:
  - name: Phi-4-Stock-RP
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: IFEval (0-Shot)
          type: HuggingFaceH4/ifeval
          args:
            num_few_shot: 0
        metrics:
          - type: inst_level_strict_acc and prompt_level_strict_acc
            value: 63.99
            name: strict accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=bunnycore/Phi-4-Stock-RP
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: BBH (3-Shot)
          type: BBH
          args:
            num_few_shot: 3
        metrics:
          - type: acc_norm
            value: 55.21
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=bunnycore/Phi-4-Stock-RP
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MATH Lvl 5 (4-Shot)
          type: hendrycks/competition_math
          args:
            num_few_shot: 4
        metrics:
          - type: exact_match
            value: 32.25
            name: exact match
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=bunnycore/Phi-4-Stock-RP
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GPQA (0-shot)
          type: Idavidrein/gpqa
          args:
            num_few_shot: 0
        metrics:
          - type: acc_norm
            value: 14.43
            name: acc_norm
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=bunnycore/Phi-4-Stock-RP
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MuSR (0-shot)
          type: TAUR-Lab/MuSR
          args:
            num_few_shot: 0
        metrics:
          - type: acc_norm
            value: 18.53
            name: acc_norm
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=bunnycore/Phi-4-Stock-RP
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU-PRO (5-shot)
          type: TIGER-Lab/MMLU-Pro
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 47.96
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=bunnycore/Phi-4-Stock-RP
          name: Open LLM Leaderboard
license: mit
---

Phi-4-Stock-RP is a Phi-4-based language model designed for reasoning and role-play scenarios. It merges several pre-existing high-quality models into a cohesive system that excels at reasoning, creative writing, narrative, and interactive text generation.

## Training Data

- **Sources:** Merged from pre-trained models selected for strong performance in text generation and understanding, then enhanced with a specialized LoRA trained on role-play dialogues, scenarios, and character interactions.

## Model Capabilities

- **Role-Playing:** Maintains coherent characters, plots, and dialogue over extended interactions.
- **Creative Writing:** Assists in crafting stories, dialogue, and character development with a focus on immersion and narrative coherence.
- **General Language Understanding:** Inherits broad text comprehension and generation from the base models, making it versatile for language tasks beyond role-play.
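A minimal usage sketch with `transformers` follows; the role-play prompt, sampling settings, and hardware assumptions are illustrative, not prescriptive:

```python
# Minimal sketch: load Phi-4-Stock-RP with transformers and generate one reply.
# The system prompt and generation parameters below are assumptions for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bunnycore/Phi-4-Stock-RP"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge dtype in the config below
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are Mira, a sardonic tavern keeper. Stay in character."},
    {"role": "user", "content": "A hooded stranger asks you about the abandoned silver mine."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.8,
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```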

## Merge Method

This model was merged using the passthrough merge method, with bunnycore/Phi-4-Model-Stock plus the bunnycore/Phi-4-rp-v1-lora LoRA adapter applied as the base.

## Models Merged

The following models were included in the merge:

- bunnycore/Phi-4-Model-Stock (base model)
- bunnycore/Phi-4-rp-v1-lora (LoRA adapter)

## Configuration

The following YAML configuration was used to produce this model:



```yaml
base_model: bunnycore/Phi-4-Model-Stock+bunnycore/Phi-4-rp-v1-lora
dtype: bfloat16
merge_method: passthrough
models:
  - model: bunnycore/Phi-4-Model-Stock+bunnycore/Phi-4-rp-v1-lora
tokenizer_source: unsloth/phi-4
```
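For reference, a merge like this should be reproducible via mergekit's Python API. The sketch below assumes `mergekit` is installed (`pip install mergekit`), the YAML above is saved as `config.yml`, and the output path is arbitrary; check your installed mergekit version for the exact option set:

```python
# Sketch: re-run the passthrough merge with mergekit's Python API (an assumption;
# the mergekit-yaml CLI is the more common entry point).
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./Phi-4-Stock-RP",          # hypothetical output directory
    options=MergeOptions(
        cuda=False,           # set True to run the merge on GPU
        copy_tokenizer=True,  # copy the tokenizer named in tokenizer_source
    ),
)
```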

## Open LLM Leaderboard Evaluation Results

Detailed results can be found [here](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=bunnycore/Phi-4-Stock-RP).

| Metric              | Value |
|---------------------|------:|
| Avg.                | 38.73 |
| IFEval (0-Shot)     | 63.99 |
| BBH (3-Shot)        | 55.21 |
| MATH Lvl 5 (4-Shot) | 32.25 |
| GPQA (0-shot)       | 14.43 |
| MuSR (0-shot)       | 18.53 |
| MMLU-PRO (5-shot)   | 47.96 |
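The Avg. row is simply the unweighted mean of the six benchmark scores above; a quick check:

```python
# Unweighted mean of the six leaderboard scores reported in the table.
scores = [63.99, 55.21, 32.25, 14.43, 18.53, 47.96]
print(round(sum(scores) / len(scores), 2))  # -> 38.73
```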