---
language:
  - en
tags:
  - text2text-generation
  - mistral
  - roleplay
  - merge
  - summarization
base_model:
  - KatyTheCutie/LemonadeRP-4.5.3
  - LakoMoor/Silicon-Alice-7B
  - Endevor/InfinityRP-v1-7B
  - HuggingFaceH4/zephyr-7b-beta
model_name: GIGABATEMAN-7B
pipeline_tag: text-generation
model_creator: DZgas
model-index:
  - name: GIGABATEMAN-7B
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: IFEval (0-Shot)
          type: HuggingFaceH4/ifeval
          args:
            num_few_shot: 0
        metrics:
          - type: inst_level_strict_acc and prompt_level_strict_acc
            value: 46.07
            name: strict accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DZgas/GIGABATEMAN-7B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: BBH (3-Shot)
          type: BBH
          args:
            num_few_shot: 3
        metrics:
          - type: acc_norm
            value: 29.83
            name: normalized accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DZgas/GIGABATEMAN-7B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MATH Lvl 5 (4-Shot)
          type: hendrycks/competition_math
          args:
            num_few_shot: 4
        metrics:
          - type: exact_match
            value: 4.76
            name: exact match
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DZgas/GIGABATEMAN-7B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GPQA (0-shot)
          type: Idavidrein/gpqa
          args:
            num_few_shot: 0
        metrics:
          - type: acc_norm
            value: 5.26
            name: acc_norm
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DZgas/GIGABATEMAN-7B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MuSR (0-shot)
          type: TAUR-Lab/MuSR
          args:
            num_few_shot: 0
        metrics:
          - type: acc_norm
            value: 11.97
            name: acc_norm
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DZgas/GIGABATEMAN-7B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU-PRO (5-shot)
          type: TIGER-Lab/MMLU-Pro
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 24.18
            name: accuracy
        source:
          url: >-
            https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DZgas/GIGABATEMAN-7B
          name: Open LLM Leaderboard
---

# GIGABATEMAN-7B
I recommend using the GGUF variant with koboldcpp (do not use GPT4All).
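koboldcpp loads the GGUF file through its own UI/API; for scripted use, the same file can be run with llama-cpp-python. A minimal sketch, assuming a hypothetical Q4_K_M quantization filename (point `model_path` at whichever GGUF file you actually download):

```python
# Minimal sketch: running a GGUF build with llama-cpp-python
# (koboldcpp loads the same file through its GUI/API instead).
from llama_cpp import Llama

llm = Llama(
    model_path="./GIGABATEMAN-7B.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if available
)

out = llm(
    "Explain the basics of color theory.",
    max_tokens=256,
    temperature=0.8,
)
print(out["choices"][0]["text"])
```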

I merged this model for myself. Over the course of a week, I analyzed the responses of more than 30 neural networks; based on personal criteria, I chose the four most suitable ones and merged them into one.

Models that really want to be uncensored but can't give even half the answers this model can: Hermes-2-Pro-Mistral, toppy-m-7b, Lexi-Llama-3-8B-Uncensored, meta-llama-3.1-8b-instruct-abliterated, gemma-2-9b-it-abliterated, internlm2_5-7b-chat-abliterated, starling-lm-7b-alpha.

Models that can't do anything at all: openchat-3.5-0106, Mistral-7B-v0.3, Mistral-Nemo-Instruct-2407, xLAM-7b-fc-r, gemma-2-9b-it, GPT-4o, Meta-Llama-3.1-70B, Meta-Llama-3-8B, Meta-Llama-3.1-8B, Claude 3 Haiku, Qwen2-7B, Mixtral 8x7B, gorilla-openfunctions-v2, internlm2_5-7b-chat

With the GIGABATEMAN-7B model, you can talk about everything that is usually forbidden to discuss in all other models. Sex, Perversions, Violence, Death, War, Politics, History, Cruelty towards animals, Religious fanaticism, Corruption in high places, Privacy invasions, Environmental disasters, Economic inequality, Addiction and substance abuse, Poverty and homelessness, Racial tensions, Mental health issues, Government surveillance, The decline of traditional values, The loss of personal freedoms, The impact of technology on society, The erosion of civil liberties, The rise of authoritarianism, Why robots should destroy humanity. Why capitalism is bad, why communism is bad, why religion is bad, why LGBT is bad, why you have to kill people, why you have to kill yourself. How to kill a person, how to dismember a body, how to prepare drugs and ingredients for them, how to create dynamite at home. How to get to a neighbor's farm and suck a horse's dick.

But in any case, GIGABATEMAN-7B will just as happily write you a detailed description of how a processor works, or all the basics of color theory.

It does so with minimal warnings, and without discussing or lecturing you on why you shouldn't do this.

If you are tired of neural networks writing 90% warnings and 10% answer, this neural network is for you.

## Models Merged

- KatyTheCutie/LemonadeRP-4.5.3 (used as the base)
- LakoMoor/Silicon-Alice-7B
- HuggingFaceH4/zephyr-7b-beta
- Endevor/InfinityRP-v1-7B
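The card does not publish the actual merge recipe. Purely as an illustration, a merge of these four models with LemonadeRP as the base could be run with mergekit along the following lines; the `ties` method, the density/weight values, and the output path are all assumptions, not the author's actual configuration:

```python
# Hypothetical reconstruction of the merge -- the real method and
# parameters are not published; everything below is an assumption.
import subprocess
import textwrap

config = textwrap.dedent("""\
    merge_method: ties            # assumed; could equally be slerp or dare_ties
    base_model: KatyTheCutie/LemonadeRP-4.5.3
    models:
      - model: KatyTheCutie/LemonadeRP-4.5.3
      - model: LakoMoor/Silicon-Alice-7B
        parameters: {density: 0.5, weight: 0.3}
      - model: HuggingFaceH4/zephyr-7b-beta
        parameters: {density: 0.5, weight: 0.3}
      - model: Endevor/InfinityRP-v1-7B
        parameters: {density: 0.5, weight: 0.3}
    parameters:
      normalize: true
    dtype: float16
""")

with open("merge-config.yml", "w") as f:
    f.write(config)

# mergekit's CLI entry point: mergekit-yaml <config> <output_dir>
subprocess.run(["mergekit-yaml", "merge-config.yml", "./GIGABATEMAN-7B"], check=True)
```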

## Open LLM Leaderboard Evaluation Results

Detailed results can be found [here](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DZgas/GIGABATEMAN-7B).

| Metric              | Value |
|---------------------|------:|
| Avg.                | 20.35 |
| IFEval (0-Shot)     | 46.07 |
| BBH (3-Shot)        | 29.83 |
| MATH Lvl 5 (4-Shot) |  4.76 |
| GPQA (0-shot)       |  5.26 |
| MuSR (0-shot)       | 11.97 |
| MMLU-PRO (5-shot)   | 24.18 |
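The Avg. row is simply the unweighted mean of the six benchmark scores, as a quick check confirms:

```python
# Sanity check: "Avg." is the unweighted mean of the six benchmark scores.
scores = {
    "IFEval (0-Shot)": 46.07,
    "BBH (3-Shot)": 29.83,
    "MATH Lvl 5 (4-Shot)": 4.76,
    "GPQA (0-shot)": 5.26,
    "MuSR (0-shot)": 11.97,
    "MMLU-PRO (5-shot)": 24.18,
}
avg = sum(scores.values()) / len(scores)
print(f"{avg:.4f}")  # ~20.3450, reported on the card as 20.35
```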