---
language:
- en
license: apache-2.0
tags:
- conversational
datasets:
- Intel/orca_dpo_pairs
- Locutusque/Hercules-v3.0
inference:
  parameters:
    do_sample: true
    temperature: 0.8
    top_p: 0.95
    top_k: 40
    min_new_tokens: 2
    max_new_tokens: 250
    repetition_penalty: 1.1
widget:
- text: Hello who are you?
  example_title: Identity
- text: What can you do?
  example_title: Capabilities
- text: Create a fastapi endpoint to retrieve the weather given a zip code.
  example_title: Coding
model-index:
- name: NeuralReyna-Mini-1.8B-v0.2
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 37.8
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=M4-ai/NeuralReyna-Mini-1.8B-v0.2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 60.51
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=M4-ai/NeuralReyna-Mini-1.8B-v0.2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 45.04
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=M4-ai/NeuralReyna-Mini-1.8B-v0.2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 37.75
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=M4-ai/NeuralReyna-Mini-1.8B-v0.2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 60.93
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=M4-ai/NeuralReyna-Mini-1.8B-v0.2
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 27.07
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=M4-ai/NeuralReyna-Mini-1.8B-v0.2
      name: Open LLM Leaderboard
---
# NeuralReyna-Mini-1.8B-v0.2
![Reyna image](https://th.bing.com/th/id/OIG3.8IBxuT77hh6Y_r1DZ6WK?dpr=2.6&pid=ImgDetMain)
# Description
NeuralReyna-Mini-1.8B-v0.2 takes aloobun/Reyna-Mini-1.8B-v0.2 and further fine-tunes it with DPO (Direct Preference Optimization) on the Intel/orca_dpo_pairs dataset.
The model has capabilities in coding, math, science, roleplay, and function calling.
It was trained on OpenAI's ChatML prompt format.
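Below is a minimal inference sketch, not an official usage snippet. It assumes the repo id `M4-ai/NeuralReyna-Mini-1.8B-v0.2` (as implied by the leaderboard links), that the tokenizer ships a ChatML chat template as described above, and a standard `transformers` + `accelerate` install; adjust dtype and device placement to your hardware. The sampling parameters mirror the inference settings in the metadata above.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "M4-ai/NeuralReyna-Mini-1.8B-v0.2"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# ChatML-style conversation; apply_chat_template renders the
# <|im_start|>role ... <|im_end|> format the model was trained on.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Create a fastapi endpoint to retrieve the weather given a zip code."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Sampling parameters taken from the inference settings in the metadata above.
output_ids = model.generate(
    input_ids,
    do_sample=True,
    temperature=0.8,
    top_p=0.95,
    top_k=40,
    min_new_tokens=2,
    max_new_tokens=250,
    repetition_penalty=1.1,
)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```
The DPO step described above could look roughly like the sketch below. This is not the actual training script: it uses `trl`'s `DPOTrainer` (constructor arguments as in trl ~0.7; newer versions use `DPOConfig`), and the hyperparameters, column mapping, and output path are placeholders.
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base_id = "aloobun/Reyna-Mini-1.8B-v0.2"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

# Intel/orca_dpo_pairs provides system/question/chosen/rejected columns;
# DPOTrainer expects prompt/chosen/rejected.
raw = load_dataset("Intel/orca_dpo_pairs", split="train")
dataset = raw.map(
    lambda ex: {"prompt": ex["question"], "chosen": ex["chosen"], "rejected": ex["rejected"]},
    remove_columns=raw.column_names,
)

trainer = DPOTrainer(
    model,
    ref_model=None,                      # trl builds a frozen reference copy when None
    args=TrainingArguments(
        output_dir="neuralreyna-dpo",    # placeholder output path
        per_device_train_batch_size=1,   # placeholder hyperparameters
        gradient_accumulation_steps=8,
        learning_rate=5e-6,
        num_train_epochs=1,
        remove_unused_columns=False,
    ),
    beta=0.1,                            # DPO temperature (placeholder)
    train_dataset=dataset,
    tokenizer=tokenizer,
    max_length=1024,
    max_prompt_length=512,
)
trainer.train()
```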
# Evaluation
AGIEval:
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6437292ecd93f4c9a34b0d47/eQXYDTQYMaVii-_U0vewF.png)
GPT4ALL:
| Tasks |Version|Filter|n-shot| Metric |Value | |Stderr|
|-------------|------:|------|-----:|--------|-----:|---|-----:|
|arc_challenge| 1|none | 0|acc |0.3208|± |0.0136|
| | |none | 0|acc_norm|0.3336|± |0.0138|
|arc_easy | 1|none | 0|acc |0.6035|± |0.0100|
| | |none | 0|acc_norm|0.5833|± |0.0101|
|boolq | 2|none | 0|acc |0.6526|± |0.0083|
|hellaswag | 1|none | 0|acc |0.4556|± |0.0050|
| | |none | 0|acc_norm|0.6076|± |0.0049|
|openbookqa | 1|none | 0|acc |0.2600|± |0.0196|
| | |none | 0|acc_norm|0.3460|± |0.0213|
|piqa | 1|none | 0|acc |0.7236|± |0.0104|
| | |none | 0|acc_norm|0.7307|± |0.0104|
|winogrande | 1|none | 0|acc |0.6062|± |0.0137|
# Disclaimer
This model may have overfitted to the DPO training data and may not perform well in some cases.
# Contributions
Thanks to @aloobun and @Locutusque for their contributions to this model.
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_M4-ai__NeuralReyna-Mini-1.8B-v0.2).
| Metric |Value|
|---------------------------------|----:|
|Avg. |44.85|
|AI2 Reasoning Challenge (25-Shot)|37.80|
|HellaSwag (10-Shot) |60.51|
|MMLU (5-Shot) |45.04|
|TruthfulQA (0-shot) |37.75|
|Winogrande (5-shot) |60.93|
|GSM8k (5-shot) |27.07|