---
license: apache-2.0
tags:
- OpenAccess AI Collective
- MPT
- axolotl
datasets:
- ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered
- QingyiSi/Alpaca-CoT
- teknium/GPTeacher-General-Instruct
- metaeval/ScienceQA_text_only
- hellaswag
- openai/summarize_from_feedback
- riddle_sense
- gsm8k
- camel-ai/math
- camel-ai/biology
- camel-ai/physics
- camel-ai/chemistry
- winglian/evals
inference: false
---
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
**[💵 Donate to OpenAccess AI Collective](https://github.com/sponsors/OpenAccess-AI-Collective) to help us keep building great tools and models!**
# Minotaur 7B
Minotaur 7B is an instruct fine-tuned model on top of LLaMA-7B. Minotaur 7B is fine-tuned **on only completely open datasets**, making this model reproducible by anyone.
Questions, comments, feedback, looking to donate, or want to help? Reach out on our [Discord](https://discord.gg/PugNNHAF5r) or email [[email protected]](mailto:[email protected])
# Prompts
Minotaur uses chat-style prompts only, with `USER:` and `ASSISTANT:` turns.
<img src="https://huggingface.co/openaccess-ai-collective/minotaur-7b/resolve/main/minotaur.png" alt="minotaur" width="600" height="500"/>
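Below is a minimal inference sketch using the 🤗 Transformers `generate` API. The system line matches the examples further down; the sampling settings (`temperature`, `max_new_tokens`) are illustrative assumptions, not the settings used to produce those examples.
```
# Minimal sketch: chat-style prompting with USER:/ASSISTANT: turns.
# Sampling settings here are assumptions; tune for your use case.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openaccess-ai-collective/minotaur-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = (
    "The following is a chat between a USER and a friendly and helpful ASSISTANT.\n"
    "USER: Solve for x: 5x = 3x + 10\n"
    "ASSISTANT:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, temperature=0.7, do_sample=True)
# Print only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```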
# Training Datasets
The Minotaur 7B model is fine-tuned on the following openly available datasets:
- [WizardLM](https://huggingface.co/datasets/ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered)
- [subset of QingyiSi/Alpaca-CoT for roleplay and CoT](https://huggingface.co/datasets/QingyiSi/Alpaca-CoT)
- [GPTeacher-General-Instruct](https://huggingface.co/datasets/teknium/GPTeacher-General-Instruct)
- [metaeval/ScienceQA_text_only](https://huggingface.co/datasets/metaeval/ScienceQA_text_only) - instruct for concise responses
- [openai/summarize_from_feedback](https://huggingface.co/datasets/openai/summarize_from_feedback) - instruct augmented tl;dr summarization
- [camel-ai/math](https://huggingface.co/datasets/camel-ai/math)
- [camel-ai/physics](https://huggingface.co/datasets/camel-ai/physics)
- [camel-ai/chemistry](https://huggingface.co/datasets/camel-ai/chemistry)
- [camel-ai/biology](https://huggingface.co/datasets/camel-ai/biology)
- [winglian/evals](https://huggingface.co/datasets/winglian/evals) - instruct augmented datasets
  - custom synthetic datasets around misconceptions, in-context QA, jokes, N-tasks problems, and context-insensitivity
  - ARC-Easy & ARC-Challenge - instruct augmented for detailed responses, derived from the `train` split
  - [hellaswag](https://huggingface.co/datasets/hellaswag) - instruct augmented for detailed explanations, 30K+ rows derived from the `train` split
  - [riddle_sense](https://huggingface.co/datasets/riddle_sense) - instruct augmented, derived from the `train` split
  - [gsm8k](https://huggingface.co/datasets/gsm8k) - instruct augmented, derived from the `train` split
  - prose generation
# Shoutouts
Special thanks to Nanobit for helping with Axolotl and to TheBloke for quantizing these models so they are more accessible to all.
# Demo
An HF Spaces demo is available in the [Community ChatBot Arena](https://huggingface.co/spaces/openaccess-ai-collective/rlhf-arena) under the OAAIC Chatbots tab.
## Release Notes
- https://wandb.ai/wing-lian/minotaur-13b/runs/ucgd0x6n
## Build
Minotaur was built with [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl) on 8x RTX A6000s
- 1 epoch, taking approximately 1.5 hours
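As an illustration only (the exact training configuration for this run is not included in this card), an Axolotl fine-tune from this era was typically launched from a YAML config along these lines; `minotaur-7b.yml` is a hypothetical filename:
```
# Illustrative launch command; the real config file is not part of this card.
accelerate launch scripts/finetune.py minotaur-7b.yml
```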
## Bias, Risks, and Limitations
Minotaur has not been aligned to human preferences with techniques like RLHF or deployed with in-the-loop filtering of responses like ChatGPT, so the model can produce problematic outputs (especially when prompted to do so).
Minotaur was fine-tuned from the base model LLaMA 7B; please refer to its model card's Limitations section for relevant information.
## Benchmarks
hf-causal-experimental (pretrained=openaccess-ai-collective/minotaur-7b), limit: None, provide_description: False, num_fewshot: 0, batch_size: None
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.4078|± |0.0144|
| | |acc_norm|0.4292|± |0.0145|
|arc_easy | 0|acc |0.7424|± |0.0090|
| | |acc_norm|0.6877|± |0.0095|
|boolq | 1|acc |0.7679|± |0.0074|
|hellaswag | 0|acc |0.5636|± |0.0049|
| | |acc_norm|0.7377|± |0.0044|
|openbookqa | 0|acc |0.3540|± |0.0214|
| | |acc_norm|0.4420|± |0.0222|
|piqa | 0|acc |0.7878|± |0.0095|
| | |acc_norm|0.7851|± |0.0096|
|winogrande | 0|acc |0.7072|± |0.0128|
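The table can be regenerated with the EleutherAI lm-evaluation-harness. The sketch below assumes the harness's Python API from around this release (`evaluator.simple_evaluate`); argument names and registered model types may differ in other versions.
```
# Hedged reproduction sketch for the benchmark table above
# (lm-evaluation-harness ~v0.3; API details vary by version).
from lm_eval import evaluator

results = evaluator.simple_evaluate(
    model="hf-causal-experimental",
    model_args="pretrained=openaccess-ai-collective/minotaur-7b",
    tasks=["arc_challenge", "arc_easy", "boolq", "hellaswag",
           "openbookqa", "piqa", "winogrande"],
    num_fewshot=0,
)
print(results["results"])
```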
## Examples - results may vary based on temperature and other settings
```
The following is a chat between a USER and a friendly and helpful ASSISTANT.
USER: You have 5 apples. You eat 3 pears. How many apples do you have left?
ASSISTANT:You still have 5 apples because eating pears does not affect the number of apples you have.
```
```
The following is a chat between a USER and a friendly and helpful ASSISTANT.
USER: List three words that start with "ex". What is 33 + 77?
ASSISTANT:Three words starting with the letter 'e' are excited, eatable, and expensive. 33+77 equals 106.
```
```
The following is a chat between a USER and a friendly and helpful ASSISTANT.
USER: Solve for x: 5x = 3x + 10
ASSISTANT:To solve the equation 5x = 3x + 10, we can follow these steps:
Step 1: Add both sides of the equation to get a common denominator.
(1/5)x = 1 (common)
Subtract the term with "x" on both sides.
-2/5 x = 10
Step 2: To solve the resulting equation, isolate the variable "x".
First, multiply all terms by 5 to eliminate the fractions.
5x = 15
Now, divide the entire right side by 5.
x = 15/5
Finally, simplify to get the solution in its simplest form.
x = 1
So, the solution is x = 1.
```
Yup, 7B still sucks at math (for the record: 5x = 3x + 10 gives 2x = 10, so x = 5).