---
license: apache-2.0
datasets:
- fblgit/tree-of-knowledge
- Open-Orca/SlimOrca-Dedup
- HuggingFaceH4/ultrafeedback_binarized
library_name: transformers
tags:
- juanako
- UNA
- cybertron
- fbl
---

# Model Card for una-cybertron-7b-v2-bf16 (UNA: Uniform Neural Alignment)

We strike back, introducing **Cybertron 7B v2**, a 7B MistralAI-based model, the best in its series. It was trained with SFT, DPO, and UNA (Uniform Neural Alignment) on multiple datasets.
It scores at least **64.60**+ on the HF Leaderboard; we'll update with the final results soon. We also have a few surprises in the oven for Christmas, so subscribe.

* v1 scored **#1** on 2 December 2023 with 64.60
* v2 score: **?** (pending)

| Model | Average | ARC (25-s) | HellaSwag (10-s) | MMLU (5-s) | TruthfulQA (MC) (0-s) | Winogrande (5-s) | GSM8K (5-s) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) | 60.97 | 59.98 | 83.31 | 64.16 | 42.15 | 78.37 | 37.83 |
| [perlthoughts/Chupacabra-7B-v2](https://huggingface.co/perlthoughts/Chupacabra-7B-v2) | 63.54 | 66.47 | 85.17 | 64.49 | 57.6 | 79.16 | 28.35 |
| [fblgit/una-cybertron-7b-v1-fp16](https://huggingface.co/fblgit/una-cybertron-7b-v1-fp16) | **64.60** | **68.17** | 85.14 | 62.07 | **63.98** | **80.9** | 27.34 |
| [fblgit/una-cybertron-7b-v2-bf16](https://huggingface.co/fblgit/una-cybertron-7b-v2-bf16) | **6?.?0** | **68.17** | 85.?4 | 62.07 | **6?.98** | **80.9** | ?0.34 |

The model excels at mathematics, logic, and reasoning; overall, it is very smart.

## Model Details

Trained with the UNA (Uniform Neural Alignment) technique (paper coming soon).

### Model Description

- **Developed by:** [juanako.ai](https://juanako.ai)
- **Author:** [Xavier M.]([email protected])
- **Model type:** MistralAI 7B
- **Funded by:** Cybertron's H100s

### Prompt

The model works well with almost any prompt, but the ChatML format and Alpaca system prompts get the best results (a short inference sketch follows the format examples below):
```
<|im_start|>system
- You are a helpful assistant chatbot trained by MosaicML.
- You answer questions.
- You are excited to be able to help the user, but will refuse to do anything that could be considered harmful to the user.
- You are more than just an information source, you are also able to write poetry, short stories, and make jokes.<|im_end|>
<|im_start|>user
Explain QKV<|im_end|>
<|im_start|>assistant
```

```
### Assistant: I am StableVicuna, a large language model created by CarperAI. I am here to chat!

### Human: Explain QKV
### Assistant:
```

```
[Round <|round|>]
问:Explain QKV
答:
```

```
[Round <|round|>]
Question:Explain QKV
Answer:
```

```
Question:Explain QKV
Answer:
```

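As a quick start, here is a minimal inference sketch for the ChatML format above, using the `transformers` library. The system message, prompt text, and generation parameters are illustrative assumptions, not settings prescribed by this card.

```python
# Minimal inference sketch (the prompt content and sampling settings
# below are illustrative assumptions; device_map="auto" needs accelerate).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "fblgit/una-cybertron-7b-v2-bf16"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Build a ChatML prompt matching the first format example above.
prompt = (
    "<|im_start|>system\n"
    "- You are a helpful assistant chatbot.<|im_end|>\n"
    "<|im_start|>user\n"
    "Explain QKV<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```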
## Evaluation (UNA-Cybertron-7B-v1-fp16)

```
|    Tasks     |Version|Shots | Metric |Value     |   |Stderr|
|--------------|-------|------|--------|---------:|---|-----:|
|arc_challenge |       |  25  |acc_norm|0.6817    |±  |0.0136|
|truthfulqa_mc2|       |   0  |acc     |0.6398    |±  |0.0151|
|hellaswag     |       |  10  |acc_norm|0.8492    |±  |0.0036|
|winogrande    |       |   0  |acc     |0.809     |±  |0.011 |
|gsm8k         |       |   5  |acc     |0.2733    |±  |0.0137|
|mmlu          |       |   5  |acc     |0.6207    |±  |0.1230|
|              |average|      |acc     |**0.6456**|   |      |

|      Groups      |Version|Filter|n-shot|Metric|Value |   |Stderr|
|------------------|-------|------|-----:|------|-----:|---|-----:|
|mmlu              |N/A    |none  |     0|acc   |0.6207|±  |0.1230|
| - humanities     |N/A    |none  |     5|acc   |0.5675|±  |0.1125|
| - other          |N/A    |none  |     5|acc   |0.6933|±  |0.1108|
| - social_sciences|N/A    |none  |     5|acc   |0.7270|±  |0.0666|
| - stem           |N/A    |none  |     5|acc   |0.5249|±  |0.1311|
```

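The tables above follow the output format of EleutherAI's lm-evaluation-harness. Below is a hedged reproduction sketch using the harness's Python API; the harness version used for this card is not stated, so the exact entry point, task names, and result keys are assumptions to verify against your installed version.

```python
# Hedged sketch: reproducing one row of the table with EleutherAI's
# lm-evaluation-harness (pip install lm-eval). Treat the API surface and
# task names as assumptions; they may differ across harness versions.
import json

import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=fblgit/una-cybertron-7b-v2-bf16,dtype=bfloat16",
    tasks=["arc_challenge"],  # 25-shot ARC row from the table above
    num_fewshot=25,
)
print(json.dumps(results["results"], indent=2, default=str))
```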
### Framework versions

- Transformers 4.35.0-UNA
- Pytorch 2.1.0
- Datasets 2.14.6
- Tokenizers 0.14.1

### Citations

If you find Cybertron, Juanako, or any of our models useful, especially if you use them for your big brand, please cite:
```
@misc{unacybertron7a,
  title={Cybertron: Uniform Neural Alignment},
  author={Xavier Murias},
  year={2023},
  publisher={HuggingFace},
  journal={HuggingFace repository},
  howpublished={\url{https://huggingface.co/fblgit/una-cybertron-7b-v1}},
}
```