Commit by matchaaaaa: Update README.md

tags:
- merge
---

# MN-Tiramisu-12B

This is a really yappity-yappy yapping model that's good for long-form RP. Tried to rein it in with Mahou and give it some more character understanding with Pantheon. Feedback is always welcome.

## Merge Details

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

### Merge Method

This model was merged using the linear [DARE](https://arxiv.org/abs/2311.03099) merge method, with flammenai/Mahou-1.3-mistral-nemo-12B as the base.

The following models were included in the merge:

* nbeerbower/mistral-nemo-gutenberg-12B-v4
* Sao10K/MN-12B-Lyra-v1
* Gryphe/Pantheon-RP-1.5-12b-Nemo
* flammenai/Mahou-1.3-mistral-nemo-12B
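The linear DARE merge above can be sketched in a few lines. This is a toy illustration with flat lists standing in for weight tensors; `dare_linear_merge` is a hypothetical helper written for this sketch, not mergekit's actual implementation:

```python
import random

def dare_linear_merge(base, finetuned, weights, drop_p=0.9, seed=0):
    """Toy DARE linear merge: for each fine-tuned model, take its delta from
    the base, randomly Drop a fraction drop_p of the delta's entries And
    REscale the survivors by 1 / (1 - drop_p), then add the weighted,
    sparsified deltas onto the base parameters."""
    rng = random.Random(seed)
    merged = list(base)
    for params, w in zip(finetuned, weights):
        for i, (b, p) in enumerate(zip(base, params)):
            if rng.random() < drop_p:
                continue  # this delta entry is dropped
            merged[i] += w * (p - b) / (1.0 - drop_p)  # rescaled survivor
    return merged

# With drop_p=0.0 nothing is dropped, so this reduces to a plain linear merge:
print(dare_linear_merge([1.0, 2.0], [[2.0, 2.0]], [0.5], drop_p=0.0))
# → [1.5, 2.0]
```

In practice this runs per weight tensor over the deltas (task vectors), not raw weights; the DARE paper's observation is that even very high drop rates can preserve each fine-tune's effect while reducing interference between the merged models.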

### Configuration

```yaml
slices:
  ...
  - value: [0.2, 0.15, 0.2, 0.3, 0.4]
tokenizer_source: union
```
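`tokenizer_source: union` tells mergekit to build the output tokenizer from the union of the source models' vocabularies. A toy sketch of the idea (`union_vocab` is a made-up helper for illustration, not mergekit's code):

```python
def union_vocab(vocabs):
    """Union of several tokenizer vocabularies: every token that appears in
    any source vocab gets an id, assigned in first-seen order, so tokens
    shared across models are not duplicated."""
    merged = {}
    for vocab in vocabs:
        for token in vocab:
            if token not in merged:
                merged[token] = len(merged)
    return merged

print(union_vocab([["<s>", "hello"], ["<s>", "hola"]]))
# → {'<s>': 0, 'hello': 1, 'hola': 2}
```

In practice each model's embedding rows then have to be remapped onto this merged vocabulary, which is why a `union` tokenizer can change the output model's embedding size.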

And as always, have a great day!