Update README.md
BRAINSTORM process was developed by David_AU.
<B>What is Brainstorm?</b>

The reasoning center of an LLM is taken apart, reassembled, expanded, and multiplied by 5x or 10x respectively.
Then these centers are individually calibrated. These "centers" also interact with each other, which introduces
subtle changes into the process. The calibrations further adjust - dialing this process up or down. The
number of centers (5x, 10x) allows even more tuning points to further customize how the model reasons, so to speak.
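
Brainstorm's exact procedure has not been published, so the following is only a loose, hypothetical sketch of the general idea above: a slice of a model's layer stack is duplicated several times, and each copy gets its own calibration knob so the copies can be tuned individually. The names `Layer` and `expand_layers` are invented for this illustration.

```python
# Illustrative sketch only - NOT the actual Brainstorm method.
# A chosen slice of a toy "layer stack" is duplicated N times, and each
# copy receives its own scale factor standing in for per-copy calibration.

from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Layer:
    index: int      # position in the original stack
    scale: float    # per-copy calibration knob (toy stand-in)

def expand_layers(stack, start, stop, copies):
    """Duplicate stack[start:stop] `copies` times, giving each copy
    a slightly different scale so the copies are individually tunable."""
    head, block, tail = stack[:start], stack[start:stop], stack[stop:]
    expanded = []
    for c in range(copies):
        # each pass through the duplicated block gets its own factor
        factor = 1.0 - 0.02 * c
        expanded.extend(replace(layer, scale=factor) for layer in block)
    return head + expanded + tail

# A 12-layer toy stack; multiply the last 4 layers by 5 ("5x" in the text).
stack = [Layer(i, 1.0) for i in range(12)]
bigger = expand_layers(stack, 8, 12, copies=5)
print(len(stack), len(bigger))  # prints "12 28"
```

A real implementation would operate on actual transformer weights rather than this bookkeeping toy, but the shape of the operation - duplicate a block, then tune each copy - is the same.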

It is not, in my opinion, 5x or 10x "smarter" - if only that were true!

From testing, it seems to ponder and consider things more carefully, roughly speaking.

The process to modify the model occurs at the root level - the source-files level. The model can then be quantized as a GGUF, EXL2, AWQ, etc.
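
Those quantization formats share a common idea: weights are stored in small blocks of low-bit integers plus a shared per-block scale. Here is a toy sketch of that general principle - it is not the actual GGUF, EXL2, or AWQ layout, and the function names are invented for this illustration.

```python
# Toy sketch of block quantization - an illustration of the general idea
# behind low-bit formats, not any real file layout.

def quantize_block(weights, bits=4):
    """Quantize a block of floats to signed ints with one shared scale."""
    qmax = 2 ** (bits - 1) - 1                      # e.g. 7 for 4-bit signed
    scale = max(abs(w) for w in weights) / qmax or 1.0
    q = [round(w / scale) for w in weights]         # small ints, stored cheaply
    return scale, q

def dequantize_block(scale, q):
    """Recover approximate floats from the scale and the small ints."""
    return [scale * v for v in q]

block = [0.12, -0.53, 0.31, 0.07]
scale, q = quantize_block(block)
approx = dequantize_block(scale, q)
err = max(abs(a - b) for a, b in zip(block, approx))
```

Because Brainstorm's changes are made to the source weights, any such quantization applied afterward simply carries the modified model along.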

The core aim of this process is to increase the model's detail, concept of "world", prose quality, and prose length without affecting
instruction following. This will also affect any creative usage of any kind, including "brainstorming" and similar use cases.

You could say this process sharpens the model's focus on its task(s) at a deeper level.

There are examples below showing "regular", "5x", and "10x" output.

This tech has been tested on multiple Llama2, Llama3, and Mistral models.

More testing and development is underway, and there are additional sizes other than "5x" and "10x".

<B>NOTES:</B>