Text Generation
Transformers
Safetensors
mistral
text-generation-inference
unsloth
Mistral_Star
Mistral_Quiet
Mistral
Mixtral
Question-Answer
Token-Classification
Sequence-Classification
SpydazWeb-AI
chemistry
biology
legal
code
climate
medical
LCARS_AI_StarTrek_Computer
chain-of-thought
tree-of-knowledge
forest-of-thoughts
visual-spacial-sketchpad
alpha-mind
knowledge-graph
entity-detection
encyclopedia
wikipedia
stack-exchange
Reddit
Cyber-series
MegaMind
Cybertron
SpydazWeb
Spydaz
LCARS
star-trek
mega-transformers
Mulit-Mega-Merge
Multi-Lingual
Afro-Centric
African-Model
Ancient-One
Inference Endpoints
Update README.md
README.md CHANGED
@@ -99,6 +99,15 @@ language:
- bm
- su
---

## LeroyDyer/SpydazWeb_AI_HumanAI_011_INSTRUCT

Based on the instruct prompt, this model has been trained without a prompt: simply on the Alpaca dataset. This is part of the realignment program. The datasets were all installed using in-depth and very heavy prompting techniques, utilising various styles of prompts, but this does not always allow simple prompting to retrieve the known information, so we realign the model to the same datasets using an empty prompt.
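
As a rough sketch of what "empty prompt" realignment could look like, the snippet below formats Alpaca-style records with no system prompt or role markers at all, just the instruction, any optional input, and the response. The dataset id (`tatsu-lab/alpaca`) and column names are assumptions based on the standard Alpaca layout, not the exact pipeline used for this model.

```python
# Minimal sketch: format Alpaca-style records with an *empty* prompt,
# so the model is realigned on bare instruction -> response pairs.
# Dataset id and column names ("instruction", "input", "output") are
# assumptions following the common Alpaca layout.
from datasets import load_dataset

def format_empty_prompt(example):
    # No system prompt, no role markers: just the task text and the answer.
    instruction = example["instruction"]
    context = example.get("input", "") or ""
    question = f"{instruction}\n{context}".strip()
    return {"text": f"{question}\n{example['output']}"}

dataset = load_dataset("tatsu-lab/alpaca", split="train")
dataset = dataset.map(format_empty_prompt, remove_columns=dataset.column_names)

print(dataset[0]["text"][:200])
```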

We check the data until the loss indeed fits under 0.6. We would like to get it even lower, but as the model has been trained on this data before it is not required: there may even be many prompts and prompt setups which will reveal the same information.
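
The 0.6 figure reads as a target training loss. Purely as an illustration (not the actual setup used here), a small `TrainerCallback` could stop training once the logged loss drops under that threshold:

```python
# Illustrative sketch: halt training once the running loss drops below 0.6.
# This is an assumption about how the 0.6 target might be enforced, not the
# exact procedure used for this model.
from transformers import TrainerCallback

class StopBelowLoss(TrainerCallback):
    def __init__(self, target_loss: float = 0.6):
        self.target_loss = target_loss

    def on_log(self, args, state, control, logs=None, **kwargs):
        # Stop as soon as the reported training loss crosses the target.
        if logs and logs.get("loss", float("inf")) < self.target_loss:
            control.should_training_stop = True
        return control

# Usage with an existing Trainer instance:
# trainer.add_callback(StopBelowLoss(0.6))
# trainer.train()
```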

I have found that it has not performed well in benchmarking, despite being heavily trained. This may be due to the prompt types used by the benchmarkers being very basic and generic, and even opposed to the model's personality and role-based replies, as well as its internal agent generation and reasoning processes.

So, to make the model a little more reactive to generic prompts, we realign it with its open prompt, hopefully regeneralising the model tensors. The statistics in the tensors cannot be removed, so we need to understand that the heavy prompting served the purpose of reassociation, input = output: the embeddings and attention are often based on the search query and not the potential output.

So we have embedded knowledge at various levels: from 1B params, to less, down to lm_head-only training. This spreads the knowledge across the stack of tensors, which is what we are managing in the end.
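
One way to read "lm_head-only training" is freezing the whole stack and leaving only the output head trainable. The sketch below shows that pattern for a Transformers causal LM; the model id is a placeholder and the exact layer selection used for this series is not documented here.

```python
# Minimal sketch of "lm_head only" training: freeze every parameter except
# the output head. The model id below is a placeholder assumption.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")

for name, param in model.named_parameters():
    # Only the language-modelling head stays trainable; the rest is frozen.
    param.requires_grad = name.startswith("lm_head")

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable params: {trainable} / {total}")
```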

Hopefully we have simplified this model!
# "Success comes from defining each task in achievable steps. Every completed step is a success that brings you closer to your goal. If your steps are unreachable, failure is inevitable. Winners create more winners, while losers do the opposite. Success is a game of winners!"