---
license: apache-2.0
license_name: a
license_link: LICENSE
datasets:
- m-a-p/Code-Feedback
- HuggingFaceTB/cosmopedia-100k
- LDJnr/Capybara
- vicgalle/alpaca-gpt4
- glaiveai/glaive-code-assistant-v2
- WhiteRabbitNeo/WRN-Chapter-1
- WhiteRabbitNeo/WRN-Chapter-2
- m-a-p/CodeFeedback-Filtered-Instruction
- HuggingFaceH4/OpenHermes-2.5-1k-longest
- jondurbin/airoboros-3.2
- euclaise/WritingPrompts_curated
- derek-thomas/squad-v1.1-t5-question-generation
- reinforz/question_generation_data
- teknium/GPTeacher-General-Instruct
- dim/roleplay_instruct_v2_final
- TIGER-Lab/MathInstruct
- abacusai/SystemChat
language:
- en
library_name: transformers
tags:
- code
---
# Model Card for NinjaMouse-32l-danube

<!-- Provide a quick summary of what the model is/does. -->

A lanky version of [h2o-danube](https://huggingface.co/h2oai/h2o-danube-1.8b-chat)'s tiny language model, stretched from 24 layers to 32. I have done this in steps, adding 2 new layers per step and training them on different datasets.

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

Among the datasets I used to train this model were WhiteRabbitNeo [chapter 1](https://huggingface.co/datasets/WhiteRabbitNeo/WRN-Chapter-1) and [chapter 2](https://huggingface.co/datasets/WhiteRabbitNeo/WRN-Chapter-2), which means I agree to their extended Apache 2.0 license. If you use this model, or a derivative of it, you should read their terms; they are quite reasonable and could also be called the "Don't be a dick" clause (see the Out-of-Scope Use section).

With the important things covered, let us get to the model.

I wanted a model that could construct Stable Diffusion prompts without the "trending on artstation, 8k uhd, dramatic lighting, detailed, masterpiece" spam. My solution got a little out of hand when trying out *deep* block expansion and QLoRA on the TinyLlama model, which failed but led to this. It has a natty 16k context window, can be trained using Unsloth, and seems to be a lot more coherent than both TinyLlama and Phi-2.

My thoughts going into this were "If I use WRN in the training I get to call it something related to The Matrix" and "These Stable Diffusion prompt datasets need Geepus."

- **Developed by:** Trolle Karlsson
- **Model type:** Mistral
- **Language(s) (NLP):** English
- **License:** Apache-2.0 + WhiteRabbitNeo Extended Version
- **Finetuned from model:** [h2o-danube](https://huggingface.co/h2oai/h2o-danube-1.8b-chat)

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

Imagine having a model going through an entire book, page by page, creating SDXL prompts for the highlights. I want that! I would think that such a task would require some solid training data, which I do not have. What I do have is my own set of about 700 instructions, ranging from "write an SD(XL) prompt where something, something, something dark side" through "Convert this image prompt from SD to SDXL" to "Inspiration: crocs."

The small size of the model, the diverse open datasets used in training, and the large context window could make it great for RAG applications, but coding with feedback also went into at least 2 of the layers. It favours Python though.

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

Here is what it can do with Stable Diffusion text prompts:

- Make SD image prompts by asking it nicely
- Transform those from SD to SDXL and back
- Improve prompts by removing legacy tags
- Inspire from only a single word
- TODO: Story/Lyric to image prompt
- TODO: Reverse image prompt (for further dataset development reasons)
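
A minimal sketch of what asking nicely can look like, using the `transformers` chat pipeline. Assumptions: the repo id `trollek/NinjaMouse-32l-danube` is my guess from the card title, the model keeps the base model's chat template, and you have a recent `transformers` with chat-aware pipelines; adjust to the actual release.

```python
from transformers import pipeline

# Hypothetical repo id taken from the card title; swap in the real one.
pipe = pipeline("text-generation", model="trollek/NinjaMouse-32l-danube")

messages = [
    {"role": "user", "content": "Write a Stable Diffusion prompt of a ninja mouse sneaking through a moonlit dojo."},
]
out = pipe(messages, max_new_tokens=128, do_sample=True, temperature=0.7)
# Chat-style input returns the full conversation; the last message is the reply.
print(out[0]["generated_text"][-1]["content"])
```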

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

```
You agree not to use the Model or Derivatives of the Model:

- In any way that violates any applicable national or international law or regulation or infringes upon the lawful rights and interests of any third party;
- For military use in any way;
- For the purpose of exploiting, harming or attempting to exploit or harm minors in any way;
- To generate or disseminate verifiably false information and/or content with the purpose of harming others;
- To generate or disseminate inappropriate content subject to applicable regulatory requirements;
- To generate or disseminate personal identifiable information without due authorization or for unreasonable use;
- To defame, disparage or otherwise harass others;
- For fully automated decision making that adversely impacts an individual’s legal rights or otherwise creates or modifies a binding, enforceable obligation;
- For any use intended to or which has the effect of discriminating against or harming individuals or groups based on online or offline social behavior or known or predicted personal or personality characteristics;
- To exploit any of the vulnerabilities of a specific group of persons based on their age, social, physical or mental characteristics, in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm;
- For any use intended to or which has the effect of discriminating against individuals or groups based on legally protected characteristics or categories.
```

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]
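
Until that lands, here is a minimal sketch, assuming the hypothetical repo id `trollek/NinjaMouse-32l-danube` and a chat template inherited from the h2o-danube base:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "trollek/NinjaMouse-32l-danube"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

messages = [
    {"role": "user", "content": "Convert this SD prompt to SDXL: a cozy cabin in the woods, volumetric light"},
]
# Build the prompt with the model's chat template, then generate.
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```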

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

The datasets I've used to train this model are diverse, with each pair of new layers sandwiched in as the middle and last layers when expanding (see the sketch after this list). They are the following:

- HuggingFaceTB/cosmopedia-100k (textbooks and stories)
- LDJnr/Capybara
- vicgalle/alpaca-gpt4
- WhiteRabbitNeo/WRN-Chapter-1
- WhiteRabbitNeo/WRN-Chapter-2
- HuggingFaceH4/OpenHermes-2.5-1k-longest
- jondurbin/airoboros-3.2
- euclaise/WritingPrompts_curated (heavily filtered for sub/user mentions and a minimum of 400 upvotes - 6k)
- derek-thomas/squad-v1.1-t5-question-generation
- reinforz/question_generation_data
- teknium/GPTeacher-General-Instruct
- dim/roleplay_instruct_v2_final
- TIGER-Lab/MathInstruct
- m-a-p/Code-Feedback
- m-a-p/CodeFeedback-Filtered-Instruction
- glaiveai/glaive-code-assistant-v2
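
The expansion step itself can be sketched as below. This assumes a LLaMA Pro-style block-expansion recipe (copy a decoder layer, zero its output projections so it starts as an identity mapping, then train only the new layers); that is my reading of "deep block expansion", not a verbatim record of the training script, and the insertion positions are illustrative.

```python
import copy

import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("h2oai/h2o-danube-1.8b-chat")
layers = model.model.layers  # nn.ModuleList of the base model's 24 decoder layers


def identity_copy(layer):
    """Copy a decoder layer so it initially passes the residual stream through unchanged."""
    new_layer = copy.deepcopy(layer)
    # Zeroed output projections make the attention and MLP branches no-ops at
    # initialization; training then fills the new layer in.
    torch.nn.init.zeros_(new_layer.self_attn.o_proj.weight)
    torch.nn.init.zeros_(new_layer.mlp.down_proj.weight)
    return new_layer


# One "+2 layers" step: duplicate the middle and last layers (illustrative
# positions), inserting back to front so the indices stay valid.
for idx in sorted((len(layers) // 2 - 1, len(layers) - 1), reverse=True):
    layers.insert(idx + 1, identity_copy(layers[idx]))

model.config.num_hidden_layers = len(layers)

# Freeze everything except the freshly inserted layers (positions 12 and 25
# after this step), then train on the next dataset and repeat.
for param in model.parameters():
    param.requires_grad = False
for idx in (12, 25):
    for param in model.model.layers[idx].parameters():
        param.requires_grad = True
```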

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]