---
license: apache-2.0
license_name: a
license_link: LICENSE
datasets:
- m-a-p/Code-Feedback
- HuggingFaceTB/cosmopedia-100k
- LDJnr/Capybara
- vicgalle/alpaca-gpt4
- glaiveai/glaive-code-assistant-v2
- WhiteRabbitNeo/WRN-Chapter-1
- WhiteRabbitNeo/WRN-Chapter-2
- m-a-p/CodeFeedback-Filtered-Instruction
- HuggingFaceH4/OpenHermes-2.5-1k-longest
- jondurbin/airoboros-3.2
- euclaise/WritingPrompts_curated
- derek-thomas/squad-v1.1-t5-question-generation
- reinforz/question_generation_data
- teknium/GPTeacher-General-Instruct
- dim/roleplay_instruct_v2_final
- TIGER-Lab/MathInstruct
- abacusai/SystemChat
language:
- en
library_name: transformers
tags:
- code
---

# Model Card for NinjaMouse-32l-danube

A lanky version of [h2o-danube](https://huggingface.co/h2oai/h2o-danube-1.8b-chat)'s tiny language model, stretched from 24 layers to 32. I have done this in steps, adding 2 new layers per step and training them on different datasets (a sketch of one expansion step is shown at the end of the model description below).

## Model Details

### Model Description

One of the datasets I used to train this model was WhiteRabbitNeo [chapter 1](https://huggingface.co/datasets/WhiteRabbitNeo/WRN-Chapter-1) and [chapter 2](https://huggingface.co/datasets/WhiteRabbitNeo/WRN-Chapter-2), thereby agreeing to their extended Apache 2 license. If you use this model, or a derivative of it, you should read their terms, as they are quite reasonable and could also be called the "Don't be a dick" clause (see the out-of-scope section).

With the important things out of the way, let us get to the model. I wanted a model that could construct Stable Diffusion prompts without the "trending on artstation, 8k uhd, dramatic lighting, detailed, masterpiece" spam. My solution got a little out of hand when I tried *deep* block expansion and QLoRA on the TinyLlama model, which failed but led to this. It has a natty 16k context window, can be trained using Unsloth, and seems to be a lot more coherent than both TinyLlama and Phi-2.

My thoughts going into this were "If I use WRN in the training I get to call it something related to The Matrix" and "These Stable Diffusion prompt datasets need Geepus."

- **Developed by:** Trolle Karlsson
- **Model type:** Mistral
- **Language(s) (NLP):** English
- **License:** Apache-2.0 + WhiteRabbitNeo Extended Version
- **Finetuned from model:** [h2o-danube](https://huggingface.co/h2oai/h2o-danube-1.8b-chat)
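The expansion step itself can be reproduced roughly as below. This is a minimal sketch, not the actual training script: the insertion points, the zero-initialisation, and the freezing scheme are my assumptions, loosely following LLaMA Pro-style block expansion on top of the `transformers` API.

```python
# Rough sketch of one block-expansion step: widen the 24-layer danube
# stack by two duplicated decoder blocks, then train only the new blocks.
import copy

import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "h2oai/h2o-danube-1.8b-chat", torch_dtype=torch.bfloat16
)
layers = model.model.layers  # torch.nn.ModuleList of Mistral decoder blocks

new_blocks = []
for idx in (16, 8):  # insertion points are a guess; descending order keeps indices valid
    dup = copy.deepcopy(layers[idx])
    # Zero the output projections so each copy starts as a near no-op
    # (LLaMA Pro style); whether the author did this is an assumption.
    torch.nn.init.zeros_(dup.self_attn.o_proj.weight)
    torch.nn.init.zeros_(dup.mlp.down_proj.weight)
    layers.insert(idx + 1, dup)
    new_blocks.append(dup)

model.config.num_hidden_layers = len(layers)
for i, block in enumerate(layers):
    block.self_attn.layer_idx = i  # keep KV-cache indexing consistent

# Freeze the old stack and train only the freshly inserted blocks,
# e.g. on one of the datasets listed further down.
for param in model.parameters():
    param.requires_grad = False
for block in new_blocks:
    for param in block.parameters():
        param.requires_grad = True
```

Repeating a step like this four times, training the new blocks each time, takes the stack from 24 to 32 layers, matching the two-layers-per-step description above.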
## Uses

Imagine having a model go through an entire book, page by page, creating SDXL prompts for the highlights. I want that! I would think that such a task requires some solid training data, which I do not have. What I do have is my own set of about 700 instructions, ranging from "write an SD(XL) prompt where something, something, something dark side" through "Convert this image prompt from SD to SDXL" to "Inspiration: crocs."

The small size of the model, the diverse open datasets used in training, and the large context window could be great for RAG applications, but coding with feedback is also part of at least 2 layers. It favours Python though.

### Direct Use

Here is what it can do with Stable Diffusion text prompts:

- Make SD image prompts by asking it nicely
- Transform those from SD to SDXL and back
- Improve prompts by removing legacy tags
- Inspire from only a single word
- TODO: Story/Lyric to image prompt
- TODO: Reverse image prompt (for further dataset development reasons)

### Downstream Use [optional]

[More Information Needed]

### Out-of-Scope Use

```
You agree not to use the Model or Derivatives of the Model:
- In any way that violates any applicable national or international law or regulation or infringes upon the lawful rights and interests of any third party;
- For military use in any way;
- For the purpose of exploiting, harming or attempting to exploit or harm minors in any way;
- To generate or disseminate verifiably false information and/or content with the purpose of harming others;
- To generate or disseminate inappropriate content subject to applicable regulatory requirements;
- To generate or disseminate personal identifiable information without due authorization or for unreasonable use;
- To defame, disparage or otherwise harass others;
- For fully automated decision making that adversely impacts an individual’s legal rights or otherwise creates or modifies a binding, enforceable obligation;
- For any use intended to or which has the effect of discriminating against or harming individuals or groups based on online or offline social behavior or known or predicted personal or personality characteristics;
- To exploit any of the vulnerabilities of a specific group of persons based on their age, social, physical or mental characteristics, in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm;
- For any use intended to or which has the effect of discriminating against individuals or groups based on legally protected characteristics or categories.
```

## Bias, Risks, and Limitations

[More Information Needed]

### Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.
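This is a minimal, untested sketch rather than an official snippet: the repo id below is a placeholder, and it assumes the checkpoint ships a chat template like its h2o-danube base.

```python
import torch
from transformers import pipeline

# The repo id is a placeholder; point it at the actual NinjaMouse checkpoint.
pipe = pipeline(
    "text-generation",
    model="your-namespace/NinjaMouse-32l-danube",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {
        "role": "user",
        "content": "Write an SDXL prompt of a ninja mouse sipping tea in a bamboo grove.",
    }
]
prompt = pipe.tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
result = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7)
print(result[0]["generated_text"])
```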
## Training Details

### Training Data

The datasets I've used to train this model are diverse, sandwiched in as the middle and last layers when expanding. They are the following:

- HuggingFaceTB/cosmopedia-100k (textbooks and stories)
- LDJnr/Capybara
- vicgalle/alpaca-gpt4
- WhiteRabbitNeo/WRN-Chapter-1
- WhiteRabbitNeo/WRN-Chapter-2
- HuggingFaceH4/OpenHermes-2.5-1k-longest
- jondurbin/airoboros-3.2
- euclaise/WritingPrompts_curated (heavily filtered for sub/user mentions and a minimum of 400 upvotes - 6k)
- derek-thomas/squad-v1.1-t5-question-generation
- reinforz/question_generation_data
- teknium/GPTeacher-General-Instruct
- dim/roleplay_instruct_v2_final
- TIGER-Lab/MathInstruct
- m-a-p/Code-Feedback
- m-a-p/CodeFeedback-Filtered-Instruction
- glaiveai/glaive-code-assistant-v2

### Training Procedure

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed]

#### Speeds, Sizes, Times [optional]

[More Information Needed]

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

[More Information Needed]

#### Factors

[More Information Needed]

#### Metrics

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

[More Information Needed]

## Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]