---
base_model: ChaoticNeutrals/Eris_Remix_DPO_7B
library_name: transformers
tags:
- quantized
- 4-bit
- AWQ
- transformers
- pytorch
- mistral
- text-generation
- conversational
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- chatml
license: other
language:
- en
model_creator: ChaoticNeutrals
model_name: Eris_Remix_7B
model_type: mistral
pipeline_tag: text-generation
inference: false
prompt_template: '<|im_start|>system
  {system_message}<|im_end|>
  <|im_start|>user
  {prompt}<|im_end|>
  <|im_start|>assistant
  '
quantized_by: Suparious
---

# ChaoticNeutrals/Eris-Remix-7B-DPO AWQ

- Model creator: [ChaoticNeutrals](https://huggingface.co/ChaoticNeutrals)
- Original model: [Eris-Remix-7B-DPO](https://huggingface.co/ChaoticNeutrals/Eris_Remix_DPO_7B)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/642265bc01c62c1e4102dc36/Jcg-4l6zVlPHVKOoxjmkG.png)

## Model Summary

Jeitral: "Eris, the Greek goddess of chaos and discord."

Notes: The model should be excellent for RP/chat-related tasks and appears to work with both Alpaca and ChatML prompt formats. A collaborative effort from @Jeiku and @Nitral, combining what we currently feel are our best individual projects. We hope you enjoy! - The Chaotic Neutrals.

Remix with DPO: https://huggingface.co/datasets/athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW

Trained for 200 steps (1 epoch).

Base model used: https://huggingface.co/ChaoticNeutrals/Eris_Remix_7B

## How to use

### Install the necessary packages

```bash
pip install --upgrade autoawq autoawq-kernels
```

### Example Python code

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer, TextStreamer

model_path = "solidrust/Eris-Remix-7B-DPO-AWQ"
system_message = "You are Dolphin, a helpful AI assistant."

# Load the quantized model and its tokenizer
model = AutoAWQForCausalLM.from_quantized(model_path, fuse_layers=True)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

# ChatML prompt template
prompt_template = """\
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant"""

prompt = "You're standing on the surface of the Earth. "\
    "You walk one mile south, one mile west and one mile north. "\
    "You end up exactly where you started. Where are you?"

# Convert the formatted prompt to tokens
tokens = tokenizer(
    prompt_template.format(system_message=system_message, prompt=prompt),
    return_tensors="pt",
).input_ids.cuda()

# Generate output; the streamer prints tokens as they are produced
generation_output = model.generate(tokens, streamer=streamer, max_new_tokens=512)
```

### About AWQ

AWQ is an efficient, accurate, and blazing-fast low-bit weight quantization method, currently supporting 4-bit quantization. It offers faster Transformers-based inference than GPTQ, with quality equivalent to or better than the most commonly used GPTQ settings.

AWQ models are currently supported on Linux and Windows, with NVIDIA GPUs only. macOS users: please use GGUF models instead.

It is supported by:

- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui) - using Loader: AutoAWQ
- [vLLM](https://github.com/vllm-project/vllm) - version 0.2.2 or later for support of all model types
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
- [Transformers](https://huggingface.co/docs/transformers) version 4.35.0 and later, from any code or client that supports Transformers (a minimal sketch follows this list)
- [AutoAWQ](https://github.com/casper-hansen/AutoAWQ) - for use from Python code
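Since the example above uses AutoAWQ directly, here is a minimal sketch of the plain-Transformers route as well. It assumes `transformers>=4.35.0`, `autoawq`, and `accelerate` are installed; the AWQ quantization settings are read from the checkpoint's config, so no extra loading arguments are needed.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "solidrust/Eris-Remix-7B-DPO-AWQ"

# Transformers 4.35.0+ loads AWQ checkpoints directly; the quantization
# config is picked up from the model repo. device_map="auto" requires the
# accelerate package and places the weights on the available GPU.
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="auto")

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```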
## Prompt template: ChatML

```plaintext
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
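If the tokenizer shipped with this repository defines a ChatML chat template (an assumption; check its `tokenizer_config.json`), the template above can be applied programmatically rather than formatted by hand:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("solidrust/Eris-Remix-7B-DPO-AWQ")

messages = [
    {"role": "system", "content": "You are Dolphin, a helpful AI assistant."},
    {"role": "user", "content": "Write a haiku about discord."},
]

# Renders the messages through the tokenizer's chat template and appends
# the '<|im_start|>assistant' generation prompt, matching the format above.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```

If no chat template is present, fall back to formatting the string manually as in the example Python code above.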