---
license: llama3
language:
- en
- ja
- zh
tags:
- roleplay
- llama3
- sillytavern
- idol
---
# Special Thanks:
- Lewdiculous for the superb GGUF version; thank you for your conscientious and responsible dedication.
- https://huggingface.co/Lewdiculous/llama3-8B-DarkIdol-1.0-GGUF-IQ-Imatrix-Request
# Model Description:
The module combination has been readjusted to better fulfill various roles, and the model has been adapted for mobile phones.
- DarkIdol: roles you can imagine and roles you cannot imagine.
- Roleplay
- Specialized in various role-playing scenarios
- For more, see the test-role script (a minimal usage sketch follows this list): https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-1.0/resolve/main/DarkIdol_test_openai_api_lmstudio.py?download=true
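As a rough illustration (not a copy of the linked script), the sketch below assumes LM Studio's local server is running with its OpenAI-compatible API on the default port and that the `openai` Python package is installed; the endpoint and model name are assumptions you should adjust to your own setup.

```python
# Minimal sketch: query the model through LM Studio's OpenAI-compatible local server.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # LM Studio's default local endpoint (assumed)
    api_key="lm-studio",                  # any placeholder string works for a local server
)

response = client.chat.completions.create(
    model="llama3-8B-DarkIdol-1.0",       # hypothetical identifier; use the name LM Studio shows
    messages=[
        {"role": "system", "content": "You are DarkIdol, a role-play character."},
        {"role": "user", "content": "Introduce yourself in character."},
    ],
    temperature=0.8,
)
print(response.choices[0].message.content)
```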
![image/png](https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-1.0/resolve/main/2024-06-17_07-40-17_2841.png)
# Model Use
- Koboldcpp https://github.com/LostRuins/koboldcpp
- LM Studio https://lmstudio.ai/
- llama.cpp https://github.com/ggerganov/llama.cpp
- Meet Layla: Layla is an AI chatbot that runs offline on your device. No internet connection required, no censorship, complete privacy. Layla Lite: https://www.layla-network.ai/
- Layla Lite GGUF (llama3-8B-DarkIdol-1.0-Q4_K_S-imat.gguf): https://huggingface.co/Lewdiculous/llama3-8B-DarkIdol-1.0-GGUF-IQ-Imatrix-Request/blob/main/llama3-8B-DarkIdol-1.0-Q4_K_S-imat.gguf?download=true
- More GGUF quants: https://huggingface.co/Lewdiculous/llama3-8B-DarkIdol-1.0-GGUF-IQ-Imatrix-Request (a minimal loading sketch follows this list)
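For a scripted alternative to the GUIs above, here is a minimal sketch assuming the `llama-cpp-python` package and a locally downloaded copy of the Q4_K_S GGUF linked above; the file path and generation settings are placeholders, not recommendations from the model author.

```python
# Minimal sketch: load the quantized GGUF locally with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="./llama3-8B-DarkIdol-1.0-Q4_K_S-imat.gguf",  # local path to the downloaded GGUF
    n_ctx=8192,        # Llama 3 8B context window
    n_gpu_layers=-1,   # offload all layers to GPU if available; use 0 for CPU-only
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are DarkIdol, a role-play assistant."},
        {"role": "user", "content": "Stay in character and greet me."},
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```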
# Character
- https://character-tavern.com/
- https://characterhub.org/
- https://pygmalion.chat/
- https://aetherroom.club/
### If you want to use vision functionality:
* You must use the latest version of [Koboldcpp](https://github.com/LostRuins/koboldcpp).
### To use the multimodal capabilities of this model and use **vision**, you need to load the specified **mmproj** file: [Llava MMProj](https://huggingface.co/Nitral-AI/Llama-3-Update-3.0-mmproj-model-f16)
* You can load the **mmproj** by using the corresponding section in the interface:
![image/png](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/UX6Ubss2EPNAT3SKGMLe0.png)
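For a scripted alternative to the Koboldcpp interface, here is a minimal sketch assuming `llama-cpp-python` with a LLaVA-style chat handler; that this handler pairs correctly with the Llama-3 mmproj above is an assumption, and all file paths are placeholders.

```python
# Minimal sketch: load the main GGUF plus the mmproj (vision projector) with llama-cpp-python.
from llama_cpp import Llama
from llama_cpp.llama_chat_format import Llava15ChatHandler

chat_handler = Llava15ChatHandler(clip_model_path="./llama-3-mmproj-model-f16.gguf")  # placeholder path
llm = Llama(
    model_path="./llama3-8B-DarkIdol-1.0-Q4_K_S-imat.gguf",  # placeholder path
    chat_handler=chat_handler,   # routes image inputs through the vision projector
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": "file:///path/to/image.png"}},
            {"type": "text", "text": "Describe this image in character."},
        ],
    }]
)
print(out["choices"][0]["message"]["content"])
```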
### Thank you:
To the authors below: thank you for your hard work, which has given me more options to easily create what I want. Thank you for your efforts.
- Hastagaras/Halu-8B-Llama3-Blackroot
- Gryphe/Pantheon-RP-1.0-8b-Llama-3
- cgato/L3-TheSpice-8b-v0.8.3
- ChaoticNeutrals/Poppy_Porpoise-1.4-L3-8B
- mergekit
- merge
- transformers
- llama
- .........
---
# llama3-8B-DarkIdol-1.0
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [Hastagaras/Halu-8B-Llama3-Blackroot](https://huggingface.co/Hastagaras/Halu-8B-Llama3-Blackroot) as the base.
### Models Merged
The following models were included in the merge:
* [Gryphe/Pantheon-RP-1.0-8b-Llama-3](https://huggingface.co/Gryphe/Pantheon-RP-1.0-8b-Llama-3)
* [cgato/L3-TheSpice-8b-v0.8.3](https://huggingface.co/cgato/L3-TheSpice-8b-v0.8.3)
* [ChaoticNeutrals/Poppy_Porpoise-1.4-L3-8B](https://huggingface.co/ChaoticNeutrals/Poppy_Porpoise-1.4-L3-8B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Gryphe/Pantheon-RP-1.0-8b-Llama-3
- model: ChaoticNeutrals/Poppy_Porpoise-1.4-L3-8B
- model: cgato/L3-TheSpice-8b-v0.8.3
merge_method: model_stock
base_model: Hastagaras/Halu-8B-Llama3-Blackroot
dtype: bfloat16
```
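To reproduce the merge, this configuration can typically be saved as `config.yaml` and passed to mergekit's `mergekit-yaml` command-line tool (for example, `mergekit-yaml config.yaml ./merged-model`); the output directory name here is only a placeholder.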