---
base_model:
- schnapper79/lumikabra-123B_v0.4
- mistralai/Mistral-Large-Instruct-2407
- TheDrummer/Behemoth-123B-v1
library_name: transformers
tags:
- mergekit
- merge
license: other
---
# Twilight-Large
This merge of pre-trained language models was created by @softwareweaver using [mergekit](https://github.com/cg123/mergekit). Use the prompt format that Mistral Large uses.
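A minimal sketch of building a correctly formatted prompt, assuming the merged repo's tokenizer carries Mistral Large's chat template and that `transformers` is installed:

```python
from transformers import AutoTokenizer

# The merge inherits Mistral Large's chat template through its tokenizer.
tokenizer = AutoTokenizer.from_pretrained("softwareweaver/Twilight-Large-123B")

messages = [
    {"role": "user", "content": "Write a short scene set at twilight."},
]

# Renders the [INST] ... [/INST] format that Mistral Large expects.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```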
## EXL2 Quants
https://huggingface.co/softwareweaver/Twilight-Large-123B-EXL2-5bpw
## GGUF Quants
* https://huggingface.co/mradermacher/Twilight-Large-123B-GGUF (static quants)
* https://huggingface.co/mradermacher/Twilight-Large-123B-i1-GGUF (weighted/imatrix quants)

Use `--chat-template llama2` when running the model with llama.cpp.
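If you use the model through llama-cpp-python instead of the llama.cpp CLI, the equivalent of the flag above is the `chat_format` argument. A sketch (the GGUF filename is a placeholder for whichever quant you downloaded):

```python
from llama_cpp import Llama

llm = Llama(
    model_path="Twilight-Large-123B.i1-Q4_K_M.gguf",  # placeholder filename
    chat_format="llama-2",  # mirrors --chat-template llama2 on the CLI
    n_ctx=8192,
    n_gpu_layers=-1,  # offload all layers to the GPU if one is available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a short scene set at twilight."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```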
## Control Vectors
You can use the control vectors trained for Mistral Large with this merge:
https://huggingface.co/jukofyork/creative-writing-control-vectors-v3.0/tree/main/Mistral-Large-Instruct-2407

Control vectors allow fine-grained, targeted steering of an LLM's output at inference time; in llama.cpp they are applied with the `--control-vector` or `--control-vector-scaled` flags. More info: https://huggingface.co/jukofyork/creative-writing-control-vectors-v3.0
## Sample Generations
Some sample generations are posted in the community tab: https://huggingface.co/softwareweaver/Twilight-Large-123B/discussions
Please add your own generations to the community tab, so that others can evaluate the model's outputs before downloading it.
## Merge Details
### Merge Method
This model was merged using the della_linear merge method, with [mistralai/Mistral-Large-Instruct-2407](https://huggingface.co/mistralai/Mistral-Large-Instruct-2407) as the base. della_linear stochastically drops low-magnitude delta parameters (governed by `density` and `epsilon`), rescales the survivors, and combines the models linearly, with `lambda` scaling the merged deltas.
### Models Merged
The following models were included in the merge:
* [schnapper79/lumikabra-123B_v0.4](https://huggingface.co/schnapper79/lumikabra-123B_v0.4)
* [TheDrummer/Behemoth-123B-v1](https://huggingface.co/TheDrummer/Behemoth-123B-v1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: TheDrummer/Behemoth-123B-v1
    parameters:
      weight: 0.25
      density: 0.9
  - model: schnapper79/lumikabra-123B_v0.4
    parameters:
      weight: 0.3
      density: 0.9
merge_method: della_linear
base_model: mistralai/Mistral-Large-Instruct-2407
parameters:
  epsilon: 0.05
  lambda: 1
  int8_mask: true
dtype: bfloat16
```
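To reproduce the merge, here is a minimal sketch using mergekit's Python API, assuming mergekit is installed, the configuration above is saved as `twilight.yaml` (a placeholder path), and the machine has enough disk and memory for a 123B merge:

```python
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the YAML configuration shown above.
with open("twilight.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Execute the merge and write the merged weights to the output directory.
run_merge(
    merge_config,
    "./Twilight-Large-123B",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use a GPU if present
        copy_tokenizer=True,             # copy the base model's tokenizer
        lazy_unpickle=True,              # lower peak memory usage
    ),
)
```

The same configuration can also be run with the `mergekit-yaml` command-line entry point that mergekit installs.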