---
base_model:
- LeroyDyer/Mixtral_Chat_X_128k
- ChaoticNeutrals/Eris_PrimeV3-Vision-7B
library_name: transformers
tags:
- mergekit
- merge

---


## VISION+

If you want to use vision functionality:

* Make sure you are using the latest version of KoboldCpp.
* To use the multimodal capabilities of this model, such as vision, you also need to load the specified mmproj file, available here:
  https://huggingface.co/LeroyDyer/Mixtral_AI_Vision_128k/blob/main/mmproj-model-f16.gguf
* Load the mmproj file in the corresponding section of the interface.

KoboldCpp now supports vision via multimodal projectors (aka LLaVA), allowing it to perceive and react to images. Load a suitable `--mmproj` file, or select it in the GUI launcher, to use vision capabilities. (Not working on Vulkan.)

Note: this is NOT limited to LLaVA models; any compatible model of the same size and architecture can gain vision capabilities. Simply grab a ~200 MB mmproj file for your architecture here:

https://huggingface.co/koboldcpp/mmproj

Load it with `--mmproj` alongside your favorite compatible model, and it will be able to see images as well.
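The launch steps above can be sketched as a single command line. The file paths below are placeholders (substitute your own downloaded files), and the command is printed rather than executed here:

```shell
# Placeholders: point these at the GGUF model and the projector you downloaded.
MODEL="your-model.gguf"
MMPROJ="mmproj-model-f16.gguf"

# --mmproj attaches the multimodal projector so the model can see images.
echo "python koboldcpp.py --model $MODEL --mmproj $MMPROJ"
```

The same projector can be selected in the KoboldCpp GUI launcher instead of being passed on the command line.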

# MODEL_NAME

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.

### Models Merged

The following models were included in the merge:
* [LeroyDyer/Mixtral_Chat_X_128k](https://huggingface.co/LeroyDyer/Mixtral_Chat_X_128k)
* [ChaoticNeutrals/Eris_PrimeV3-Vision-7B](https://huggingface.co/ChaoticNeutrals/Eris_PrimeV3-Vision-7B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml

models:
  - model: LeroyDyer/Mixtral_Chat_X_128k
    parameters:
      weight: 0.78944
  - model: ChaoticNeutrals/Eris_PrimeV3-Vision-7B
    parameters:
      weight: 0.3453
merge_method: linear
dtype: float16

```
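Conceptually, a linear merge is just a weighted element-wise sum applied to each parameter tensor of the two models. The sketch below illustrates this in pure Python on toy lists standing in for tensors, using the weights from the config above; note that mergekit's linear method normalizes the weights by default so they sum to 1 (the given weights sum to about 1.135):

```python
# Minimal sketch of a linear merge for one parameter tensor.
# Pure-Python lists stand in for the torch tensors mergekit operates on.

def linear_merge(tensors, weights, normalize=True):
    """Weighted element-wise sum of same-shaped flat 'tensors'."""
    if normalize:
        total = sum(weights)
        weights = [w / total for w in weights]
    return [sum(w * t[i] for w, t in zip(weights, tensors))
            for i in range(len(tensors[0]))]

# Toy stand-ins for the same weight matrix taken from
# Mixtral_Chat_X_128k and Eris_PrimeV3-Vision-7B.
a = [1.0, 2.0, 3.0]
b = [3.0, 2.0, 1.0]

merged = linear_merge([a, b], [0.78944, 0.3453])
print(merged)
```

With normalization, the effective weights are roughly 0.696 and 0.304, so the merged values sit closer to `a` (the more heavily weighted model) than to `b`.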