Update README.md
README.md CHANGED
@@ -14,99 +14,41 @@ license: apache-2.0
pipeline_tag: text-generation
quantized_by: bartowski
---
-## Prompt format

```
<s>[INST] {prompt}[/INST] </s>
```

-## Download a file (not the whole branch) from below:
-
-| Filename | Quant type | File Size | Split | Description |
-| -------- | ---------- | --------- | ----- | ----------- |
-| [Mistral-Nemo-Instruct-2407-f32.gguf](https://huggingface.co/bartowski/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-f32.gguf) | f32 | 49.00GB | false | Full F32 weights. |
-| [Mistral-Nemo-Instruct-2407-Q8_0.gguf](https://huggingface.co/bartowski/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q8_0.gguf) | Q8_0 | 13.02GB | false | Extremely high quality, generally unneeded but max available quant. |
-| [Mistral-Nemo-Instruct-2407-Q6_K_L.gguf](https://huggingface.co/bartowski/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q6_K_L.gguf) | Q6_K_L | 10.38GB | false | Uses Q8_0 for embed and output weights. Very high quality, near perfect, *recommended*. |
-| [Mistral-Nemo-Instruct-2407-Q6_K.gguf](https://huggingface.co/bartowski/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q6_K.gguf) | Q6_K | 10.06GB | false | Very high quality, near perfect, *recommended*. |
-| [Mistral-Nemo-Instruct-2407-Q5_K_L.gguf](https://huggingface.co/bartowski/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q5_K_L.gguf) | Q5_K_L | 9.14GB | false | Uses Q8_0 for embed and output weights. High quality, *recommended*. |
-| [Mistral-Nemo-Instruct-2407-Q5_K_M.gguf](https://huggingface.co/bartowski/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q5_K_M.gguf) | Q5_K_M | 8.73GB | false | High quality, *recommended*. |
-| [Mistral-Nemo-Instruct-2407-Q5_K_S.gguf](https://huggingface.co/bartowski/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q5_K_S.gguf) | Q5_K_S | 8.52GB | false | High quality, *recommended*. |
-| [Mistral-Nemo-Instruct-2407-Q4_K_L.gguf](https://huggingface.co/bartowski/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q4_K_L.gguf) | Q4_K_L | 7.98GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
-| [Mistral-Nemo-Instruct-2407-Q4_K_M.gguf](https://huggingface.co/bartowski/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q4_K_M.gguf) | Q4_K_M | 7.48GB | false | Good quality, default size for most use cases, *recommended*. |
-| [Mistral-Nemo-Instruct-2407-Q3_K_XL.gguf](https://huggingface.co/bartowski/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q3_K_XL.gguf) | Q3_K_XL | 7.15GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low-RAM setups. |
-| [Mistral-Nemo-Instruct-2407-Q4_K_S.gguf](https://huggingface.co/bartowski/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q4_K_S.gguf) | Q4_K_S | 7.12GB | false | Slightly lower quality with more space savings, *recommended*. |
-| [Mistral-Nemo-Instruct-2407-IQ4_XS.gguf](https://huggingface.co/bartowski/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-IQ4_XS.gguf) | IQ4_XS | 6.74GB | false | Decent quality, smaller than Q4_K_S with similar performance, *recommended*. |
-| [Mistral-Nemo-Instruct-2407-Q3_K_L.gguf](https://huggingface.co/bartowski/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q3_K_L.gguf) | Q3_K_L | 6.56GB | false | Lower quality but usable, good for low-RAM setups. |
-| [Mistral-Nemo-Instruct-2407-Q3_K_M.gguf](https://huggingface.co/bartowski/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q3_K_M.gguf) | Q3_K_M | 6.08GB | false | Low quality. |
-| [Mistral-Nemo-Instruct-2407-IQ3_M.gguf](https://huggingface.co/bartowski/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-IQ3_M.gguf) | IQ3_M | 5.72GB | false | Medium-low quality, new method with decent performance comparable to Q3_K_M. |
-| [Mistral-Nemo-Instruct-2407-Q3_K_S.gguf](https://huggingface.co/bartowski/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q3_K_S.gguf) | Q3_K_S | 5.53GB | false | Low quality, not recommended. |
-| [Mistral-Nemo-Instruct-2407-Q2_K_L.gguf](https://huggingface.co/bartowski/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q2_K_L.gguf) | Q2_K_L | 5.45GB | false | Uses Q8_0 for embed and output weights. Very low quality but surprisingly usable. |
-| [Mistral-Nemo-Instruct-2407-IQ3_XS.gguf](https://huggingface.co/bartowski/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-IQ3_XS.gguf) | IQ3_XS | 5.31GB | false | Lower quality, new method with decent performance, slightly better than Q3_K_S. |
-| [Mistral-Nemo-Instruct-2407-Q2_K.gguf](https://huggingface.co/bartowski/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-Q2_K.gguf) | Q2_K | 4.79GB | false | Very low quality but surprisingly usable. |
-| [Mistral-Nemo-Instruct-2407-IQ2_M.gguf](https://huggingface.co/bartowski/Mistral-Nemo-Instruct-2407-GGUF/blob/main/Mistral-Nemo-Instruct-2407-IQ2_M.gguf) | IQ2_M | 4.44GB | false | Relatively low quality, uses SOTA techniques to be surprisingly usable. |
-
-## Credits
-
-Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset.
-
-Thank you ZeroWw for the inspiration to experiment with embed/output weights.
-
-## Downloading using huggingface-cli
-
-First, make sure you have huggingface-cli installed:
-
-```
-pip install -U "huggingface_hub[cli]"
-```
-
-Then, you can target the specific file you want:
-
-```
-huggingface-cli download bartowski/Mistral-Nemo-Instruct-2407-GGUF --include "Mistral-Nemo-Instruct-2407-Q4_K_M.gguf" --local-dir ./
-```
-
-If the model is bigger than 50GB, it will have been split into multiple files. To download them all to a local folder, run:
-
-```
-huggingface-cli download bartowski/Mistral-Nemo-Instruct-2407-GGUF --include "Mistral-Nemo-Instruct-2407-Q8_0.gguf/*" --local-dir Mistral-Nemo-Instruct-2407-Q8_0
-```
-
-You can either specify a new local-dir (Mistral-Nemo-Instruct-2407-Q8_0) or download them all in place (./).
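
The same downloads can also be scripted with the `huggingface_hub` Python API. A minimal sketch, reusing the repo and filenames from the commands above (the `local_dir` values are just examples):

```python
# Sketch: Python equivalents of the huggingface-cli commands above.
from huggingface_hub import hf_hub_download, snapshot_download

# Single-file quant: fetch just the Q4_K_M GGUF into the current directory.
hf_hub_download(
    repo_id="bartowski/Mistral-Nemo-Instruct-2407-GGUF",
    filename="Mistral-Nemo-Instruct-2407-Q4_K_M.gguf",
    local_dir=".",
)

# Split quant (>50GB): download every shard under the Q8_0 folder via a glob.
snapshot_download(
    repo_id="bartowski/Mistral-Nemo-Instruct-2407-GGUF",
    allow_patterns=["Mistral-Nemo-Instruct-2407-Q8_0.gguf/*"],
    local_dir="Mistral-Nemo-Instruct-2407-Q8_0",
)
```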
-
-## Which file should I choose?
-
-A great write-up with charts comparing the performance of various quants is provided by Artefact2 [here](https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9).
-
-The first thing to figure out is how big a model you can run. To do this, you'll need to figure out how much RAM and/or VRAM you have.
-
-If you want your model running as FAST as possible, you'll want to fit the whole thing in your GPU's VRAM. Aim for a quant with a file size 1-2GB smaller than your GPU's total VRAM.
-
-If you want the absolute maximum quality, add your system RAM and your GPU's VRAM together, then grab a quant with a file size 1-2GB smaller than that total.
-
-Next, you'll need to decide if you want to use an 'I-quant' or a 'K-quant'.
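
As a rough illustration of the sizing rule above, here is a hypothetical helper (not part of the original card) that picks the largest quant fitting under a RAM/VRAM budget; the sizes are copied from the quant table above, and the 1GB default margin is an assumption within the stated 1-2GB rule:

```python
# Hypothetical helper: pick the largest quant that fits a memory budget,
# following the "file size 1-2GB smaller than your VRAM" rule of thumb.
# Sizes (GB) are copied from the quant table above (a representative subset).
QUANT_SIZES_GB = {
    "Q8_0": 13.02, "Q6_K": 10.06, "Q5_K_M": 8.73, "Q4_K_M": 7.48,
    "IQ4_XS": 6.74, "Q3_K_M": 6.08, "IQ3_M": 5.72, "Q2_K": 4.79,
}

def largest_fitting_quant(budget_gb: float, headroom_gb: float = 1.0) -> str | None:
    """Return the biggest quant whose file leaves `headroom_gb` GB free."""
    usable = budget_gb - headroom_gb
    fitting = {q: s for q, s in QUANT_SIZES_GB.items() if s <= usable}
    return max(fitting, key=fitting.get) if fitting else None

print(largest_fitting_quant(8.0))   # 8GB GPU -> "IQ4_XS" (6.74GB fits; Q4_K_M's 7.48GB doesn't)
print(largest_fitting_quant(24.0))  # 24GB GPU -> "Q8_0"
```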
+
+## 💫 Community Model> Mistral Nemo Instruct 2407 by Mistralai
+
+*👾 [LM Studio](https://lmstudio.ai) Community models highlights program. Highlighting new & noteworthy models by the community. Join the conversation on [Discord](https://discord.gg/aPQfnNkxGC)*.
+
+**Model creator:** [mistralai](https://huggingface.co/mistralai)<br>
+**Original model:** [Mistral-Nemo-Instruct-2407](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407)<br>
+**GGUF quantization:** provided by [bartowski](https://huggingface.co/bartowski) based on `llama.cpp` release [b3441](https://github.com/ggerganov/llama.cpp/releases/tag/b3441)<br>
+
+## Model Summary:
+Mistral Nemo has a massive 1,024,000-token (over 1 million) context window and supports dozens of languages, including French, German, Spanish, Italian, Portuguese, Arabic, Hindi, Russian, Chinese, Japanese, and Korean, along with 80+ coding languages, including Python, Java, C, C++, JavaScript, and Bash.<br>
+Trained jointly by Mistral AI and NVIDIA, it significantly outperforms existing models of similar or smaller size.
+
+## Prompt Template:
+
+Choose the `Mistral Instruct` preset in LM Studio.
+Under the hood, the model will see a prompt formatted like so:

```
<s>[INST] {prompt}[/INST] </s>
```
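
If you're driving the model outside LM Studio and need to build this string yourself, a minimal sketch in plain Python (the example prompt is arbitrary):

```python
# Sketch: reproduce the prompt template shown above by hand.
MISTRAL_TEMPLATE = "<s>[INST] {prompt}[/INST] </s>"

def format_prompt(prompt: str) -> str:
    return MISTRAL_TEMPLATE.format(prompt=prompt)

print(format_prompt("Explain GGUF quantization in one sentence."))
# -> <s>[INST] Explain GGUF quantization in one sentence.[/INST] </s>
```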

+## Technical Details
+
+Mistral Nemo was trained up to 128k context, but supports longer contexts with potentially reduced quality.
+
+This model performs strongly across a wide range of benchmarks, including multilingual ones.
+
+For more details, check the blog post here: https://mistral.ai/news/mistral-nemo/
+
+## Special thanks
+
+🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
+
+🙏 Special thanks to [Kalomaze](https://github.com/kalomaze) and [Dampf](https://github.com/Dampfinchen) for their work on the dataset (linked [here](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8)) used to calculate the imatrix for all sizes.
+
+## Disclaimers
+
+LM Studio is not the creator, originator, or owner of any Model featured in the Community Model Program. Each Community Model is created and provided by third parties. LM Studio does not endorse, support, represent or guarantee the completeness, truthfulness, accuracy, or reliability of any Community Model. You understand that Community Models can produce content that might be offensive, harmful, inaccurate or otherwise inappropriate, or deceptive. Each Community Model is the sole responsibility of the person or entity who originated it. LM Studio may not monitor or control the Community Models and cannot, and does not, take responsibility for any such Model. LM Studio disclaims all warranties or guarantees about the accuracy, reliability or benefits of the Community Models. LM Studio further disclaims any warranty that the Community Model will meet your requirements, be secure, uninterrupted or available at any time or location, or error- or virus-free, or that any errors will be corrected, or otherwise. You will be solely responsible for any damage resulting from your use of or access to the Community Models, your downloading of any Community Model, or your use of any other Community Model provided by or through LM Studio.