Made with Exllamav2 0.1.3 with the default dataset.
This model is meant to be used with the Exllamav2 loader, which requires the model to be fully loaded into GPU VRAM.
It primarily requires an Nvidia RTX card on Windows/Linux or an AMD card on Linux.
If you want to use this model but your system doesn't meet these requirements, you should look for GGUF versions of the model.

It can be used with apps like:

- [Text Generation Webui](https://github.com/oobabooga/text-generation-webui)
- [KoboldAI](https://github.com/henk717/KoboldAI)
- [ExUI](https://github.com/turboderp/exui)
- [lollms-webui](https://github.com/ParisNeo/lollms-webui)
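As a concrete starting point, the commands below sketch how one of these apps might be launched with an EXL2 model. This is a minimal sketch, not taken from this repository: the `server.py` entry point and the `--model`/`--loader` flags are assumptions based on text-generation-webui's typical CLI, and the model folder name is a placeholder; check that project's documentation for your version.

```shell
# Assumption: text-generation-webui's usual layout and CLI flags.
git clone https://github.com/oobabooga/text-generation-webui
cd text-generation-webui
pip install -r requirements.txt

# Place the downloaded EXL2 model folder under models/, then start the
# server, selecting the Exllamav2 loader explicitly (the whole model must
# fit in GPU VRAM):
python server.py --model <model-folder-name> --loader exllamav2
```

If the model does not fit in your GPU's VRAM, this loader will fail to start; that is the case where a GGUF build (which can run partially or fully on CPU) is the better choice.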

# Original model card

# AlchemistCoder: Harmonizing and Eliciting Code Capability by Hindsight Tuning on Multi-source Data