Commit c3c44ed (verified) by MaziyarPanahi
Parent: 110f558

Upload folder using huggingface_hub (#1)

- d0a89e20a820070b8c63d98ee333857e57fd2d04eb89a0abd07fe12d6c00d503 (d6fc87f34e9ec841809f9ddf45a62b8efc1d3994)
- 7d0c0a897ba28999c2b1ae0aee0f8a84244e311e84fae45327569e2859752239 (1447e25944fa517aa7254fbb205bda3cd7574cec)
- 2edb6884c2904e82904269865c6d376372a31b66b3be935429cc39a6e8ae77d7 (f5d21ce1c8a688536fe0299090a7a4dfe45391ec)
- eac3a12bc0bc87e98e483036d1043594d89cfecec650a766caf35a18669a0376 (9cb404bdead00d7baf910ddaa746ad256143c291)
- 3a484ec83e5b49dd217dd4336e1b7d017da1c51cd04a13c21c2cb6f7ccb90133 (3373ee018b2db1c12ff65e7490ca63ad90d14f6e)
- 2f56ac58623b5efec03404f113ddc09e7c27f81838dab8a2c456442cf3a0236f (ad57b1855df016f0bff40e2c290b29fd864b4d6e)

.gitattributes CHANGED
@@ -33,3 +33,9 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ Mistral-Nemo-Kurdish-Instruct.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Mistral-Nemo-Kurdish-Instruct.Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Mistral-Nemo-Kurdish-Instruct.Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ Mistral-Nemo-Kurdish-Instruct.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
+ Mistral-Nemo-Kurdish-Instruct.fp16.gguf filter=lfs diff=lfs merge=lfs -text
+ Mistral-Nemo-Kurdish-Instruct-GGUF_imatrix.dat filter=lfs diff=lfs merge=lfs -text
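
These `.gitattributes` rules route each large artifact through Git LFS. As a minimal sketch of how such attribute patterns match filenames, the check can be approximated with Python's `fnmatch` (git's real matcher has extra rules for `/` and `**`; the `lfs_tracked` helper and the abbreviated pattern list are illustrative, not part of the commit):

```python
from fnmatch import fnmatchcase

# A subset of the patterns from the .gitattributes hunk above:
# pre-existing wildcard rules plus one exact-name rule this commit adds.
LFS_PATTERNS = [
    "*.zip",
    "*.zst",
    "*tfevents*",
    "Mistral-Nemo-Kurdish-Instruct.Q6_K.gguf",
]

def lfs_tracked(filename: str) -> bool:
    """Return True if any LFS attribute pattern matches the filename."""
    return any(fnmatchcase(filename, pat) for pat in LFS_PATTERNS)
```

With these rules, the GGUF file is stored as an LFS pointer while ordinary files like `README.md` stay in regular git storage.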
Mistral-Nemo-Kurdish-Instruct-GGUF_imatrix.dat ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e800e8f5a93e7d2a0d1209f3b4fe8b9eedc6e743d9f57b642f71161a4375d665
+ size 7054394
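
Each `ADDED` file in this commit is not the binary itself but a Git LFS pointer: three text lines giving the spec version, the SHA-256 of the real blob, and its size in bytes. A small sketch that parses such a pointer (the `parse_lfs_pointer` helper is illustrative, not part of the commit):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    # "size" is the byte size of the real blob behind the pointer.
    fields["size"] = int(fields["size"])
    return fields

# The imatrix pointer added in this commit:
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:e800e8f5a93e7d2a0d1209f3b4fe8b9eedc6e743d9f57b642f71161a4375d665
size 7054394
"""
info = parse_lfs_pointer(pointer)
```

The `size` field makes the listings below easy to read: for example, the Q8_0 pointer's 13022373120 bytes is roughly 12.1 GiB.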
Mistral-Nemo-Kurdish-Instruct.Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8d500389f8e9626665db3628c750065570e8c7bb35a32e55c02b95116f3ef207
+ size 8727635200
Mistral-Nemo-Kurdish-Instruct.Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bbb5e2a0fb7c1d5796f4435fe09da04d695f8c78a3d027fb02c55c06e0247e42
+ size 8518739200
Mistral-Nemo-Kurdish-Instruct.Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e8c646a5bbc54c4f05773b6c542a963c4cb86d1a20e9c20931bc7c03bec43cfd
+ size 10056213760
Mistral-Nemo-Kurdish-Instruct.Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3381a0aa097fa1944a55536774e0fa0878346c9c6eff0a787907897204bc0495
+ size 13022373120
Mistral-Nemo-Kurdish-Instruct.fp16.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:494c55783dc934f60f3859b32d670a28593a6e3d0d37d4f43dcb5fac430cffda
+ size 24504280064
README.md ADDED
@@ -0,0 +1,46 @@
+ ---
+ tags:
+ - quantized
+ - 2-bit
+ - 3-bit
+ - 4-bit
+ - 5-bit
+ - 6-bit
+ - 8-bit
+ - GGUF
+ - text-generation
+ - text-generation
+ model_name: Mistral-Nemo-Kurdish-Instruct-GGUF
+ base_model: nazimali/Mistral-Nemo-Kurdish-Instruct
+ inference: false
+ model_creator: nazimali
+ pipeline_tag: text-generation
+ quantized_by: MaziyarPanahi
+ ---
+ # [MaziyarPanahi/Mistral-Nemo-Kurdish-Instruct-GGUF](https://huggingface.co/MaziyarPanahi/Mistral-Nemo-Kurdish-Instruct-GGUF)
+ - Model creator: [nazimali](https://huggingface.co/nazimali)
+ - Original model: [nazimali/Mistral-Nemo-Kurdish-Instruct](https://huggingface.co/nazimali/Mistral-Nemo-Kurdish-Instruct)
+
+ ## Description
+ [MaziyarPanahi/Mistral-Nemo-Kurdish-Instruct-GGUF](https://huggingface.co/MaziyarPanahi/Mistral-Nemo-Kurdish-Instruct-GGUF) contains GGUF format model files for [nazimali/Mistral-Nemo-Kurdish-Instruct](https://huggingface.co/nazimali/Mistral-Nemo-Kurdish-Instruct).
+
+ ### About GGUF
+
+ GGUF is a format introduced by the llama.cpp team on August 21st, 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
+
+ Here is an incomplete list of clients and libraries that are known to support GGUF:
+
+ * [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
+ * [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible API server.
+ * [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
+ * [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
+ * [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU acceleration across all platforms and GPU architectures. Especially good for storytelling.
+ * [GPT4All](https://gpt4all.io/index.html), a free and open-source locally running GUI, supporting Windows, Linux, and macOS with full GPU acceleration.
+ * [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
+ * [Faraday.dev](https://faraday.dev/), an attractive and easy-to-use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
+ * [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
+ * [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU acceleration, LangChain support, and an OpenAI-compatible AI server. Note: as of this writing (November 27th, 2023), ctransformers has not been updated in a long time and does not support many recent models.
+
+ ## Special thanks
+
+ 🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible.
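
As a usage sketch, the quantized files uploaded in this commit can be fetched with `huggingface_hub` and loaded with `llama-cpp-python`, one of the GGUF-aware clients listed in the README. The `gguf_filename` helper and the example prompt are illustrative; the download and load steps sit under a main guard because they pull multi-gigabyte files:

```python
def gguf_filename(quant: str) -> str:
    """Build the filename of one quantization from this repo, e.g. 'Q5_K_M'."""
    return f"Mistral-Nemo-Kurdish-Instruct.{quant}.gguf"

if __name__ == "__main__":
    from huggingface_hub import hf_hub_download  # pip install huggingface_hub
    from llama_cpp import Llama                  # pip install llama-cpp-python

    # Download one quantization (~8.7 GB for Q5_K_M per the LFS pointer above).
    path = hf_hub_download(
        repo_id="MaziyarPanahi/Mistral-Nemo-Kurdish-Instruct-GGUF",
        filename=gguf_filename("Q5_K_M"),
    )

    # Load the GGUF file and run a short completion.
    llm = Llama(model_path=path, n_ctx=4096)
    out = llm("Translate to Kurdish: hello", max_tokens=64)
    print(out["choices"][0]["text"])
```

Any of the quantization names in the `.gitattributes` hunk (Q5_K_S, Q6_K, Q8_0, fp16) can be substituted, trading file size against quality.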