morriszms committed
Commit 7fc7ab3 · verified · 1 Parent(s): 1af674c

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,15 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ airophin-13b-pntk-16k-fp16-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+ airophin-13b-pntk-16k-fp16-Q3_K_L.gguf filter=lfs diff=lfs merge=lfs -text
+ airophin-13b-pntk-16k-fp16-Q3_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ airophin-13b-pntk-16k-fp16-Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ airophin-13b-pntk-16k-fp16-Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
+ airophin-13b-pntk-16k-fp16-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ airophin-13b-pntk-16k-fp16-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ airophin-13b-pntk-16k-fp16-Q5_0.gguf filter=lfs diff=lfs merge=lfs -text
+ airophin-13b-pntk-16k-fp16-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ airophin-13b-pntk-16k-fp16-Q5_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ airophin-13b-pntk-16k-fp16-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ airophin-13b-pntk-16k-fp16-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,78 @@
+ ---
+ datasets:
+ - jondurbin/airoboros-gpt4-1.4.1
+ - ehartford/dolphin
+ base_model: bhenrym14/airophin-13b-pntk-16k-fp16
+ tags:
+ - TensorBlock
+ - GGUF
+ ---
+
+ <div style="width: auto; margin-left: auto; margin-right: auto">
+ <img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+ </div>
+ <div style="display: flex; justify-content: space-between; width: 100%;">
+ <div style="display: flex; flex-direction: column; align-items: flex-start;">
+ <p style="margin-top: 0.5em; margin-bottom: 0em;">
+ Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a>, and <a href="https://x.com/tensorblock_aoi">Discord server</a>
+ </p>
+ </div>
+ </div>
+
+ ## bhenrym14/airophin-13b-pntk-16k-fp16 - GGUF
+
+ This repo contains GGUF format model files for [bhenrym14/airophin-13b-pntk-16k-fp16](https://huggingface.co/bhenrym14/airophin-13b-pntk-16k-fp16).
+
+ The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
+
+ <div style="text-align: left; margin: 20px 0;">
+ <a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
+ Run them on the TensorBlock client using your local machine ↗
+ </a>
+ </div>
+
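+ To confirm that your local llama.cpp build is recent enough, you can check the build information it reports. This is a minimal sketch, assuming a fresh CMake build of llama.cpp on Linux/macOS; the clone path and build directory below are examples, not part of this repo:
+
+ ```shell
+ # Build llama.cpp from source and print its version/build information
+ git clone https://github.com/ggerganov/llama.cpp
+ cd llama.cpp
+ cmake -B build && cmake --build build --config Release
+ ./build/bin/llama-cli --version
+ ```
+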
+ ## Prompt template
+
+ ```
+
+ ```
+
+ ## Model file specification
+
+ | Filename | Quant type | File Size | Description |
+ | -------- | ---------- | --------- | ----------- |
+ | [airophin-13b-pntk-16k-fp16-Q2_K.gguf](https://huggingface.co/tensorblock/airophin-13b-pntk-16k-fp16-GGUF/blob/main/airophin-13b-pntk-16k-fp16-Q2_K.gguf) | Q2_K | 4.521 GB | smallest, significant quality loss - not recommended for most purposes |
+ | [airophin-13b-pntk-16k-fp16-Q3_K_S.gguf](https://huggingface.co/tensorblock/airophin-13b-pntk-16k-fp16-GGUF/blob/main/airophin-13b-pntk-16k-fp16-Q3_K_S.gguf) | Q3_K_S | 5.270 GB | very small, high quality loss |
+ | [airophin-13b-pntk-16k-fp16-Q3_K_M.gguf](https://huggingface.co/tensorblock/airophin-13b-pntk-16k-fp16-GGUF/blob/main/airophin-13b-pntk-16k-fp16-Q3_K_M.gguf) | Q3_K_M | 5.903 GB | very small, high quality loss |
+ | [airophin-13b-pntk-16k-fp16-Q3_K_L.gguf](https://huggingface.co/tensorblock/airophin-13b-pntk-16k-fp16-GGUF/blob/main/airophin-13b-pntk-16k-fp16-Q3_K_L.gguf) | Q3_K_L | 6.454 GB | small, substantial quality loss |
+ | [airophin-13b-pntk-16k-fp16-Q4_0.gguf](https://huggingface.co/tensorblock/airophin-13b-pntk-16k-fp16-GGUF/blob/main/airophin-13b-pntk-16k-fp16-Q4_0.gguf) | Q4_0 | 6.860 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+ | [airophin-13b-pntk-16k-fp16-Q4_K_S.gguf](https://huggingface.co/tensorblock/airophin-13b-pntk-16k-fp16-GGUF/blob/main/airophin-13b-pntk-16k-fp16-Q4_K_S.gguf) | Q4_K_S | 6.913 GB | small, greater quality loss |
+ | [airophin-13b-pntk-16k-fp16-Q4_K_M.gguf](https://huggingface.co/tensorblock/airophin-13b-pntk-16k-fp16-GGUF/blob/main/airophin-13b-pntk-16k-fp16-Q4_K_M.gguf) | Q4_K_M | 7.326 GB | medium, balanced quality - recommended |
+ | [airophin-13b-pntk-16k-fp16-Q5_0.gguf](https://huggingface.co/tensorblock/airophin-13b-pntk-16k-fp16-GGUF/blob/main/airophin-13b-pntk-16k-fp16-Q5_0.gguf) | Q5_0 | 8.356 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
+ | [airophin-13b-pntk-16k-fp16-Q5_K_S.gguf](https://huggingface.co/tensorblock/airophin-13b-pntk-16k-fp16-GGUF/blob/main/airophin-13b-pntk-16k-fp16-Q5_K_S.gguf) | Q5_K_S | 8.356 GB | large, low quality loss - recommended |
+ | [airophin-13b-pntk-16k-fp16-Q5_K_M.gguf](https://huggingface.co/tensorblock/airophin-13b-pntk-16k-fp16-GGUF/blob/main/airophin-13b-pntk-16k-fp16-Q5_K_M.gguf) | Q5_K_M | 8.596 GB | large, very low quality loss - recommended |
+ | [airophin-13b-pntk-16k-fp16-Q6_K.gguf](https://huggingface.co/tensorblock/airophin-13b-pntk-16k-fp16-GGUF/blob/main/airophin-13b-pntk-16k-fp16-Q6_K.gguf) | Q6_K | 9.946 GB | very large, extremely low quality loss |
+ | [airophin-13b-pntk-16k-fp16-Q8_0.gguf](https://huggingface.co/tensorblock/airophin-13b-pntk-16k-fp16-GGUF/blob/main/airophin-13b-pntk-16k-fp16-Q8_0.gguf) | Q8_0 | 12.881 GB | very large, extremely low quality loss - not recommended |
+
+
+ ## Downloading instructions
+
+ ### Command line
+
+ First, install the Hugging Face Hub CLI:
+
+ ```shell
+ pip install -U "huggingface_hub[cli]"
+ ```
+
+ Then, download an individual model file to a local directory:
+
+ ```shell
+ huggingface-cli download tensorblock/airophin-13b-pntk-16k-fp16-GGUF --include "airophin-13b-pntk-16k-fp16-Q2_K.gguf" --local-dir MY_LOCAL_DIR
+ ```
+
+ If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
+
+ ```shell
+ huggingface-cli download tensorblock/airophin-13b-pntk-16k-fp16-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
+ ```
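+
+ Once a file is downloaded, you can run it directly with llama.cpp. The following is a minimal sketch, assuming a llama.cpp build at or after the commit referenced above and that the Q4_K_M file was saved to `MY_LOCAL_DIR`; adjust the context size (`-c`) and GPU offload (`-ngl`) to your hardware:
+
+ ```shell
+ # Generate a completion using the model's 16k context window
+ ./build/bin/llama-cli \
+   -m MY_LOCAL_DIR/airophin-13b-pntk-16k-fp16-Q4_K_M.gguf \
+   -c 16384 \
+   -ngl 99 \
+   -n 256 \
+   -p "Explain what GGUF quantization does in two sentences."
+ ```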
airophin-13b-pntk-16k-fp16-Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f687783a76463a329f42c502c33b75edcbd626d6788e7e261d0fb5067f761eab
+ size 4854270336
airophin-13b-pntk-16k-fp16-Q3_K_L.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ffbcf9f153f6b5a112b610da586abf3349afcc21fa0d392da8e4dd5215eeae6c
+ size 6929559936
airophin-13b-pntk-16k-fp16-Q3_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:812e89644a49554e135204ebbd6f0395c9932aa18165ae2335178e1702931e8b
+ size 6337769856
airophin-13b-pntk-16k-fp16-Q3_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:766f052a1293d6c2f5714d084a43817c8ff756fa09d08b20818cd493a5e54544
+ size 5658980736
airophin-13b-pntk-16k-fp16-Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:48a2a5873b959a099dabd9dcca614676dbd4d3af92f252e12e76f8cb2a2a0ce1
+ size 7365835136
airophin-13b-pntk-16k-fp16-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:28c5a98da418dc5b5e549d8b2d21012374c420b21ec7132334c3df2c17e1747e
+ size 7865956736
airophin-13b-pntk-16k-fp16-Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:45fdd83016f0fda38d28b76733afa64ffee96cb95015e706c2ee95d9c1326834
+ size 7423179136
airophin-13b-pntk-16k-fp16-Q5_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:08e3e9d6384580ba5f076def5f4f7d0ff604cba8bda94a2998cc6fdf117173c1
+ size 8972286336
airophin-13b-pntk-16k-fp16-Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c08926890f8c4ff218de223198a5346f6623449049f5f39cdcb9424b49eb40cc
+ size 9229924736
airophin-13b-pntk-16k-fp16-Q5_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1c28219d97c6546298a1c272d4a74c0b4b2dc661fe94e456b95babb8256307a8
+ size 8972286336
airophin-13b-pntk-16k-fp16-Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:162b6b6e84e6aaf5fd00de0f6ff3ab9b80362c584203aa5c56a6d481d55146e7
+ size 10679140736
airophin-13b-pntk-16k-fp16-Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e948dafdd00409804e68ef06cac433c5524d85888ff3392bfb6a3d5647271cab
+ size 13831319936