MaziyarPanahi/Q2.5-Veltha-14B-0.5-GGUF
Tags: Text Generation · GGUF · mistral · quantized · 2-bit · 3-bit · 4-bit precision · 5-bit · 6-bit · 8-bit precision · conversational
Q2.5-Veltha-14B-0.5-GGUF
1 contributor · History: 7 commits
Latest commit by MaziyarPanahi: c2d4af407405763b47cb48cfac1f73753bedc692d40aa5b01707b4ef55404512 · 0d7a6cf (verified) · 26 days ago
| File | Safe | Size | LFS | Last commit | Updated |
|---|---|---|---|---|---|
| .gitattributes | Safe | 1.93 kB | | c2d4af407405763b47cb48cfac1f73753bedc692d40aa5b01707b4ef55404512 | 26 days ago |
| Q2.5-Veltha-14B-0.5-GGUF_imatrix.dat | | 8.56 MB | LFS | c2d4af407405763b47cb48cfac1f73753bedc692d40aa5b01707b4ef55404512 | 26 days ago |
| Q2.5-Veltha-14B-0.5.Q5_K_M.gguf | Safe | 10.5 GB | LFS | c01f6bc6d5a154fff384e4c34ecc6b415bd9c3bef1f594af2456b76fb6ce5f18 | 26 days ago |
| Q2.5-Veltha-14B-0.5.Q5_K_S.gguf | Safe | 10.3 GB | LFS | 59d8e1bd37a070300fa09f8fba2d13487c0c1b9a695bfccb20617dedd493f4e9 | 26 days ago |
| Q2.5-Veltha-14B-0.5.Q6_K.gguf | Safe | 12.1 GB | LFS | 39b7a92e7c83989a6f2058574fdadb6969ffd50570bf37bdb06599eedccb90d5 | 26 days ago |
| Q2.5-Veltha-14B-0.5.Q8_0.gguf | Safe | 15.7 GB | LFS | 903dbba8eb782fb26f8eb42f676f3e1a6a2201005e1350222fedd28fa54ab711 | 26 days ago |
| Q2.5-Veltha-14B-0.5.fp16.gguf | Safe | 29.5 GB | LFS | 7d36fe6b6e6a94ef3f2465f40bdf962dfac893d5c6be9facf1d26fec1ce26818 | 26 days ago |
| README.md | Safe | 2.93 kB | | c2d4af407405763b47cb48cfac1f73753bedc692d40aa5b01707b4ef55404512 | 26 days ago |
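
Any of the .gguf files above can be downloaded individually and run with a GGUF-compatible runtime. Below is a minimal sketch, not an official usage recipe from this repo: it assumes the `huggingface_hub` and `llama-cpp-python` packages are installed, uses the repo id and the Q5_K_M filename from the listing above, and the context size and prompt are illustrative placeholders.

```python
# Minimal sketch: fetch one GGUF quant from this repo and run it locally.
# Assumes huggingface_hub and llama-cpp-python are installed; any other
# .gguf file from the table above can be substituted for the Q5_K_M variant.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="MaziyarPanahi/Q2.5-Veltha-14B-0.5-GGUF",
    filename="Q2.5-Veltha-14B-0.5.Q5_K_M.gguf",  # 10.5 GB per the file list
)

# n_ctx and the prompt are placeholders, not values prescribed by the repo.
llm = Llama(model_path=model_path, n_ctx=4096)
output = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain GGUF quantization in one paragraph."}]
)
print(output["choices"][0]["message"]["content"])
```

As a general rule, the larger quants listed here (Q6_K, Q8_0, fp16) need more RAM or VRAM but preserve more of the original model quality, while the smaller K-quants trade some quality for a smaller footprint.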