Core dumped when merging DeepSeek-V3-Q2_K_XS
#12 opened 2 days ago by MyJerry1996
First review: Q5_K_M requires 502 GB RAM, better than Meta's 405B
#11 opened 9 days ago by krustik · 5 comments
Issue with --n-gpu-layers 5 Parameter: Model Only Running on CPU
#10 opened 10 days ago by vuk123 · 11 comments
I’m new to GGUF quants
#9 opened 10 days ago by fsaudm · 1 comment
I loaded DeepSeek-V3-Q5_K_M on my 10-year-old Tesla M40 (Dell C4130)
#8 opened 12 days ago by gng2info · 3 comments
Why use q5 for the key cache?
#7 opened 12 days ago by CHNtentes · 1 comment
What GPU memory is required to run this? Is a 4090 possible, and does it support Ollama?
#5 opened 13 days ago by sminbb · 9 comments
I'm a newbie. How do I use this?
#4 opened 13 days ago by huangkk · 1 comment
Getting an error with Q3_K_M
#2 opened 13 days ago by alain401 · 7 comments
Are these imatrix GGUF quants?
#1 opened 14 days ago by Kearm · 4 comments