Steve Li (CHNtentes)
AI & ML interests: None yet
Recent Activity
- city96/HunyuanVideo-gguf: Can we use "temporal tiling support" with these gguf models? (new activity, 5 days ago)
- deepseek-ai/DeepSeek-V3: minimum vram? (new activity, 8 days ago)
- Qwen/QVQ-72B-Preview: GGUF weights? (new activity, 9 days ago)
Organizations: None yet
CHNtentes's activity
- Can we use "temporal tiling support" with these gguf models? · 1 · #10 opened 5 days ago by CHNtentes
- minimum vram? · 8 · #9 opened 8 days ago by CHNtentes
- GGUF weights? · 7 · #1 opened 10 days ago by luijait
- How was r7b? · 6 · #3 opened 20 days ago by MRU4913
- transformers version? · 1 · #5 opened about 2 months ago by CHNtentes
- Q4_0, Q4_1, Q5_0, Q5_1 can be dropped? · 1 · #1 opened about 2 months ago by CHNtentes
- Where is 't5xxl.safetensors'? · 4 · #12 opened 2 months ago by ajavamind
- πΌπΌπΌ · 2 · #3 opened 2 months ago by clem
- Hardware requirements · 6 · #10 opened 4 months ago by ZahirHamroune
- T4 - bfloat 16 not support · 10 · #2 opened 4 months ago by SylvainV
- 🚩 Report: Spam · #150 opened 4 months ago by CHNtentes
- Is it using ggml to compute? · 1 · #30 opened 4 months ago by CHNtentes
- For the fastest inference on 12GB VRAM, are the following GGUF models appropriate to use? · 3 · #4 opened 4 months ago by ViratX
- Inquiry on Minimum Configuration and Cost for Running Gemma-2-9B Model Efficiently · 3 · #39 opened 5 months ago by ltkien2003
- Error in readme? · 1 · #6 opened 5 months ago by CHNtentes
- Good work! · 1 · #1 opened 5 months ago by CHNtentes
- Compared to the regular FP8 model, what is the better performance of the 8BIT model here · 4 · #16 opened 5 months ago by demo001s
- Please explain the difference between the two models · 3 · #11 opened 5 months ago by martjay
- k-quants possible? · 5 · #2 opened 5 months ago by CHNtentes
- weight dtype "default" very slow · 3 · #44 opened 5 months ago by D3NN15