I just added my hardware to my HF profile and found out that I'm officially GPU-poor according to HF. The worst part is that after finding out I'm GPU-poor, I noticed that the 3050 Laptop GPU is listed only as the 6GB version, so I'm even worse off with my 4GB version. Back when I bought the laptop I didn't know anything about GPUs, so I just assumed any recent GPU would be good enough after almost five years on the integrated graphics of an Intel i5-8250U. I think Nvidia could really show a heart for people like me who suffer the consequences of their VRAM stinginess, especially now that they've gotten AI-rich. "Give them as little VRAM as we possibly can" seems to be their unofficial company policy; I don't see how else you could justify downgrading to 8GB in the 4060 from 12GB in the 3060. If the rumors are true and the 5060 is also only getting 8GB, it's getting ridiculous that they offered 50% more VRAM two generations ago. Maybe handing out some free GPUs would be a good way to make up for that and generate some positive publicity.
Chris (WesPro)
WesPro's activity
"Good underappreciated model. Using Q4_K_L quant with great results." (6 replies) #3 opened 4 days ago by Nazosan
Replied to nroggendorff's post 2 days ago
"repetitive" (3 replies) #9 opened 12 days ago by Utochi
"German example translation" #1 opened 21 days ago by WesPro
"Where is the model from?" (1 reply) #1 opened 26 days ago by ethanc8
"Cannot use in SillyTavern" (9 replies) #1 opened about 1 month ago by isr431
"Best Mistral-Nemo 12B Finetune" (4 replies) #1 opened about 1 month ago by WesPro
"question" (9 replies) #1 opened 3 months ago by WesPro
"Question" (2 replies) #1 opened 2 months ago by WesPro
"Eidolon-v2.1-14B" (13 replies) #379 opened 3 months ago by WesPro
"Impressive as always. Qwen2.5 32B?" (6 replies) #1 opened 3 months ago by MRGRD56
"ValueError: Can not map tensor 'model.layers.0.mlp.down_proj.weight.absmax'" (5 replies) #1 opened 6 months ago by kowal66b