Victor Major
vmajor
AI & ML interests
Application of ML and AI to the classification of physical-world inputs, in particular the ability of models to generalize to unseen real-world data and independently categorize new observations.
Recent Activity
new activity, about 19 hours ago: unsloth/DeepSeek-R1-GGUF: Are the Q4 and Q5 models R1 or R1-Zero
new activity, 1 day ago: unsloth/DeepSeek-V3-GGUF: Advice on running llama-server with Q2_K_L quant
new activity, 13 days ago: unsloth/DeepSeek-V3-GGUF: llama.cpp cannot load Q6_K model
Organizations
None yet
vmajor's activity
Are the Q4 and Q5 models R1 or R1-Zero (5), #2 opened 1 day ago by gng2info
Advice on running llama-server with Q2_K_L quant (3), #6 opened 13 days ago by vmajor
llama.cpp cannot load Q6_K model (5), #3 opened 14 days ago by vmajor
How do I make the model output JSON? (6), #14 opened 2 months ago by vmajor
add merge tag, #1 opened about 1 year ago by davanstrien
Benchmark pipeline broken? (1), #418 opened about 1 year ago by vmajor
Pytorch format available? (3), #7 opened about 1 year ago by vmajor
Thank you (23), #9 opened about 1 year ago by ehartford
Question about being able to load the model (1), #2 opened about 1 year ago by vmajor
Ability to generalise (6), #1 opened over 1 year ago by vmajor
Benchmarks or quality/context length charts?, #1 opened over 1 year ago by vmajor
Is there a way to provide instruction?, #7 opened over 1 year ago by vmajor
fp16 version (4), #2 opened over 1 year ago by vmajor
Is it possible to run this model on the CPU? (1), #20 opened over 1 year ago by vmajor
COVID-19? (2), #1 opened over 1 year ago by vmajor
Still referring to COVID-19? (2), #1 opened over 1 year ago by vmajor
stable-vicuna-13B-GPTQ-4bit.compat.no-act-order.safetensors not compatible with "standard" settings (25), #1 opened over 1 year ago by vmajor
Loading and interacting with Stable-vicuna-13B-GPTQ through python without webui (22), #6 opened over 1 year ago by AbdouS