HuggingChat is lagging with models like qwen-coder
#624
by
rishadsojon
- opened
Whenever I use the qwen-coder model (for coding-related problems, obviously), it maxes out my CPU, starts to lag after a while when generating a large reply, and eventually freezes the whole page. I understand that a Pentium G3240 isn't a strong performer, but it's good enough to handle OpenAI's ChatGPT, and most importantly it can handle Llama too, so I think the problem is either with the model itself or that it needs more processing power. I'd be glad if the devs fixed this excessive resource usage issue.
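For context, freezes like this during long streamed replies are often a client-side rendering issue rather than a model issue: re-rendering the page for every incoming token can peg a weaker CPU. A common mitigation (a hypothetical sketch, not HuggingChat's actual code; `TokenBatcher` and its methods are illustrative names) is to buffer streamed tokens and update the DOM once per flush instead of once per token:

```typescript
// Hypothetical sketch: batch streamed tokens so the UI re-renders
// once per drain() call instead of once per token.
class TokenBatcher {
  private buffer: string[] = [];

  // flush() is whatever actually updates the DOM in a real UI.
  constructor(private flush: (chunk: string) => void) {}

  // Called for every token arriving from the stream.
  push(token: string): void {
    this.buffer.push(token);
  }

  // In a real UI this would run on a timer or requestAnimationFrame.
  drain(): void {
    if (this.buffer.length === 0) return;
    this.flush(this.buffer.join(""));
    this.buffer = [];
  }
}

// Usage: four tokens arrive, but only one render happens on drain().
const rendered: string[] = [];
const batcher = new TokenBatcher((chunk) => rendered.push(chunk));
["Hel", "lo", ", ", "world"].forEach((t) => batcher.push(t));
batcher.drain();
```

With this pattern the per-token cost is just an array push; the expensive DOM work is amortized across many tokens, which is exactly the kind of change that helps low-end CPUs like the G3240.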