Error upon every generation even with examples

#2
by nextgen5 - opened

What's the solution?
[screenshot: image.png]

Thank you for your attention! Processing can exceed the GPU time that ZeroGPU allocates. If you'd like to give it a try, you can duplicate the demo and assign a paid GPU for extended use. Sorry for the inconvenience.

SherryX pinned discussion

It's not that; it errors after 13 seconds.

Yes, even when I trim the video to one second and set the upscale to 1, it fails with an internal error.

Mine has been erroring every day like this.

Logs:

Attempted to select a non-interactive or hidden tab.
Attempted to select a non-interactive or hidden tab.
2025-01-19 01:50:22,976 - video_to_video - INFO - checkpoint_path: ./pretrained_weight
WARNING:root:Pretrained weights (/home/test/Workspace/yhliu/VSR/ours/checkpoints/open_clip_pytorch_model.bin) not found for model ViT-H-14.Available pretrained tags (['laion2b_s32b_b79k'].
Traceback (most recent call last):
  File "/home/user/.pyenv/versions/3.10.16/lib/python3.10/site-packages/gradio/queueing.py", line 622, in process_events
    response = await route_utils.call_process_api(
  File "/home/user/.pyenv/versions/3.10.16/lib/python3.10/site-packages/gradio/route_utils.py", line 323, in call_process_api
    output = await app.get_blocks().process_api(
  File "/home/user/.pyenv/versions/3.10.16/lib/python3.10/site-packages/gradio/blocks.py", line 2016, in process_api
    result = await self.call_function(
  File "/home/user/.pyenv/versions/3.10.16/lib/python3.10/site-packages/gradio/blocks.py", line 1569, in call_function
    prediction = await anyio.to_thread.run_sync( # type: ignore
  File "/home/user/.pyenv/versions/3.10.16/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "/home/user/.pyenv/versions/3.10.16/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2461, in run_sync_in_worker_thread
    return await future
  File "/home/user/.pyenv/versions/3.10.16/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 962, in run
    result = context.run(func, *args)
  File "/home/user/.pyenv/versions/3.10.16/lib/python3.10/site-packages/gradio/utils.py", line 846, in wrapper
    response = f(*args, **kwargs)
  File "/home/user/app/app.py", line 17, in enhance_with_gpu
    star = STAR_sr(
  File "/home/user/app/video_super_resolution/scripts/inference_sr.py", line 41, in __init__
    self.model = VideoToVideo_sr(model_cfg)
  File "/home/user/app/video_to_video/video_to_video_model.py", line 40, in __init__
    text_encoder = FrozenOpenCLIPEmbedder(device=self.device, pretrained="/home/test/Workspace/yhliu/VSR/ours/checkpoints/open_clip_pytorch_model.bin")
  File "/home/user/app/video_to_video/modules/embedder.py", line 27, in __init__
    model, _, _ = open_clip.create_model_and_transforms(arch, device=torch.device('cpu'), pretrained=pretrained)
  File "/home/user/.pyenv/versions/3.10.16/lib/python3.10/site-packages/open_clip/factory.py", line 308, in create_model_and_transforms
    model = create_model(
  File "/home/user/.pyenv/versions/3.10.16/lib/python3.10/site-packages/open_clip/factory.py", line 234, in create_model
    raise RuntimeError(error_str)
RuntimeError: Pretrained weights (/home/test/Workspace/yhliu/VSR/ours/checkpoints/open_clip_pytorch_model.bin) not found for model ViT-H-14.Available pretrained tags (['laion2b_s32b_b79k'].
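For context, the traceback shows the text encoder being built from a hardcoded local checkpoint path (/home/test/Workspace/yhliu/VSR/ours/checkpoints/open_clip_pytorch_model.bin) that does not exist inside the Space, so open_clip refuses to load ViT-H-14. A minimal sketch of a workaround, assuming the standard open_clip API and using the pretrained tag the error itself lists as available:

```python
# Sketch only: load ViT-H-14 via the pretrained tag reported in the
# RuntimeError instead of a hardcoded local .bin path. open_clip downloads
# the weights on first use (requires network access in the Space).
import torch
import open_clip

model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-H-14",
    device=torch.device("cpu"),
    pretrained="laion2b_s32b_b79k",  # tag listed in the error above
)
```

Alternatively, the open_clip_pytorch_model.bin file can be downloaded into the Space and its actual path passed as `pretrained`.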

Hi @Rattata @filmenor @quickFast @coolkisdmx,
Thank you for your patience! We've identified the problem and fixed the code. Since inference time exceeds the ZeroGPU limit, you can try duplicating the updated version and running it with a paid GPU. Please let us know if everything works as expected.

Sorry for any inconvenience caused by the earlier issue. If you encounter any further problems or have additional suggestions, please share them with us.

Unfortunately, it still doesn't work even with a paid GPU. I've tried many tiers of the paid GPUs and still get the error. Is there one you would suggest?

I've encountered this issue as well, and I found that it depends on the VRAM configuration you're using. In my case, it worked successfully with a setup of 1x L40S and 1x L4 GPU. To improve inference speed, you'll need to modify the CUDA device handling in both inference_sr.py and video_to_video_model.py to enable parallel computing, along the lines of the sketch below. I'm currently working on these optimizations and will share the specific changes I've made soon.
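In the meantime, here is a minimal, self-contained sketch of the kind of device split I mean (not the final patch), assuming two visible GPUs with cuda:0 as the L40S for the VSR backbone and cuda:1 as the L4 for the text encoder; the prompt string is just an example:

```python
# Hypothetical sketch: keep the CLIP text encoder on the smaller GPU and
# reserve the larger GPU for the VSR backbone, handing embeddings across.
import torch
import open_clip

main_device = torch.device("cuda:0")  # e.g. the L40S: VSR backbone
aux_device = torch.device("cuda:1")   # e.g. the L4: CLIP text encoder

# Text encoder lives entirely on the auxiliary GPU.
clip_model, _, _ = open_clip.create_model_and_transforms(
    "ViT-H-14", pretrained="laion2b_s32b_b79k", device=aux_device
)
tokenizer = open_clip.get_tokenizer("ViT-H-14")

with torch.no_grad():
    tokens = tokenizer(["a high quality video"]).to(aux_device)
    # Encode on the auxiliary GPU, then move the embeddings to the main
    # GPU before conditioning the backbone there.
    text_features = clip_model.encode_text(tokens).to(main_device)
```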
