runtime error
Exit code: 1. Reason:
vocab.json: 100%|██████████| 1.61M/1.61M [00:00<00:00, 27.7MB/s]
merges.txt: 100%|██████████| 917k/917k [00:00<00:00, 45.0MB/s]
tokenizer.json: 100%|██████████| 4.25M/4.25M [00:00<00:00, 19.3MB/s]
added_tokens.json: 100%|██████████| 2.50k/2.50k [00:00<00:00, 18.0MB/s]
special_tokens_map.json: 100%|██████████| 99.0/99.0 [00:00<00:00, 667kB/s]
Using GPU: NVIDIA A100-SXM4-80GB MIG 3g.40gb
/usr/local/lib/python3.10/site-packages/gradio/components/chatbot.py:255: UserWarning: The 'tuples' format for chatbot messages is deprecated and will be removed in a future version of Gradio. Please set type='messages' instead, which uses openai-style 'role' and 'content' keys.
  warnings.warn(
Will cache examples in '/home/user/app/.gradio/cached_examples/22' directory at first use.
ZeroGPU tensors packing:   0%|          | 0.00/29.3G [00:00<?, ?B/s]
Traceback (most recent call last):
  File "/home/user/app/app.py", line 106, in <module>
    demo.launch()
  File "/usr/local/lib/python3.10/site-packages/spaces/zero/gradio.py", line 142, in launch
    task(*task_args, **task_kwargs)
  File "/usr/local/lib/python3.10/site-packages/spaces/zero/torch/patching.py", line 344, in pack
    _pack(Config.zerogpu_offload_dir)
  File "/usr/local/lib/python3.10/site-packages/spaces/zero/torch/patching.py", line 336, in _pack
    pack = pack_tensors(originals, fakes, offload_dir, callback=update)
  File "/usr/local/lib/python3.10/site-packages/spaces/zero/torch/packing.py", line 114, in pack_tensors
    os.posix_fallocate(fd, 0, total_asize)
OSError: [Errno 28] No space left on device