runtime error
Exit code: 1. Reason:
Using device: cuda
adapter_config.json: 100%|██████████| 988/988 [00:00<00:00, 7.13MB/s]
config.json: 100%|██████████| 1.97k/1.97k [00:00<00:00, 20.8MB/s]
model.safetensors: 100%|█████████▉| 967M/967M [00:02<00:00, 405MB/s]
Traceback (most recent call last):
  File "/home/user/app/app.py", line 40, in <module>
    model = WhisperForConditionalGeneration.from_pretrained(peft_config.base_model_name_or_path, device_map=device)
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4014, in from_pretrained
    ) = cls._load_pretrained_model(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4502, in _load_pretrained_model
    new_error_msgs, offload_index, state_dict_index = _load_state_dict_into_meta_model(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 973, in _load_state_dict_into_meta_model
    set_module_tensor_to_device(model, param_name, param_device, **set_module_kwargs)
  File "/usr/local/lib/python3.10/site-packages/accelerate/utils/modeling.py", line 329, in set_module_tensor_to_device
    new_value = value.to(device)
  File "/usr/local/lib/python3.10/site-packages/torch/cuda/__init__.py", line 293, in _lazy_init
    torch._C._cuda_init()
RuntimeError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx
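The traceback shows app.py unconditionally reporting "Using device: cuda" and passing that device as device_map, while the container has no NVIDIA driver, so torch._C._cuda_init() raises the RuntimeError. Below is a minimal sketch of a guarded device selection; the adapter repo id is a placeholder, and the use of PeftConfig to resolve base_model_name_or_path is an assumption based only on what the traceback shows.

import torch
from peft import PeftConfig
from transformers import WhisperForConditionalGeneration

# Pick the device defensively: a CPU-only container has no NVIDIA driver,
# so torch.cuda.is_available() returns False and we fall back to "cpu".
device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Using device: {device}")

# "your-username/your-whisper-lora" is a hypothetical adapter repo id;
# the traceback only reveals that peft_config.base_model_name_or_path is used.
peft_config = PeftConfig.from_pretrained("your-username/your-whisper-lora")
model = WhisperForConditionalGeneration.from_pretrained(
    peft_config.base_model_name_or_path,
    device_map=device,  # now "cpu" when no GPU driver is present
)

Assigning GPU hardware to the Space would also make the original code path work, but the fallback above keeps it from crashing on CPU-only hardware.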