##
```
compute_environment: LOCAL_MACHINE
deepspeed_config: {}
+distributed_type: MULTI_GPU
downcast_bf16: 'no'
dynamo_backend: 'NO'
fsdp_config: {}
+gpu_ids: all
+machine_rank: 0
main_training_function: main
megatron_lm_config: {}
mixed_precision: 'no'
+num_machines: 1
+num_processes: 4
+rdzv_backend: static
+same_network: true
use_cpu: false
```
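Once the configuration is in place, you can sanity-check what Accelerate will pick up by printing the current environment and default config (a quick aside; `accelerate env` is the CLI command that reports this):
```
accelerate env
```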
##
If the YAML was generated through the `accelerate config` command: | |
``` | |
accelerate launch {script_name.py} {--arg1} {--arg2} ... | |
``` | |
If the YAML is saved to a `~/config.yaml` file: | |
``` | |
accelerate launch --config_file ~/config.yaml {script_name.py} {--arg1} {--arg2} ... | |
``` | |
Or you can pass the right configuration parameters directly to `accelerate launch` and skip the `config.yaml` file entirely:
``` | |
accelerate launch --multi_gpu --num_processes=4 {script_name.py} {--arg1} {--arg2} ... | |
``` | |
## | |
Launching on multi-GPU instances requires a different launch command than just `python myscript.py`. Accelerate wraps the appropriate launcher for you, reading how each process should be configured from the config file or the parameters passed in; under the hood it is a passthrough to the `torchrun` command.
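For illustration, launching with the multi-GPU configuration shown above corresponds roughly to the following direct `torchrun` invocation (a sketch only; the exact flags Accelerate forwards may differ):
```
torchrun --nproc_per_node=4 --nnodes=1 --node_rank=0 {script_name.py} {--arg1} {--arg2} ...
```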
**Remember that you can always use the `accelerate launch` functionality, even if the code in your script does not use the `Accelerator`.**
## | |
To learn more, check out the related documentation:
- <a href="https://huggingface.co/docs/accelerate/main/en/basic_tutorials/launch" target="_blank">Launching distributed code</a> | |
- <a href="https://huggingface.co/docs/accelerate/main/en/package_reference/cli" target="_blank">The Command Line</a> |