## Example configuration
<pre>
compute_environment: LOCAL_MACHINE
deepspeed_config: {}
+distributed_type: MULTI_GPU
downcast_bf16: 'no'
dynamo_backend: 'NO'
fsdp_config: {}
+gpu_ids: all
+machine_rank: 0
main_training_function: main
megatron_lm_config: {}
mixed_precision: 'no'
+num_machines: 1
+num_processes: 4
+rdzv_backend: static
+same_network: true
use_cpu: false
</pre>
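To make the fields above concrete, the handful of keys the launcher cares about can be pulled out with a naive line parser. This is only an illustrative sketch over a subset of the keys shown above; Accelerate itself loads the file with a proper YAML parser.

```python
# A subset of the config.yaml shown above (hypothetical inline copy for illustration).
config_text = """\
compute_environment: LOCAL_MACHINE
distributed_type: MULTI_GPU
gpu_ids: all
machine_rank: 0
num_machines: 1
num_processes: 4
mixed_precision: 'no'
use_cpu: false
"""

# Naive "key: value" parsing -- not how Accelerate actually reads the file.
config = {}
for line in config_text.splitlines():
    key, _, value = line.partition(": ")
    config[key] = value.strip("'")

print(f"launching {config['num_processes']} processes on "
      f"{config['num_machines']} machine(s), "
      f"distributed_type={config['distributed_type']}")
```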
## Launching the script
If the YAML was generated through the `accelerate config` command:
```
accelerate launch {script_name.py} {--arg1} {--arg2} ...
```

If the YAML is saved to a `~/config.yaml` file:
```
accelerate launch --config_file ~/config.yaml {script_name.py} {--arg1} {--arg2} ...
```

Or you can use `accelerate launch` with the right configuration parameters directly and skip the `config.yaml` file entirely:
```
accelerate launch --multi_gpu --num_processes=4 {script_name.py} {--arg1} {--arg2} ...
```

## How it works
Launching on multi-GPU instances requires a different launch command than just `python myscript.py`. Accelerate wraps the proper launching utility: it reads the configuration from the parameters you pass in (or from the config file) and delegates the call accordingly. Under the hood, it is a passthrough to the `torchrun` command.
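Because the launcher is a passthrough to `torchrun`, each worker process it spawns receives its coordinates through the standard `torchrun` environment variables. A minimal sketch of reading them (run standalone with no launcher, the defaults describe a single process):

```python
import os

# torchrun (and therefore `accelerate launch`) sets these for every worker.
rank = int(os.environ.get("RANK", 0))              # global process index
world_size = int(os.environ.get("WORLD_SIZE", 1))  # total number of processes
local_rank = int(os.environ.get("LOCAL_RANK", 0))  # GPU index on this machine

print(f"process {rank}/{world_size}, local GPU {local_rank}")
```

With `accelerate launch --multi_gpu --num_processes=4`, each of the four processes would print a different `rank` and `local_rank`.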

**Remember that you can always use the `accelerate launch` functionality, even if the code in your script does not use the `Accelerator`**
## Further reading
To learn more, check out the related documentation:
- <a href="https://huggingface.co/docs/accelerate/main/en/basic_tutorials/launch" target="_blank">Launching distributed code</a>
- <a href="https://huggingface.co/docs/accelerate/main/en/package_reference/cli" target="_blank">The Command Line</a>