Model parameters: d_model 1792, ffw_size 7168, kv_size 128, n_heads 14, n_layers 26

Megatron-DeepSpeed/pretrain_gpt.py \
    --tensor-model-parallel-size 1 --pipeline-model-parallel-size 1 \
    --num-layers 26 --hidden-size 1792 --num-attention-heads 14 --kv-channels 128 --ffn-hidden-size 7168 \
    --seq-length 2048 --max-position-embeddings 2048 \
    --micro-batch-size 4 --global-batch-size 256 --train-samples 740_269 \
    --vocab-file gpt2/vocab.json --merge-file gpt2/merges.txt \
    --loss-scale 12 --clip-grad 1.0 --kill-switch-path kill-switch-1b1 --bf16 \
    --optimizer adam --adam-beta1 0.9 --adam-beta2 0.999 --adam-eps 1e-8 \
    --lr 2e-4 --min-lr 2e-5 --lr-decay-style cosine --lr-decay-samples 740_269 --lr-warmup-samples 7403 \
    --clip-grad 1.0 --weight-decay 1e-1 \
    --log-interval 10 --save-interval 1000 --eval-interval 1000 --eval-iters 1 \
    --tensorboard-dir tensorboard_1b1 --tensorboard-queue-size 5 \
    --log-timers-to-tensorboard --log-batch-size-to-tensorboard --log-validation-ppl-to-tensorboard \
    --save checkpoints_1b1 --load checkpoints_1b1 \
    --data-path /scratch/project_462000119/data/pile/megatron_data/meg-gpt2_pile_text_document \
    --data-impl mmap --split 949,50,1 \
    --deepspeed --deepspeed_config ds_configs/2068467.json --zero-stage 0

START 2068467: Thu Nov 24 20:22:00 EET 2022

0: ======================= ROCm System Management Interface =======================
0: ================================= Concise Info =================================
0: GPU  Temp   AvgPwr  SCLK    MCLK     Fan  Perf  PwrCap  VRAM%  GPU%
0: 0    42.0c  91.0W   800Mhz  1600Mhz  0%   auto  560.0W  0%     0%
0: 1    47.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W    0%     0%
0: 2    46.0c  86.0W   800Mhz  1600Mhz  0%   auto  560.0W  0%     0%
0: 3    42.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W    0%     0%
0: 4    44.0c  84.0W   800Mhz  1600Mhz  0%   auto  560.0W  0%     0%
0: 5    44.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W    0%     0%
0: 6    40.0c  89.0W   800Mhz  1600Mhz  0%   auto  560.0W  0%     0%
0: 7    46.0c  N/A     800Mhz  1600Mhz  0%   auto  0.0W    0%     0%
0: ================================================================================
0: ============================= End of ROCm SMI Log ==============================
5: Launching on nid005046 (5/8), master nid005000 port 9999, GPUs 8, CUDA: True
3: Launching on nid005003 (3/8), master nid005000 port 9999, GPUs 8, CUDA: True
2: Launching on nid005002 (2/8), master nid005000 port 9999, GPUs 8, CUDA: True
7: Launching on nid005048 (7/8), master nid005000 port 9999, GPUs 8, CUDA: True
6: Launching on nid005047 (6/8), master nid005000 port 9999, GPUs 8, CUDA: True
4: Launching on nid005004 (4/8), master nid005000 port 9999, GPUs 8, CUDA: True
0: Launching on nid005000 (0/8), master nid005000 port 9999, GPUs 8, CUDA: True
1: Launching on nid005001 (1/8), master nid005000 port 9999, GPUs 8, CUDA: True
0: using world size: 64, data-parallel-size: 64, tensor-model-parallel size: 1, pipeline-model-parallel size: 1
0: accumulate and all-reduce gradients in fp32 for bfloat16 data type.
0: using torch.bfloat16 for parameters ...
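The parallelism layout above (64-way data parallel, no tensor or pipeline parallelism) fixes the batch geometry, and the iteration count Megatron derives from --train-samples can be checked by hand. A minimal sketch, with the numbers taken from the launch flags and assuming plain integer division (which matches the "setting training iterations to 2891" line later in this log):

```python
# Sanity-check the batch geometry implied by the launch flags above.
micro_batch_size = 4        # --micro-batch-size
data_parallel_size = 64     # 8 nodes x 8 GPUs, TP=1, PP=1
global_batch_size = 256     # --global-batch-size
train_samples = 740_269     # --train-samples
lr_warmup_samples = 7_403   # --lr-warmup-samples

# With one gradient-accumulation step, DP * micro batch must equal the global batch.
grad_accum = global_batch_size // (micro_batch_size * data_parallel_size)
assert grad_accum == 1      # consistent with "setting number of micro-batches to constant 1"

# Optimizer steps derived from --train-samples / --lr-warmup-samples.
train_iters = train_samples // global_batch_size
warmup_iters = lr_warmup_samples // global_batch_size
print(train_iters, warmup_iters)   # 2891 28
```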
0: ------------------------ arguments ------------------------
0: abort_on_unmet_fused_kernel_constraints ........ False
0: accumulate_allreduce_grads_in_fp32 ........ True
0: adam_beta1 ........ 0.9
0: adam_beta2 ........ 0.999
0: adam_eps ........ 1e-08
0: adlr_autoresume ........ False
0: adlr_autoresume_interval ........ 1000
0: apply_query_key_layer_scaling ........ True
0: apply_residual_connection_post_layernorm ........ False
0: attention_dropout ........ 0.1
0: attention_softmax_in_fp32 ........ False
0: bert_binary_head ........ True
0: bert_load ........ None
0: bf16 ........ True
0: bias_dropout_fusion ........ True
0: bias_gelu_fusion ........ True
0: biencoder_projection_dim ........ 0
0: biencoder_shared_query_context_model ........ False
0: block_data_path ........ None
0: checkpoint_activations ........ False
0: checkpoint_in_cpu ........ False
0: checkpoint_num_layers ........ 1
0: clip_grad ........ 1.0
0: codecarbon_dir ........ None
0: consumed_train_samples ........ 0
0: consumed_train_tokens ........ 0
0: consumed_valid_samples ........ 0
0: contigious_checkpointing ........ False
0: cpu_optimizer ........ False
0: cpu_torch_adam ........ False
0: curriculum_learning ........ False
0: data_impl ........ mmap
0: data_parallel_size ........ 64
0: data_path ........ ['/scratch/project_462000119/data/pile/megatron_data/meg-gpt2_pile_text_document']
0: dataloader_type ........ single
0: DDP_impl ........ local
0: decoder_seq_length ........ None
0: deepscale ........ False
0: deepscale_config ........ None
0: deepspeed ........ True
0: deepspeed_activation_checkpointing ........ False
0: deepspeed_config ........ ds_configs/2068467.json
0: deepspeed_mpi ........ False
0: distribute_checkpointed_activations ........ False
0: distributed_backend ........ nccl
0: embed_layernorm ........ False
0: embedding_path ........ None
0: encoder_seq_length ........ 2048
0: eod_mask_loss ........ False
0: eval_interval ........ 1000
0: eval_iters ........ 1
0: eval_only ........ None
0: evidence_data_path ........ None
0: exit_duration_in_mins ........ None
0: exit_interval ........ None
0: ffn_hidden_size ........ 7168
0: finetune ........ False
0: fp16 ........ False
0: fp16_lm_cross_entropy ........ False
0: fp32_residual_connection ........ False
0: gigaflos_no_embeds ........ 0
0: global_batch_size ........ 256
0: glu_activation ........ None
0: hidden_dropout ........ 0.1
0: hidden_size ........ 1792
0: hysteresis ........ 2
0: ict_head_size ........ None
0: ict_load ........ None
0: img_dim ........ 224
0: indexer_batch_size ........ 128
0: indexer_log_interval ........ 1000
0: inference ........ False
0: init_method_std ........ 0.02
0: init_method_xavier_uniform ........ False
0: initial_loss_scale ........ 4294967296
0: kill_switch_path ........ kill-switch-1b1
0: kv_channels ........ 128
0: layer_norm_fusion ........ True
0: layernorm_epsilon ........ 1e-05
0: lazy_mpu_init ........ None
0: load ........ checkpoints_1b1
0: local_rank ........ None
0: log_batch_size_to_tensorboard ........ True
0: log_interval ........ 10
0: log_learning_rate_to_tensorboard ........ True
0: log_level ........ None
0: log_level_replica ........ None
0: log_loss_scale_to_tensorboard ........ True
0: log_num_zeros_in_grad ........ False
0: log_params_norm ........ False
0: log_path ........ None
0: log_timers_to_tensorboard ........ True
0: log_validation_ppl_to_tensorboard ........ True
0: loss_on_targets_only ........ False
0: loss_scale ........ 12.0
0: loss_scale_window ........ 1000
0: lr ........ 0.0002
0: lr_decay_iters ........ None
0: lr_decay_samples ........ 740269
0: lr_decay_style ........ cosine
0: lr_decay_tokens ........ None
0: lr_warmup_fraction ........ None
0: lr_warmup_iters ........ 0
0: lr_warmup_samples ........ 7403
0: make_vocab_size_divisible_by ........ 128
0: mask_prob ........ 0.15
0: masked_softmax_fusion ........ True
0: max_position_embeddings ........ 2048
0: mean_noise_span_length ........ None
0: memory_centric_tiled_linear ........ False
0: merge_file ........ gpt2/merges.txt
0: micro_batch_size ........ 4
0: min_loss_scale ........ 1.0
0: min_lr ........ 2e-05
0: mmap_warmup ........ False
0: no_load_optim ........ None
0: no_load_rng ........ None
0: no_save_optim ........ None
0: no_save_rng ........ None
0: noise_density ........ None
0: num_attention_heads ........ 14
0: num_channels ........ 3
0: num_classes ........ 1000
0: num_layers ........ 26
0: num_layers_per_virtual_pipeline_stage ........ None
0: num_workers ........ 2
0: onnx_safe ........ None
0: openai_gelu ........ False
0: optimizer ........ adam
0: optimizer_fusion ........ True
0: override_lr_scheduler ........ False
0: pad_vocab_size_to ........ None
0: params_dtype ........ torch.bfloat16
0: partition_activations ........ False
0: patch_dim ........ 16
0: pipeline_model_parallel_size ........ 1
0: position_embedding_type ........ PositionEmbeddingType.absolute
0: pp_partition_method ........ None
0: profile_backward ........ False
0: query_in_block_prob ........ 0.1
0: rampup_batch_size ........ None
0: rank ........ 0
0: remote_device ........ none
0: reset_attention_mask ........ False
0: reset_position_ids ........ False
0: retriever_report_topk_accuracies ........ []
0: retriever_score_scaling ........ False
0: retriever_seq_length ........ 256
0: reweight_loss_based_on_position_frequency ........ False
0: sample_rate ........ 1.0
0: save ........ checkpoints_1b1
0: save_interval ........ 1000
0: scatter_gather_tensors_in_pipeline ........ True
0: scattered_embeddings ........ False
0: seed ........ 1234
0: seq_length ........ 2048
0: sgd_momentum ........ 0.9
0: short_seq_prob ........ 0.1
0: skip_train_iteration_range ........ None
0: split ........ 949,50,1
0: split_transformers ........ False
0: sync_tp_duplicated_parameters ........ False
0: synchronize_each_layer ........ False
0: tensor_model_parallel_size ........ 1
0: tensorboard_dir ........ tensorboard_1b1
0: tensorboard_log_interval ........ 1
0: tensorboard_queue_size ........ 5
0: test_weighted_split_names ........ None
0: test_weighted_split_paths ........ None
0: test_weighted_split_paths_path ........ None
0: test_weighted_split_splits ........ None
0: test_weighted_split_weights ........ None
0: tile_factor ........ 1
0: titles_data_path ........ None
0: tokenizer_name_or_path ........ None
0: tokenizer_type ........ GPT2BPETokenizer
0: train_iters ........ None
0: train_samples ........ 740269
0: train_tokens ........ None
0: train_weighted_split_paths ........ None
0: train_weighted_split_paths_path ........ None
0: universal_checkpoint ........ False
0: use_bnb_optimizer ........ False
0: use_checkpoint_lr_scheduler ........ False
0: use_contiguous_buffers_in_ddp ........ True
0: use_cpu_initialization ........ None
0: use_one_sent_docs ........ False
0: use_pin_memory ........ False
0: valid_num_workers ........ 2
0: valid_weighted_split_names ........ None
0: valid_weighted_split_paths ........ None
0: valid_weighted_split_paths_path ........ None
0: valid_weighted_split_splits ........ None
0: valid_weighted_split_weights ........ None
0: virtual_pipeline_model_parallel_size ........ None
0: vocab_extra_ids ........ 0
0: vocab_file ........ gpt2/vocab.json
0: weight_decay ........ 0.1
0: world_size ........ 64
0: zero_allgather_bucket_size ........ 0.0
0: zero_contigious_gradients ........ False
0: zero_reduce_bucket_size ........ 0.0
0: zero_reduce_scatter ........ False
0: zero_stage ........ 0
0: -------------------- end of arguments ---------------------
0: setting number of micro-batches to constant 1
0: > building GPT2BPETokenizer tokenizer ...
0: > padded vocab (size: 50257) with 47 dummy tokens (new size: 50304)
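The padded vocabulary reported above follows from rounding the GPT-2 vocabulary up to a multiple of make_vocab_size_divisible_by (times the tensor-parallel size); a small sketch of that arithmetic, with the rounding rule inferred from the flag name and the logged result:

```python
import math

vocab_size = 50_257                    # GPT2BPETokenizer
make_vocab_size_divisible_by = 128     # from the argument dump above
tensor_model_parallel_size = 1

multiple = make_vocab_size_divisible_by * tensor_model_parallel_size
padded = math.ceil(vocab_size / multiple) * multiple
print(padded, padded - vocab_size)     # 50304 47
```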
0: DeepSpeed general environment info:
0: torch install path ........ ['/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/venv/lib/python3.9/site-packages/torch']
0: torch version ........ 1.13.0+rocm5.2
0: torch cuda version ........ None
0: torch hip version ........ 5.2.21151-afdc89f8
0: nvcc version ........ None
0: deepspeed install path ........ ['/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/venv/lib/python3.9/site-packages/deepspeed']
0: deepspeed info ........ 0.7.5, unknown, unknown
0: deepspeed wheel compiled w. ........ torch 1.13, hip 5.1
0: **** Git info for Megatron: git_hash=unknown git_branch=unknown ****
0: > initializing torch distributed ...
0: [2022-11-24 20:23:07,768] [INFO] [comm.py:633:init_distributed] Initializing TorchBackend in DeepSpeed with backend nccl
7: > setting tensorboard ...
0: > initializing tensor model parallel with size 1
0: > initializing pipeline model parallel with size 1
0: > setting random seeds to 1234 ...
0: > initializing model parallel cuda seeds on global rank 0, model parallel rank 0, and data parallel rank 0 with model parallel seed: 3952 and data parallel seed: 1234
0: > compiling dataset index builder ...
0: make: Entering directory '/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/data'
0: make: Nothing to be done for 'default'.
0: make: Leaving directory '/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/data'
0: >>> done with dataset index builder. Compilation time: 0.100 seconds
0: > compiling and loading fused kernels ...
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax.cpp -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_upper_triang_masked_softmax_hip.cpp [skipped, already hipified]
0: Total number of unsupported CUDA function calls: 0
0: Total number of replaced kernel launches: 87
0: ninja: no work to do.
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax.cpp -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/scaled_masked_softmax_hip.cpp [skipped, already hipified]
0: Total number of unsupported CUDA function calls: 0
0: Total number of replaced kernel launches: 63
0: [1/1] c++ scaled_masked_softmax_hip.cuda.o scaled_masked_softmax_hip.o -shared -L/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/venv/lib/python3.9/site-packages/torch/lib -lc10 -lc10_hip -ltorch_cpu -ltorch_hip -ltorch -ltorch_python -L/pfs/lustrep2/projappl/project_462000125/samantao-public/rocm/rocm-5.2.3/lib -lamdhip64 -o scaled_masked_softmax_cuda.so
0: /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/layer_norm_cuda.cpp -> /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/Megatron-DeepSpeed/megatron/fused_kernels/layer_norm_cuda.cpp [skipped, no changes]
0: Total number of unsupported CUDA function calls: 0
0: Total number of replaced kernel launches: 67
0: ninja: no work to do.
0: >>> done with compiling and loading fused kernels. Compilation time: 19.557 seconds
0: time to initialize megatron (seconds): 36.732
0: [after megatron is initialized] datetime: 2022-11-24 20:23:32
0: building GPT model ...
0: [2022-11-24 20:23:32,784] [INFO] [utils.py:827:see_memory_usage] Before Building Model
0: [2022-11-24 20:23:32,785] [INFO] [utils.py:828:see_memory_usage] MA 0.0 GB         Max_MA 0.0 GB         CA 0.0 GB         Max_CA 0 GB
0: [2022-11-24 20:23:32,785] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 30.17 GB, percent = 6.0%
0: SEED_LAYERS=False BASE_SEED=1234 SEED_FN=None
0: Using topology: {ProcessCoord(pipe=0, data=0, model=0): 0, ProcessCoord(pipe=0, data=1, model=0): 1, ..., ProcessCoord(pipe=0, data=63, model=0): 63}
0: [2022-11-24 20:23:34,820] [INFO] [module.py:366:_partition_layers] Partitioning pipeline stages with method type:transformer
0: stage=0 layers=33
0:      0: _to_float16
0:      1: EmbeddingPipe
0:      2:
0:      3: ParallelTransformerLayerPipe
0:      4: ParallelTransformerLayerPipe
0:      5: ParallelTransformerLayerPipe
0:      6: ParallelTransformerLayerPipe
0:      7: ParallelTransformerLayerPipe
0:      8: ParallelTransformerLayerPipe
0:      9: ParallelTransformerLayerPipe
0:     10: ParallelTransformerLayerPipe
0:     11: ParallelTransformerLayerPipe
0:     12: ParallelTransformerLayerPipe
0:     13: ParallelTransformerLayerPipe
0:     14: ParallelTransformerLayerPipe
0:     15: ParallelTransformerLayerPipe
0:     16: ParallelTransformerLayerPipe
0:     17: ParallelTransformerLayerPipe
0:     18: ParallelTransformerLayerPipe
0:     19: ParallelTransformerLayerPipe
0:     20: ParallelTransformerLayerPipe
0:     21: ParallelTransformerLayerPipe
0:     22: ParallelTransformerLayerPipe
0:     23: ParallelTransformerLayerPipe
0:     24: ParallelTransformerLayerPipe
0:     25: ParallelTransformerLayerPipe
0:     26: ParallelTransformerLayerPipe
0:     27: ParallelTransformerLayerPipe
0:     28: ParallelTransformerLayerPipe
0:     29: undo
0:     30: MixedFusedLayerNorm
0:     31: EmbeddingPipe
0:     32: float16_to_fp32
0:   loss: CrossEntropy
0: [2022-11-24 20:23:35,450] [INFO] [utils.py:827:see_memory_usage] After Building Model
0: [2022-11-24 20:23:35,450] [INFO] [utils.py:828:see_memory_usage] MA 2.05 GB         Max_MA 2.05 GB         CA 2.19 GB         Max_CA 2 GB
0: [2022-11-24 20:23:35,450] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 30.19 GB, percent = 6.0%
0: setting training iterations to 2891
0: > learning rate decay style: cosine
0: DeepSpeed is enabled.
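The argument dump and the line above fix the learning-rate schedule: linear warmup over 7,403 samples, then cosine decay from 2e-4 to 2e-5 over 740,269 samples. A small sketch of that schedule follows, assuming the usual warmup-plus-cosine formulation in terms of consumed samples; the exact Megatron-DeepSpeed implementation may differ in boundary details, but the endpoints agree with the flags and with the step=0 lr=0.0 reported further down:

```python
import math

peak_lr, min_lr = 2e-4, 2e-5           # --lr / --min-lr
warmup_samples = 7_403                 # --lr-warmup-samples
decay_samples = 740_269                # --lr-decay-samples

def lr_at(consumed_samples: int) -> float:
    """Warmup-then-cosine schedule expressed in consumed training samples."""
    if consumed_samples < warmup_samples:
        # linear warmup from 0 up to the peak learning rate
        return peak_lr * consumed_samples / warmup_samples
    # cosine decay from peak_lr down to min_lr over the remaining samples
    s = min(consumed_samples, decay_samples)
    progress = (s - warmup_samples) / (decay_samples - warmup_samples)
    return min_lr + 0.5 * (peak_lr - min_lr) * (1.0 + math.cos(math.pi * progress))

print(lr_at(0), lr_at(7_403), lr_at(740_269))   # 0.0 0.0002 2e-05
```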
0: [2022-11-24 20:23:35,453] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed info: version=0.7.5, git-hash=unknown, git-branch=unknown
0: [2022-11-24 20:23:48,855] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed Flops Profiler Enabled: False
0: [2022-11-24 20:23:48,855] [INFO] [logging.py:68:log_dist] [Rank 0] Removing param_group that has no 'params' in the client Optimizer
0: [2022-11-24 20:23:48,855] [INFO] [logging.py:68:log_dist] [Rank 0] Using client Optimizer as basic optimizer
0: [2022-11-24 20:23:48,873] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed Basic Optimizer = FusedAdam
0: [2022-11-24 20:23:48,873] [INFO] [logging.py:68:log_dist] [Rank 0] Creating BF16 optimizer
0: [2022-11-24 20:23:48,914] [INFO] [utils.py:827:see_memory_usage] begin bf16_optimizer
0: [2022-11-24 20:23:48,915] [INFO] [utils.py:828:see_memory_usage] MA 2.04 GB         Max_MA 2.06 GB         CA 2.19 GB         Max_CA 2 GB
0: [2022-11-24 20:23:48,915] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 30.88 GB, percent = 6.1%
0: Time to load utils op: 0.20792245864868164 seconds
0: Time to load utils op: 0.20650672912597656 seconds
0: Time to load utils op: 0.20940756797790527 seconds
0: Time to load utils op: 0.20720505714416504 seconds
0: Time to load utils op: 0.2080376148223877 seconds
0: Time to load utils op: 0.20946335792541504 seconds
0: Time to load utils op: 0.3074929714202881 seconds
0: Time to load utils op: 0.20802569389343262 seconds
0: [2022-11-24 20:23:49,256] [INFO] [utils.py:827:see_memory_usage] before initializing group 0
0: [2022-11-24 20:23:49,256] [INFO] [utils.py:828:see_memory_usage] MA 2.04 GB         Max_MA 2.04 GB         CA 2.19 GB         Max_CA 2 GB
0: [2022-11-24 20:23:49,256] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 30.88 GB, percent = 6.1%
0: Time to load utils op: 0.0005240440368652344 seconds
0: Time to load utils op: 0.00043487548828125 seconds
0: Time to load utils op: 0.0005767345428466797 seconds
0: Time to load utils op: 0.0005452632904052734 seconds
0: Time to load utils op: 0.00041222572326660156 seconds
0: Time to load utils op: 0.00042128562927246094 seconds
0: Time to load utils op: 0.0004417896270751953 seconds
0: [2022-11-24 20:23:49,507] [INFO] [utils.py:827:see_memory_usage] after initializing group 0
0: [2022-11-24 20:23:49,508] [INFO] [utils.py:828:see_memory_usage] MA 4.24 GB         Max_MA 4.24 GB         CA 5.44 GB         Max_CA 5 GB
0: [2022-11-24 20:23:49,508] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 30.89 GB, percent = 6.1%
0: [2022-11-24 20:23:49,541] [INFO] [utils.py:827:see_memory_usage] before initializing group 1
0: [2022-11-24 20:23:49,542] [INFO] [utils.py:828:see_memory_usage] MA 4.24 GB         Max_MA 4.24 GB         CA 5.44 GB         Max_CA 5 GB
0: [2022-11-24 20:23:49,542] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 31.02 GB, percent = 6.2%
0: [2022-11-24 20:23:49,574] [INFO] [utils.py:827:see_memory_usage] after initializing group 1
0: [2022-11-24 20:23:49,575] [INFO] [utils.py:828:see_memory_usage] MA 6.19 GB         Max_MA 6.19 GB         CA 8.31 GB         Max_CA 8 GB
0: [2022-11-24 20:23:49,575] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 31.02 GB, percent = 6.2%
0: [2022-11-24 20:23:49,606] [INFO] [utils.py:827:see_memory_usage] before initializing group 2
0: [2022-11-24 20:23:49,606] [INFO] [utils.py:828:see_memory_usage] MA 6.19 GB         Max_MA 6.19 GB         CA 8.31 GB         Max_CA 8 GB
0: [2022-11-24 20:23:49,606] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 31.03 GB, percent = 6.2%
0: [2022-11-24 20:23:49,641] [INFO] [utils.py:827:see_memory_usage] after initializing group 2
0: [2022-11-24 20:23:49,641] [INFO] [utils.py:828:see_memory_usage] MA 6.19 GB         Max_MA 6.19 GB         CA 8.31 GB         Max_CA 8 GB
0: [2022-11-24 20:23:49,641] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 31.03 GB, percent = 6.2%
0: [2022-11-24 20:23:49,672] [INFO] [utils.py:827:see_memory_usage] before initialize_optimizer
0: [2022-11-24 20:23:49,673] [INFO] [utils.py:828:see_memory_usage] MA 6.19 GB         Max_MA 6.19 GB         CA 8.31 GB         Max_CA 8 GB
0: [2022-11-24 20:23:49,673] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 31.03 GB, percent = 6.2%
0: [2022-11-24 20:23:49,709] [INFO] [utils.py:827:see_memory_usage] end initialize_optimizer
0: [2022-11-24 20:23:49,709] [INFO] [utils.py:828:see_memory_usage] MA 6.32 GB         Max_MA 6.32 GB         CA 8.34 GB         Max_CA 8 GB
0: [2022-11-24 20:23:49,710] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 31.04 GB, percent = 6.2%
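The see_memory_usage deltas above are roughly what one would expect for a ~1.1B-parameter model under the BF16 optimizer: 2 bytes per parameter for the bf16 weights, plus 4 bytes per parameter for the fp32 master copy built up as the parameter groups are initialized. A rough estimate follows; the assumption that the Adam exp_avg/exp_avg_sq states and gradient buffers are allocated later (and so do not appear in these numbers yet) is mine, not something the log states:

```python
n_params = 1_096_338_432        # reported below as TOTAL_PARAMS
GiB = 1024 ** 3

bf16_weights = 2 * n_params / GiB   # ~2.04 GiB, matches "MA 2.04 GB" before the optimizer
fp32_master  = 4 * n_params / GiB   # ~4.08 GiB, roughly the growth from groups 0-2 (2.04 -> 6.19 GB)
adam_states  = 8 * n_params / GiB   # fp32 exp_avg + exp_avg_sq, assumed not yet allocated here

print(f"{bf16_weights:.2f} {fp32_master:.2f} {adam_states:.2f}")  # 2.04 4.08 8.17
```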
0: [2022-11-24 20:23:49,741] [INFO] [utils.py:827:see_memory_usage] end bf16_optimizer
0: [2022-11-24 20:23:49,741] [INFO] [utils.py:828:see_memory_usage] MA 6.32 GB         Max_MA 6.32 GB         CA 8.34 GB         Max_CA 8 GB
0: [2022-11-24 20:23:49,741] [INFO] [utils.py:836:see_memory_usage] CPU Virtual Memory: used = 31.04 GB, percent = 6.2%
0: [2022-11-24 20:23:49,741] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed Final Optimizer = FusedAdam
0: [2022-11-24 20:23:49,742] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed using client LR scheduler
0: [2022-11-24 20:23:49,742] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed LR Scheduler =
0: [2022-11-24 20:23:49,742] [INFO] [logging.py:68:log_dist] [Rank 0] step=0, skipped=0, lr=[0.0, 0.0, 0.0], mom=[(0.9, 0.999), (0.9, 0.999), (0.9, 0.999)]
0: [2022-11-24 20:23:49,742] [INFO] [config.py:1007:print] DeepSpeedEngine configuration:
0:   activation_checkpointing_config {
0:     "partition_activations": false,
0:     "contiguous_memory_optimization": false,
0:     "cpu_checkpointing": false,
0:     "number_checkpoints": null,
0:     "synchronize_checkpoint_boundary": false,
0:     "profile": false
0:   }
0:   aio_config ........ {'block_size': 1048576, 'queue_depth': 8, 'thread_count': 1, 'single_submit': False, 'overlap_events': True}
0:   amp_enabled ........ False
0:   amp_params ........ False
0:   autotuning_config ........ {
0:     "enabled": false,
0:     "start_step": null,
0:     "end_step": null,
0:     "metric_path": null,
0:     "arg_mappings": null,
0:     "metric": "throughput",
0:     "model_info": null,
0:     "results_dir": "/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/autotuning_results",
0:     "exps_dir": "/pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/autotuning_exps",
0:     "overwrite": true,
0:     "fast": true,
0:     "start_profile_step": 3,
0:     "end_profile_step": 5,
0:     "tuner_type": "gridsearch",
0:     "tuner_early_stopping": 5,
0:     "tuner_num_trials": 50,
0:     "model_info_path": null,
0:     "mp_size": 1,
0:     "max_train_batch_size": null,
0:     "min_train_batch_size": 1,
0:     "max_train_micro_batch_size_per_gpu": 1.024000e+03,
0:     "min_train_micro_batch_size_per_gpu": 1,
0:     "num_tuning_micro_batch_sizes": 3
0:   }
0:   bfloat16_enabled ........ True
0:   checkpoint_parallel_write_pipeline ........ False
0:   checkpoint_tag_validation_enabled ........ True
0:   checkpoint_tag_validation_fail ........ False
0:   comms_config ........
0:   communication_data_type ........ None
0:   compression_config ........ {'weight_quantization': {'shared_parameters': {'enabled': False, 'quantizer_kernel': False, 'schedule_offset': 0, 'quantize_groups': 1, 'quantize_verbose': False, 'quantization_type': 'symmetric', 'quantize_weight_in_forward': False, 'rounding': 'nearest', 'fp16_mixed_quantize': False, 'quantize_change_ratio': 0.001}, 'different_groups': {}}, 'activation_quantization': {'shared_parameters': {'enabled': False, 'quantization_type': 'symmetric', 'range_calibration': 'dynamic', 'schedule_offset': 1000}, 'different_groups': {}}, 'sparse_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'row_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'head_pruning': {'shared_parameters': {'enabled': False, 'method': 'topk', 'schedule_offset': 1000}, 'different_groups': {}}, 'channel_pruning': {'shared_parameters': {'enabled': False, 'method': 'l1', 'schedule_offset': 1000}, 'different_groups': {}}, 'layer_reduction': {'enabled': False}}
0:   curriculum_enabled ........ False
0:   curriculum_params ........ False
0:   dataloader_drop_last ........ False
0:   disable_allgather ........ False
0:   dump_state ........ False
0:   dynamic_loss_scale_args ........ None
0:   eigenvalue_enabled ........ False
0:   eigenvalue_gas_boundary_resolution ........ 1
0:   eigenvalue_layer_name ........ bert.encoder.layer
0:   eigenvalue_layer_num ........ 0
0:   eigenvalue_max_iter ........ 100
0:   eigenvalue_stability ........ 1e-06
0:   eigenvalue_tol ........ 0.01
0:   eigenvalue_verbose ........ False
0:   elasticity_enabled ........ False
0:   flops_profiler_config ........ {
0:     "enabled": false,
0:     "profile_step": 1,
0:     "module_depth": -1,
0:     "top_modules": 1,
0:     "detailed": true,
0:     "output_file": null
0:   }
0:   fp16_auto_cast ........ None
0:   fp16_enabled ........ False
0:   fp16_master_weights_and_gradients ........ False
0:   global_rank ........ 0
0:   gradient_accumulation_steps ........ 1
0:   gradient_clipping ........ 1.0
0:   gradient_predivide_factor ........ 1.0
0:   initial_dynamic_scale ........ 1
0:   load_universal_checkpoint ........ False
0:   loss_scale ........ 1.0
0:   memory_breakdown ........ False
0:   monitor_config ........
0:   nebula_config ........ {
0:     "enabled": false,
0:     "persistent_storage_path": null,
0:     "persistent_time_interval": 100,
0:     "num_of_version_in_retention": 2,
0:     "enable_nebula_load": true,
0:     "load_path": null
0:   }
0:   optimizer_legacy_fusion ........ False
0:   optimizer_name ........ None
0:   optimizer_params ........ None
0:   pipeline ........ {'stages': 'auto', 'partition': 'best', 'seed_layers': False, 'activation_checkpoint_interval': 0}
0:   pld_enabled ........ False
0:   pld_params ........ False
0:   prescale_gradients ........ False
0:   scheduler_name ........ None
0:   scheduler_params ........ None
0:   sparse_attention ........ None
0:   sparse_gradients_enabled ........ False
0:   steps_per_print ........ 2000
0:   train_batch_size ........ 256
0:   train_micro_batch_size_per_gpu ........ 4
0:   use_node_local_storage ........ False
0:   wall_clock_breakdown ........ False
0:   world_size ........ 64
0:   zero_allow_untested_optimizer ........ False
0:   zero_config ........ stage=0 contiguous_gradients=True reduce_scatter=True reduce_bucket_size=500000000 allgather_partitions=True allgather_bucket_size=500000000 overlap_comm=False load_from_fp32_weights=True elastic_checkpoint=False offload_param=None offload_optimizer=None sub_group_size=1000000000 cpu_offload_param=None cpu_offload_use_pin_memory=None cpu_offload=None prefetch_bucket_size=50000000 param_persistence_threshold=100000 model_persistence_threshold=9223372036854775807 max_live_parameters=1000000000 max_reuse_distance=1000000000 gather_16bit_weights_on_model_save=False stage3_gather_fp16_weights_on_model_save=False ignore_unused_parameters=True legacy_stage1=False round_robin_gradients=False
0:   zero_enabled ........ False
0:   zero_optimization_stage ........ 0
0: [2022-11-24 20:23:49,744] [INFO] [config.py:996:print_user_config] json = {
0:   "train_micro_batch_size_per_gpu": 4,
0:   "train_batch_size": 256,
0:   "gradient_clipping": 1.0,
0:   "zero_optimization": {
0:     "stage": 0
0:   },
0:   "bf16": {
0:     "enabled": true
0:   },
0:   "steps_per_print": 2.000000e+03,
0:   "wall_clock_breakdown": false
0: }
0: Time to load utils op: 0.0004105567932128906 seconds
0: [2022-11-24 20:23:49,745] [INFO] [engine.py:87:__init__] CONFIG: micro_batches=1 micro_batch_size=4
0: [2022-11-24 20:23:49,766] [INFO] [engine.py:145:__init__] RANK=0 STAGE=0 LAYERS=33 [0, 33) STAGE_PARAMS=1096338432 (1096.338M) TOTAL_PARAMS=1096338432 (1096.338M) UNIQUE_PARAMS=1096338432 (1096.338M)
0: [2022-11-24 20:23:49,771] [WARNING] [engine.py:2581:load_checkpoint] Unable to find latest file at checkpoints_1b1/latest, if trying to load latest checkpoint please ensure this file exists or pass an explicit checkpoint tag when loading a checkpoint.
0: WARNING: could not find the metadata file checkpoints_1b1
0: will not load any checkpoints and will start from random
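The 1,096,338,432 parameters reported for the single pipeline stage can be reproduced from the model dimensions. A back-of-the-envelope count, assuming the standard GPT-2-style parameterization (linear layers with biases, two LayerNorms per block plus a final one, tied input/output embeddings, learned absolute position embeddings); the breakdown is mine, only the total is confirmed by the log:

```python
d_model, ffw, n_layers = 1792, 7168, 26
vocab_padded, seq_len = 50_304, 2048

# One transformer layer: fused QKV + output projection, two MLP matmuls, biases, 2 LayerNorms.
attn  = d_model * 3 * d_model + 3 * d_model      # QKV weight + bias
attn += d_model * d_model + d_model              # attention output projection
mlp   = d_model * ffw + ffw                      # h -> 4h
mlp  += ffw * d_model + d_model                  # 4h -> h
lnorm = 2 * 2 * d_model                          # two LayerNorms, weight + bias each
per_layer = attn + mlp + lnorm

total = (n_layers * per_layer
         + vocab_padded * d_model                # token embeddings (tied with the LM head)
         + seq_len * d_model                     # absolute position embeddings
         + 2 * d_model)                          # final LayerNorm
print(total)                                     # 1096338432
```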
1: [2022-11-24 20:23:49,772] [WARNING] [engine.py:2581:load_checkpoint] Unable to find latest file at checkpoints_1b1/latest, if trying to load latest checkpoint please ensure this file exists or pass an explicit checkpoint tag when loading a checkpoint. 1: [2022-11-24 20:23:49,772] [WARNING] [engine.py:2581:load_checkpoint] Unable to find latest file at checkpoints_1b1/latest, if trying to load latest checkpoint please ensure this file exists or pass an explicit checkpoint tag when loading a checkpoint. 7: time (ms) | load-checkpoint: 7.68 0: estimated model parameters: 1.096338432 0: estimated model parameters without embeddings: 1.002523648 0: [after model, optimizer, and learning rate scheduler are built] datetime: 2022-11-24 20:23:50 0: > building train, validation, and test datasets ... 0: > datasets target sizes (minimum size): 0: train: 740269 0: validation: 768 0: test: 256 0: > building train, validation, and test datasets for GPT ... 0: > building dataset index ... 0: reading sizes... 0: reading pointers... 0: reading document index... 0: creating numpy buffer of mmap... 0: creating memory view of numpy buffer... 0: > finished creating indexed dataset in 0.007337 seconds 0: number of documents: 210604984 0: > dataset split: 0: train: 0: document indices in [0, 199864130) total of 199864130 documents 0: validation: 0: document indices in [199864130, 210394379) total of 10530249 documents 0: test: 0: document indices in [210394379, 210604984) total of 210605 documents 0: > WARNING: could not find index map files, building the indices on rank 0 ... 0: > only one epoch required, setting separate_last_epoch to False 0: > elasped time to build and save doc-idx mapping (seconds): 14.660523 0: using: 0: number of documents: 199864130 0: number of epochs: 1 0: sequence length: 2048 0: total number of samples: 173377816 0: > elasped time to build and save sample-idx mapping (seconds): 4.179427 0: > building shuffle index with split [0, 173377816) and [173377816, 173377816) ... 0: > elasped time to build and save shuffle-idx mapping (seconds): 10.306571 0: > loading doc-idx mapping from /scratch/project_462000119/data/pile/megatron_data/meg-gpt2_pile_text_document_train_indexmap_740269ns_2048sl_1234s_doc_idx.npy 0: > loading sample-idx mapping from /scratch/project_462000119/data/pile/megatron_data/meg-gpt2_pile_text_document_train_indexmap_740269ns_2048sl_1234s_sample_idx.npy 0: > loading shuffle-idx mapping from /scratch/project_462000119/data/pile/megatron_data/meg-gpt2_pile_text_document_train_indexmap_740269ns_2048sl_1234s_shuffle_idx.npy 0: loaded indexed file in 0.094 seconds 0: total number of samples: 173377817 0: total number of epochs: 1 0: > WARNING: could not find index map files, building the indices on rank 0 ... 0: > only one epoch required, setting separate_last_epoch to False 0: > elasped time to build and save doc-idx mapping (seconds): 0.496899 0: using: 0: number of documents: 10530249 0: number of epochs: 1 0: sequence length: 2048 0: total number of samples: 9118344 0: > elasped time to build and save sample-idx mapping (seconds): 0.216813 0: > building shuffle index with split [0, 9118344) and [9118344, 9118344) ... 
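The train/validation/test document ranges reported just above follow from the --split 949,50,1 weights applied to the 210,604,984 indexed documents. A small sketch of the rounding arithmetic (Megatron computes this internally; the function below is only an illustration and may differ from the exact implementation in edge cases):

```python
# Reproduce the document-split boundaries reported in the log (illustrative).
def split_boundaries(weights, num_documents):
    total = sum(weights)
    bounds, acc = [0], 0.0
    for w in weights:
        acc += w / total
        bounds.append(int(round(acc * num_documents)))
    bounds[-1] = num_documents  # guard the last boundary against rounding drift
    return bounds

print(split_boundaries([949, 50, 1], 210_604_984))
# [0, 199864130, 210394379, 210604984] -> the train / validation / test ranges above
```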
0: > elasped time to build and save shuffle-idx mapping (seconds): 0.265463 0: > loading doc-idx mapping from /scratch/project_462000119/data/pile/megatron_data/meg-gpt2_pile_text_document_valid_indexmap_768ns_2048sl_1234s_doc_idx.npy 0: > loading sample-idx mapping from /scratch/project_462000119/data/pile/megatron_data/meg-gpt2_pile_text_document_valid_indexmap_768ns_2048sl_1234s_sample_idx.npy 0: > loading shuffle-idx mapping from /scratch/project_462000119/data/pile/megatron_data/meg-gpt2_pile_text_document_valid_indexmap_768ns_2048sl_1234s_shuffle_idx.npy 0: loaded indexed file in 0.039 seconds 0: total number of samples: 9118345 0: total number of epochs: 1 0: > loading doc-idx mapping from /scratch/project_462000119/data/pile/megatron_data/meg-gpt2_pile_text_document_test_indexmap_256ns_2048sl_1234s_doc_idx.npy 0: > loading sample-idx mapping from /scratch/project_462000119/data/pile/megatron_data/meg-gpt2_pile_text_document_test_indexmap_256ns_2048sl_1234s_sample_idx.npy 0: > loading shuffle-idx mapping from /scratch/project_462000119/data/pile/megatron_data/meg-gpt2_pile_text_document_test_indexmap_256ns_2048sl_1234s_shuffle_idx.npy 0: loaded indexed file in 0.068 seconds 0: total number of samples: 182928 0: total number of epochs: 1 0: > finished creating GPT datasets ... 0: [after dataloaders are built] datetime: 2022-11-24 20:24:35 0: done with setup ... 0: training ... 0: Number of parameters: [tensor rank - pipeline rank] w/ and w/o embeddings: 7: time (ms) | model-and-optimizer-setup: 17666.43 | train/valid/test-data-iterators-setup: 45499.83 0: [000-000] 1.0963B / 1.0025B 0: [before the start of training step] datetime: 2022-11-24 20:24:36 0: [Rank 0] (after 10 iterations) memory (MB) | allocated: 10138.55712890625 | max allocated: 54070.994140625 | reserved: 55702.0 | max reserved: 55702.0 7: iteration 10/ 2891 | consumed samples: 2560 | consumed tokens: 5242880 | elapsed time per iteration (s): 3.17 | learning rate: 6.916E-05 | global batch size: 256 | lm loss: 9.765865E+00 | grad norm: 2.642 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 80.797 | TFLOPs: 19.55 | 7: iteration 20/ 2891 | consumed samples: 5120 | consumed tokens: 10485760 | elapsed time per iteration (s): 1.28 | learning rate: 1.383E-04 | global batch size: 256 | lm loss: 8.146842E+00 | grad norm: 2.867 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.251 | TFLOPs: 48.46 | 7: iteration 30/ 2891 | consumed samples: 7680 | consumed tokens: 15728640 | elapsed time per iteration (s): 1.29 | learning rate: 2.000E-04 | global batch size: 256 | lm loss: 7.337408E+00 | grad norm: 0.904 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 198.368 | TFLOPs: 48.00 | 7: iteration 40/ 2891 | consumed samples: 10240 | consumed tokens: 20971520 | elapsed time per iteration (s): 1.29 | learning rate: 2.000E-04 | global batch size: 256 | lm loss: 7.187230E+00 | grad norm: 1.020 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 197.864 | TFLOPs: 47.88 | 7: iteration 50/ 2891 | consumed samples: 12800 | consumed tokens: 26214400 | elapsed time per iteration (s): 1.30 | learning rate: 2.000E-04 | global batch size: 256 | lm loss: 6.993211E+00 | grad norm: 1.010 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 196.599 | TFLOPs: 47.58 | 7: iteration 60/ 2891 | 
consumed samples: 15360 | consumed tokens: 31457280 | elapsed time per iteration (s): 1.27 | learning rate: 1.999E-04 | global batch size: 256 | lm loss: 6.864358E+00 | grad norm: 1.450 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.155 | TFLOPs: 48.68 | 7: iteration 70/ 2891 | consumed samples: 17920 | consumed tokens: 36700160 | elapsed time per iteration (s): 1.33 | learning rate: 1.999E-04 | global batch size: 256 | lm loss: 6.687963E+00 | grad norm: 0.927 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 191.837 | TFLOPs: 46.42 | 7: iteration 80/ 2891 | consumed samples: 20480 | consumed tokens: 41943040 | elapsed time per iteration (s): 1.32 | learning rate: 1.999E-04 | global batch size: 256 | lm loss: 6.524078E+00 | grad norm: 1.539 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 193.935 | TFLOPs: 46.93 | 7: iteration 90/ 2891 | consumed samples: 23040 | consumed tokens: 47185920 | elapsed time per iteration (s): 1.31 | learning rate: 1.998E-04 | global batch size: 256 | lm loss: 6.442695E+00 | grad norm: 0.947 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 195.661 | TFLOPs: 47.35 | 7: iteration 100/ 2891 | consumed samples: 25600 | consumed tokens: 52428800 | elapsed time per iteration (s): 1.34 | learning rate: 1.997E-04 | global batch size: 256 | lm loss: 6.283909E+00 | grad norm: 0.491 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 190.825 | TFLOPs: 46.18 | 7: iteration 110/ 2891 | consumed samples: 28160 | consumed tokens: 57671680 | elapsed time per iteration (s): 1.36 | learning rate: 1.996E-04 | global batch size: 256 | lm loss: 6.177509E+00 | grad norm: 0.893 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 188.043 | TFLOPs: 45.50 | 7: iteration 120/ 2891 | consumed samples: 30720 | consumed tokens: 62914560 | elapsed time per iteration (s): 1.31 | learning rate: 1.996E-04 | global batch size: 256 | lm loss: 6.081058E+00 | grad norm: 0.769 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 195.175 | TFLOPs: 47.23 | 7: iteration 130/ 2891 | consumed samples: 33280 | consumed tokens: 68157440 | elapsed time per iteration (s): 1.29 | learning rate: 1.994E-04 | global batch size: 256 | lm loss: 5.956179E+00 | grad norm: 0.468 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 198.579 | TFLOPs: 48.05 | 7: iteration 140/ 2891 | consumed samples: 35840 | consumed tokens: 73400320 | elapsed time per iteration (s): 1.33 | learning rate: 1.993E-04 | global batch size: 256 | lm loss: 5.895684E+00 | grad norm: 1.031 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 192.735 | TFLOPs: 46.64 | 7: iteration 150/ 2891 | consumed samples: 38400 | consumed tokens: 78643200 | elapsed time per iteration (s): 1.34 | learning rate: 1.992E-04 | global batch size: 256 | lm loss: 5.840030E+00 | grad norm: 0.556 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 190.486 | TFLOPs: 46.10 | 7: iteration 160/ 2891 | consumed samples: 40960 | consumed tokens: 83886080 | elapsed time per iteration (s): 1.34 | learning rate: 1.991E-04 | global batch size: 256 | lm loss: 
5.732294E+00 | grad norm: 0.774 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 191.243 | TFLOPs: 46.28 | 7: iteration 170/ 2891 | consumed samples: 43520 | consumed tokens: 89128960 | elapsed time per iteration (s): 1.28 | learning rate: 1.989E-04 | global batch size: 256 | lm loss: 5.720359E+00 | grad norm: 0.544 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 199.358 | TFLOPs: 48.24 | 7: iteration 180/ 2891 | consumed samples: 46080 | consumed tokens: 94371840 | elapsed time per iteration (s): 1.29 | learning rate: 1.988E-04 | global batch size: 256 | lm loss: 5.621735E+00 | grad norm: 0.568 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 197.849 | TFLOPs: 47.88 | 7: iteration 190/ 2891 | consumed samples: 48640 | consumed tokens: 99614720 | elapsed time per iteration (s): 1.32 | learning rate: 1.986E-04 | global batch size: 256 | lm loss: 5.592916E+00 | grad norm: 0.628 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 193.723 | TFLOPs: 46.88 | 7: iteration 200/ 2891 | consumed samples: 51200 | consumed tokens: 104857600 | elapsed time per iteration (s): 1.34 | learning rate: 1.984E-04 | global batch size: 256 | lm loss: 5.548849E+00 | grad norm: 0.676 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 190.961 | TFLOPs: 46.21 | 7: iteration 210/ 2891 | consumed samples: 53760 | consumed tokens: 110100480 | elapsed time per iteration (s): 1.27 | learning rate: 1.982E-04 | global batch size: 256 | lm loss: 5.514834E+00 | grad norm: 0.624 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.962 | TFLOPs: 48.63 | 7: iteration 220/ 2891 | consumed samples: 56320 | consumed tokens: 115343360 | elapsed time per iteration (s): 1.29 | learning rate: 1.980E-04 | global batch size: 256 | lm loss: 5.464627E+00 | grad norm: 1.075 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 198.527 | TFLOPs: 48.04 | 7: iteration 230/ 2891 | consumed samples: 58880 | consumed tokens: 120586240 | elapsed time per iteration (s): 1.29 | learning rate: 1.978E-04 | global batch size: 256 | lm loss: 5.458317E+00 | grad norm: 0.516 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 197.832 | TFLOPs: 47.87 | 7: iteration 240/ 2891 | consumed samples: 61440 | consumed tokens: 125829120 | elapsed time per iteration (s): 1.34 | learning rate: 1.976E-04 | global batch size: 256 | lm loss: 5.414064E+00 | grad norm: 0.392 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 190.782 | TFLOPs: 46.17 | 7: iteration 250/ 2891 | consumed samples: 64000 | consumed tokens: 131072000 | elapsed time per iteration (s): 1.29 | learning rate: 1.974E-04 | global batch size: 256 | lm loss: 5.348185E+00 | grad norm: 0.478 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 198.265 | TFLOPs: 47.98 | 7: iteration 260/ 2891 | consumed samples: 66560 | consumed tokens: 136314880 | elapsed time per iteration (s): 1.31 | learning rate: 1.971E-04 | global batch size: 256 | lm loss: 5.327634E+00 | grad norm: 0.475 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 196.119 | 
TFLOPs: 47.46 | 7: iteration 270/ 2891 | consumed samples: 69120 | consumed tokens: 141557760 | elapsed time per iteration (s): 1.31 | learning rate: 1.969E-04 | global batch size: 256 | lm loss: 5.248993E+00 | grad norm: 0.402 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 195.331 | TFLOPs: 47.27 | 7: iteration 280/ 2891 | consumed samples: 71680 | consumed tokens: 146800640 | elapsed time per iteration (s): 1.29 | learning rate: 1.966E-04 | global batch size: 256 | lm loss: 5.236750E+00 | grad norm: 0.702 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 197.965 | TFLOPs: 47.91 | 7: iteration 290/ 2891 | consumed samples: 74240 | consumed tokens: 152043520 | elapsed time per iteration (s): 1.27 | learning rate: 1.963E-04 | global batch size: 256 | lm loss: 5.197227E+00 | grad norm: 0.753 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.481 | TFLOPs: 48.76 | 7: iteration 300/ 2891 | consumed samples: 76800 | consumed tokens: 157286400 | elapsed time per iteration (s): 1.29 | learning rate: 1.960E-04 | global batch size: 256 | lm loss: 5.120818E+00 | grad norm: 0.417 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 197.938 | TFLOPs: 47.90 | 7: iteration 310/ 2891 | consumed samples: 79360 | consumed tokens: 162529280 | elapsed time per iteration (s): 1.36 | learning rate: 1.958E-04 | global batch size: 256 | lm loss: 5.137289E+00 | grad norm: 0.910 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 188.596 | TFLOPs: 45.64 | 7: iteration 320/ 2891 | consumed samples: 81920 | consumed tokens: 167772160 | elapsed time per iteration (s): 1.31 | learning rate: 1.954E-04 | global batch size: 256 | lm loss: 5.131575E+00 | grad norm: 0.554 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 195.621 | TFLOPs: 47.34 | 7: iteration 330/ 2891 | consumed samples: 84480 | consumed tokens: 173015040 | elapsed time per iteration (s): 1.30 | learning rate: 1.951E-04 | global batch size: 256 | lm loss: 5.089932E+00 | grad norm: 0.506 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 197.550 | TFLOPs: 47.81 | 7: iteration 340/ 2891 | consumed samples: 87040 | consumed tokens: 178257920 | elapsed time per iteration (s): 1.32 | learning rate: 1.948E-04 | global batch size: 256 | lm loss: 5.049382E+00 | grad norm: 0.433 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 193.216 | TFLOPs: 46.76 | 7: iteration 350/ 2891 | consumed samples: 89600 | consumed tokens: 183500800 | elapsed time per iteration (s): 1.30 | learning rate: 1.945E-04 | global batch size: 256 | lm loss: 4.994262E+00 | grad norm: 0.421 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 197.317 | TFLOPs: 47.75 | 7: iteration 360/ 2891 | consumed samples: 92160 | consumed tokens: 188743680 | elapsed time per iteration (s): 1.29 | learning rate: 1.941E-04 | global batch size: 256 | lm loss: 4.978718E+00 | grad norm: 0.835 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 198.947 | TFLOPs: 48.14 | 7: iteration 370/ 2891 | consumed samples: 94720 | consumed tokens: 193986560 | elapsed time per iteration (s): 1.28 | learning 
rate: 1.938E-04 | global batch size: 256 | lm loss: 4.963504E+00 | grad norm: 0.410 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 199.884 | TFLOPs: 48.37 | 7: iteration 380/ 2891 | consumed samples: 97280 | consumed tokens: 199229440 | elapsed time per iteration (s): 1.28 | learning rate: 1.934E-04 | global batch size: 256 | lm loss: 4.920103E+00 | grad norm: 0.393 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 199.998 | TFLOPs: 48.40 | 7: iteration 390/ 2891 | consumed samples: 99840 | consumed tokens: 204472320 | elapsed time per iteration (s): 1.32 | learning rate: 1.930E-04 | global batch size: 256 | lm loss: 4.859517E+00 | grad norm: 0.816 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 193.480 | TFLOPs: 46.82 | 7: iteration 400/ 2891 | consumed samples: 102400 | consumed tokens: 209715200 | elapsed time per iteration (s): 1.28 | learning rate: 1.926E-04 | global batch size: 256 | lm loss: 4.870290E+00 | grad norm: 0.389 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 199.839 | TFLOPs: 48.36 | 7: iteration 410/ 2891 | consumed samples: 104960 | consumed tokens: 214958080 | elapsed time per iteration (s): 1.29 | learning rate: 1.922E-04 | global batch size: 256 | lm loss: 4.825578E+00 | grad norm: 0.595 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 197.841 | TFLOPs: 47.88 | 7: iteration 420/ 2891 | consumed samples: 107520 | consumed tokens: 220200960 | elapsed time per iteration (s): 1.34 | learning rate: 1.918E-04 | global batch size: 256 | lm loss: 4.905747E+00 | grad norm: 0.545 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 191.162 | TFLOPs: 46.26 | 7: iteration 430/ 2891 | consumed samples: 110080 | consumed tokens: 225443840 | elapsed time per iteration (s): 1.32 | learning rate: 1.914E-04 | global batch size: 256 | lm loss: 4.809043E+00 | grad norm: 0.381 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 193.730 | TFLOPs: 46.88 | 7: iteration 440/ 2891 | consumed samples: 112640 | consumed tokens: 230686720 | elapsed time per iteration (s): 1.32 | learning rate: 1.910E-04 | global batch size: 256 | lm loss: 4.734610E+00 | grad norm: 0.533 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 193.402 | TFLOPs: 46.80 | 7: iteration 450/ 2891 | consumed samples: 115200 | consumed tokens: 235929600 | elapsed time per iteration (s): 1.30 | learning rate: 1.906E-04 | global batch size: 256 | lm loss: 4.696649E+00 | grad norm: 0.631 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 197.219 | TFLOPs: 47.72 | 7: iteration 460/ 2891 | consumed samples: 117760 | consumed tokens: 241172480 | elapsed time per iteration (s): 1.32 | learning rate: 1.901E-04 | global batch size: 256 | lm loss: 4.686860E+00 | grad norm: 0.439 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 194.549 | TFLOPs: 47.08 | 7: iteration 470/ 2891 | consumed samples: 120320 | consumed tokens: 246415360 | elapsed time per iteration (s): 1.31 | learning rate: 1.897E-04 | global batch size: 256 | lm loss: 4.669160E+00 | grad norm: 0.597 | num zeros: 0.0 | number of skipped iterations: 0 | 
number of nan iterations: 0 | samples per second: 195.936 | TFLOPs: 47.41 | 7: iteration 480/ 2891 | consumed samples: 122880 | consumed tokens: 251658240 | elapsed time per iteration (s): 1.27 | learning rate: 1.892E-04 | global batch size: 256 | lm loss: 4.641114E+00 | grad norm: 0.576 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.315 | TFLOPs: 48.72 | 7: iteration 490/ 2891 | consumed samples: 125440 | consumed tokens: 256901120 | elapsed time per iteration (s): 1.33 | learning rate: 1.887E-04 | global batch size: 256 | lm loss: 4.564933E+00 | grad norm: 0.487 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 192.683 | TFLOPs: 46.63 | 7: iteration 500/ 2891 | consumed samples: 128000 | consumed tokens: 262144000 | elapsed time per iteration (s): 1.28 | learning rate: 1.882E-04 | global batch size: 256 | lm loss: 4.551287E+00 | grad norm: 0.554 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.108 | TFLOPs: 48.42 | 7: iteration 510/ 2891 | consumed samples: 130560 | consumed tokens: 267386880 | elapsed time per iteration (s): 1.27 | learning rate: 1.877E-04 | global batch size: 256 | lm loss: 4.540755E+00 | grad norm: 0.559 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.055 | TFLOPs: 48.65 | 7: iteration 520/ 2891 | consumed samples: 133120 | consumed tokens: 272629760 | elapsed time per iteration (s): 1.31 | learning rate: 1.872E-04 | global batch size: 256 | lm loss: 4.483091E+00 | grad norm: 0.410 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 195.698 | TFLOPs: 47.36 | 7: iteration 530/ 2891 | consumed samples: 135680 | consumed tokens: 277872640 | elapsed time per iteration (s): 1.31 | learning rate: 1.867E-04 | global batch size: 256 | lm loss: 4.402271E+00 | grad norm: 0.623 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 195.908 | TFLOPs: 47.41 | 7: iteration 540/ 2891 | consumed samples: 138240 | consumed tokens: 283115520 | elapsed time per iteration (s): 1.30 | learning rate: 1.862E-04 | global batch size: 256 | lm loss: 4.453804E+00 | grad norm: 0.509 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 196.574 | TFLOPs: 47.57 | 7: iteration 550/ 2891 | consumed samples: 140800 | consumed tokens: 288358400 | elapsed time per iteration (s): 1.30 | learning rate: 1.857E-04 | global batch size: 256 | lm loss: 4.442201E+00 | grad norm: 0.499 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 196.722 | TFLOPs: 47.60 | 7: iteration 560/ 2891 | consumed samples: 143360 | consumed tokens: 293601280 | elapsed time per iteration (s): 1.28 | learning rate: 1.851E-04 | global batch size: 256 | lm loss: 4.416363E+00 | grad norm: 0.582 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 199.477 | TFLOPs: 48.27 | 7: iteration 570/ 2891 | consumed samples: 145920 | consumed tokens: 298844160 | elapsed time per iteration (s): 1.28 | learning rate: 1.846E-04 | global batch size: 256 | lm loss: 4.346658E+00 | grad norm: 0.564 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.171 | TFLOPs: 48.44 | 7: iteration 580/ 2891 | consumed samples: 148480 | 
consumed tokens: 304087040 | elapsed time per iteration (s): 1.29 | learning rate: 1.840E-04 | global batch size: 256 | lm loss: 4.270995E+00 | grad norm: 0.533 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 199.027 | TFLOPs: 48.16 | 7: iteration 590/ 2891 | consumed samples: 151040 | consumed tokens: 309329920 | elapsed time per iteration (s): 1.29 | learning rate: 1.835E-04 | global batch size: 256 | lm loss: 4.192557E+00 | grad norm: 0.726 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 198.289 | TFLOPs: 47.98 | 7: iteration 600/ 2891 | consumed samples: 153600 | consumed tokens: 314572800 | elapsed time per iteration (s): 1.32 | learning rate: 1.829E-04 | global batch size: 256 | lm loss: 4.170487E+00 | grad norm: 0.641 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 193.854 | TFLOPs: 46.91 | 7: iteration 610/ 2891 | consumed samples: 156160 | consumed tokens: 319815680 | elapsed time per iteration (s): 1.29 | learning rate: 1.823E-04 | global batch size: 256 | lm loss: 4.136170E+00 | grad norm: 0.491 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 197.938 | TFLOPs: 47.90 | 7: iteration 620/ 2891 | consumed samples: 158720 | consumed tokens: 325058560 | elapsed time per iteration (s): 1.33 | learning rate: 1.817E-04 | global batch size: 256 | lm loss: 4.055492E+00 | grad norm: 0.390 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 191.799 | TFLOPs: 46.41 | 7: iteration 630/ 2891 | consumed samples: 161280 | consumed tokens: 330301440 | elapsed time per iteration (s): 1.29 | learning rate: 1.811E-04 | global batch size: 256 | lm loss: 4.104292E+00 | grad norm: 0.508 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 199.016 | TFLOPs: 48.16 | 7: iteration 640/ 2891 | consumed samples: 163840 | consumed tokens: 335544320 | elapsed time per iteration (s): 1.27 | learning rate: 1.805E-04 | global batch size: 256 | lm loss: 4.017815E+00 | grad norm: 0.487 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.790 | TFLOPs: 48.59 | 7: iteration 650/ 2891 | consumed samples: 166400 | consumed tokens: 340787200 | elapsed time per iteration (s): 1.32 | learning rate: 1.799E-04 | global batch size: 256 | lm loss: 3.992767E+00 | grad norm: 0.412 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 193.918 | TFLOPs: 46.93 | 7: iteration 660/ 2891 | consumed samples: 168960 | consumed tokens: 346030080 | elapsed time per iteration (s): 1.30 | learning rate: 1.793E-04 | global batch size: 256 | lm loss: 3.913707E+00 | grad norm: 0.443 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 197.450 | TFLOPs: 47.78 | 7: iteration 670/ 2891 | consumed samples: 171520 | consumed tokens: 351272960 | elapsed time per iteration (s): 1.30 | learning rate: 1.786E-04 | global batch size: 256 | lm loss: 3.837637E+00 | grad norm: 0.506 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 197.229 | TFLOPs: 47.73 | 7: iteration 680/ 2891 | consumed samples: 174080 | consumed tokens: 356515840 | elapsed time per iteration (s): 1.34 | learning rate: 1.780E-04 | global batch size: 256 | lm loss: 
3.840733E+00 | grad norm: 0.429 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 191.224 | TFLOPs: 46.27 | 7: iteration 690/ 2891 | consumed samples: 176640 | consumed tokens: 361758720 | elapsed time per iteration (s): 1.33 | learning rate: 1.773E-04 | global batch size: 256 | lm loss: 3.826342E+00 | grad norm: 0.525 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 192.659 | TFLOPs: 46.62 | 7: iteration 700/ 2891 | consumed samples: 179200 | consumed tokens: 367001600 | elapsed time per iteration (s): 1.31 | learning rate: 1.767E-04 | global batch size: 256 | lm loss: 3.802640E+00 | grad norm: 0.386 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 195.724 | TFLOPs: 47.36 | 7: iteration 710/ 2891 | consumed samples: 181760 | consumed tokens: 372244480 | elapsed time per iteration (s): 1.27 | learning rate: 1.760E-04 | global batch size: 256 | lm loss: 3.762965E+00 | grad norm: 0.394 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.604 | TFLOPs: 48.79 | 7: iteration 720/ 2891 | consumed samples: 184320 | consumed tokens: 377487360 | elapsed time per iteration (s): 1.29 | learning rate: 1.753E-04 | global batch size: 256 | lm loss: 3.796334E+00 | grad norm: 0.524 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 198.396 | TFLOPs: 48.01 | 7: iteration 730/ 2891 | consumed samples: 186880 | consumed tokens: 382730240 | elapsed time per iteration (s): 1.33 | learning rate: 1.747E-04 | global batch size: 256 | lm loss: 3.754204E+00 | grad norm: 0.469 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 192.067 | TFLOPs: 46.48 | 7: iteration 740/ 2891 | consumed samples: 189440 | consumed tokens: 387973120 | elapsed time per iteration (s): 1.29 | learning rate: 1.740E-04 | global batch size: 256 | lm loss: 3.693951E+00 | grad norm: 0.334 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 197.752 | TFLOPs: 47.85 | 7: iteration 750/ 2891 | consumed samples: 192000 | consumed tokens: 393216000 | elapsed time per iteration (s): 1.28 | learning rate: 1.733E-04 | global batch size: 256 | lm loss: 3.659017E+00 | grad norm: 0.364 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 199.446 | TFLOPs: 48.26 | 7: iteration 760/ 2891 | consumed samples: 194560 | consumed tokens: 398458880 | elapsed time per iteration (s): 1.30 | learning rate: 1.726E-04 | global batch size: 256 | lm loss: 3.682241E+00 | grad norm: 0.333 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 197.039 | TFLOPs: 47.68 | 7: iteration 770/ 2891 | consumed samples: 197120 | consumed tokens: 403701760 | elapsed time per iteration (s): 1.29 | learning rate: 1.718E-04 | global batch size: 256 | lm loss: 3.680953E+00 | grad norm: 0.372 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 198.135 | TFLOPs: 47.95 | 7: iteration 780/ 2891 | consumed samples: 199680 | consumed tokens: 408944640 | elapsed time per iteration (s): 1.30 | learning rate: 1.711E-04 | global batch size: 256 | lm loss: 3.615666E+00 | grad norm: 0.419 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 
196.625 | TFLOPs: 47.58 | 7: iteration 790/ 2891 | consumed samples: 202240 | consumed tokens: 414187520 | elapsed time per iteration (s): 1.31 | learning rate: 1.704E-04 | global batch size: 256 | lm loss: 3.587719E+00 | grad norm: 0.367 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 195.866 | TFLOPs: 47.40 | 7: iteration 800/ 2891 | consumed samples: 204800 | consumed tokens: 419430400 | elapsed time per iteration (s): 1.29 | learning rate: 1.697E-04 | global batch size: 256 | lm loss: 3.559205E+00 | grad norm: 0.372 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 198.273 | TFLOPs: 47.98 | 7: iteration 810/ 2891 | consumed samples: 207360 | consumed tokens: 424673280 | elapsed time per iteration (s): 1.31 | learning rate: 1.689E-04 | global batch size: 256 | lm loss: 3.537137E+00 | grad norm: 0.346 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 196.035 | TFLOPs: 47.44 | 7: iteration 820/ 2891 | consumed samples: 209920 | consumed tokens: 429916160 | elapsed time per iteration (s): 1.30 | learning rate: 1.682E-04 | global batch size: 256 | lm loss: 3.544258E+00 | grad norm: 0.284 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 196.194 | TFLOPs: 47.48 | 7: iteration 830/ 2891 | consumed samples: 212480 | consumed tokens: 435159040 | elapsed time per iteration (s): 1.29 | learning rate: 1.674E-04 | global batch size: 256 | lm loss: 3.564616E+00 | grad norm: 0.440 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 198.447 | TFLOPs: 48.02 | 7: iteration 840/ 2891 | consumed samples: 215040 | consumed tokens: 440401920 | elapsed time per iteration (s): 1.33 | learning rate: 1.666E-04 | global batch size: 256 | lm loss: 3.502379E+00 | grad norm: 0.336 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 193.109 | TFLOPs: 46.73 | 7: iteration 850/ 2891 | consumed samples: 217600 | consumed tokens: 445644800 | elapsed time per iteration (s): 1.31 | learning rate: 1.659E-04 | global batch size: 256 | lm loss: 3.491462E+00 | grad norm: 0.331 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 195.419 | TFLOPs: 47.29 | 7: iteration 860/ 2891 | consumed samples: 220160 | consumed tokens: 450887680 | elapsed time per iteration (s): 1.30 | learning rate: 1.651E-04 | global batch size: 256 | lm loss: 3.473122E+00 | grad norm: 0.414 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 197.474 | TFLOPs: 47.79 | 7: iteration 870/ 2891 | consumed samples: 222720 | consumed tokens: 456130560 | elapsed time per iteration (s): 1.36 | learning rate: 1.643E-04 | global batch size: 256 | lm loss: 3.498838E+00 | grad norm: 0.408 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 187.592 | TFLOPs: 45.40 | 7: iteration 880/ 2891 | consumed samples: 225280 | consumed tokens: 461373440 | elapsed time per iteration (s): 1.30 | learning rate: 1.635E-04 | global batch size: 256 | lm loss: 3.505949E+00 | grad norm: 0.668 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 196.724 | TFLOPs: 47.61 | 7: iteration 890/ 2891 | consumed samples: 227840 | consumed tokens: 466616320 | elapsed time per iteration 
(s): 1.35 | learning rate: 1.627E-04 | global batch size: 256 | lm loss: 3.474186E+00 | grad norm: 0.334 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 190.163 | TFLOPs: 46.02 | 7: iteration 900/ 2891 | consumed samples: 230400 | consumed tokens: 471859200 | elapsed time per iteration (s): 1.28 | learning rate: 1.619E-04 | global batch size: 256 | lm loss: 3.431589E+00 | grad norm: 0.357 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.591 | TFLOPs: 48.54 | 7: iteration 910/ 2891 | consumed samples: 232960 | consumed tokens: 477102080 | elapsed time per iteration (s): 1.29 | learning rate: 1.611E-04 | global batch size: 256 | lm loss: 3.452892E+00 | grad norm: 0.346 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 198.501 | TFLOPs: 48.04 | 7: iteration 920/ 2891 | consumed samples: 235520 | consumed tokens: 482344960 | elapsed time per iteration (s): 1.31 | learning rate: 1.603E-04 | global batch size: 256 | lm loss: 3.445724E+00 | grad norm: 0.451 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 194.968 | TFLOPs: 47.18 | 7: iteration 930/ 2891 | consumed samples: 238080 | consumed tokens: 487587840 | elapsed time per iteration (s): 1.31 | learning rate: 1.595E-04 | global batch size: 256 | lm loss: 3.433819E+00 | grad norm: 0.383 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 195.505 | TFLOPs: 47.31 | 7: iteration 940/ 2891 | consumed samples: 240640 | consumed tokens: 492830720 | elapsed time per iteration (s): 1.34 | learning rate: 1.586E-04 | global batch size: 256 | lm loss: 3.396906E+00 | grad norm: 0.355 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 191.149 | TFLOPs: 46.26 | 7: iteration 950/ 2891 | consumed samples: 243200 | consumed tokens: 498073600 | elapsed time per iteration (s): 1.31 | learning rate: 1.578E-04 | global batch size: 256 | lm loss: 3.402228E+00 | grad norm: 0.337 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 195.917 | TFLOPs: 47.41 | 7: iteration 960/ 2891 | consumed samples: 245760 | consumed tokens: 503316480 | elapsed time per iteration (s): 1.27 | learning rate: 1.570E-04 | global batch size: 256 | lm loss: 3.365099E+00 | grad norm: 0.277 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.713 | TFLOPs: 48.81 | 7: iteration 970/ 2891 | consumed samples: 248320 | consumed tokens: 508559360 | elapsed time per iteration (s): 1.32 | learning rate: 1.561E-04 | global batch size: 256 | lm loss: 3.364809E+00 | grad norm: 0.290 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 194.108 | TFLOPs: 46.97 | 7: iteration 980/ 2891 | consumed samples: 250880 | consumed tokens: 513802240 | elapsed time per iteration (s): 1.28 | learning rate: 1.553E-04 | global batch size: 256 | lm loss: 3.314942E+00 | grad norm: 0.248 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.369 | TFLOPs: 48.49 | 7: iteration 990/ 2891 | consumed samples: 253440 | consumed tokens: 519045120 | elapsed time per iteration (s): 1.34 | learning rate: 1.544E-04 | global batch size: 256 | lm loss: 3.328453E+00 | grad norm: 0.318 | num zeros: 0.0 | number of 
skipped iterations: 0 | number of nan iterations: 0 | samples per second: 190.922 | TFLOPs: 46.20 | 7: iteration 1000/ 2891 | consumed samples: 256000 | consumed tokens: 524288000 | elapsed time per iteration (s): 1.28 | learning rate: 1.536E-04 | global batch size: 256 | lm loss: 3.317171E+00 | grad norm: 0.328 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.136 | TFLOPs: 48.43 | 7: ------------------------------------------------------------------------------------------ 7: valid loss at iteration 1000 | lm loss value: 3.278170E+00 | lm loss PPL: 2.652719E+01 | 7: ------------------------------------------------------------------------------------------ 0: saving checkpoint at iteration 1000 to checkpoints_1b1 0: [2022-11-24 20:46:40,054] [INFO] [logging.py:68:log_dist] [Rank 0] [Torch] Checkpoint global_step1000 is begin to save! 0: [2022-11-24 20:46:40,160] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/layer_01-model_00-model_states.pt... 0: [2022-11-24 20:46:40,422] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/layer_01-model_00-model_states.pt. 0: [2022-11-24 20:46:40,423] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/layer_03-model_00-model_states.pt... 0: [2022-11-24 20:46:40,506] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/layer_03-model_00-model_states.pt. 0: [2022-11-24 20:46:40,506] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/layer_04-model_00-model_states.pt... 0: [2022-11-24 20:46:40,583] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/layer_04-model_00-model_states.pt. 0: [2022-11-24 20:46:40,583] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/layer_05-model_00-model_states.pt... 0: [2022-11-24 20:46:40,660] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/layer_05-model_00-model_states.pt. 0: [2022-11-24 20:46:40,661] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/layer_06-model_00-model_states.pt... 0: [2022-11-24 20:46:40,736] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/layer_06-model_00-model_states.pt. 0: [2022-11-24 20:46:40,737] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/layer_07-model_00-model_states.pt... 0: [2022-11-24 20:46:40,813] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/layer_07-model_00-model_states.pt. 0: [2022-11-24 20:46:40,814] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/layer_08-model_00-model_states.pt... 0: [2022-11-24 20:46:40,886] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/layer_08-model_00-model_states.pt. 0: [2022-11-24 20:46:40,887] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/layer_09-model_00-model_states.pt... 0: [2022-11-24 20:46:40,964] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/layer_09-model_00-model_states.pt. 0: [2022-11-24 20:46:40,964] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/layer_10-model_00-model_states.pt... 
0: [2022-11-24 20:46:41,039] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/layer_10-model_00-model_states.pt. 0: [2022-11-24 20:46:41,040] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/layer_11-model_00-model_states.pt... 0: [2022-11-24 20:46:41,112] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/layer_11-model_00-model_states.pt. 0: [2022-11-24 20:46:41,112] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/layer_12-model_00-model_states.pt... 0: [2022-11-24 20:46:41,186] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/layer_12-model_00-model_states.pt. 0: [2022-11-24 20:46:41,186] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/layer_13-model_00-model_states.pt... 0: [2022-11-24 20:46:41,261] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/layer_13-model_00-model_states.pt. 0: [2022-11-24 20:46:41,262] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/layer_14-model_00-model_states.pt... 0: [2022-11-24 20:46:41,337] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/layer_14-model_00-model_states.pt. 0: [2022-11-24 20:46:41,338] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/layer_15-model_00-model_states.pt... 0: [2022-11-24 20:46:41,413] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/layer_15-model_00-model_states.pt. 0: [2022-11-24 20:46:41,414] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/layer_16-model_00-model_states.pt... 0: [2022-11-24 20:46:41,486] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/layer_16-model_00-model_states.pt. 0: [2022-11-24 20:46:41,486] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/layer_17-model_00-model_states.pt... 0: [2022-11-24 20:46:41,562] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/layer_17-model_00-model_states.pt. 0: [2022-11-24 20:46:41,562] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/layer_18-model_00-model_states.pt... 0: [2022-11-24 20:46:41,640] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/layer_18-model_00-model_states.pt. 0: [2022-11-24 20:46:41,640] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/layer_19-model_00-model_states.pt... 0: [2022-11-24 20:46:41,714] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/layer_19-model_00-model_states.pt. 0: [2022-11-24 20:46:41,715] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/layer_20-model_00-model_states.pt... 0: [2022-11-24 20:46:41,789] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/layer_20-model_00-model_states.pt. 0: [2022-11-24 20:46:41,789] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/layer_21-model_00-model_states.pt... 
0: [2022-11-24 20:46:41,860] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/layer_21-model_00-model_states.pt. 0: [2022-11-24 20:46:41,861] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/layer_22-model_00-model_states.pt... 0: [2022-11-24 20:46:41,937] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/layer_22-model_00-model_states.pt. 0: [2022-11-24 20:46:41,938] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/layer_23-model_00-model_states.pt... 0: [2022-11-24 20:46:42,013] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/layer_23-model_00-model_states.pt. 0: [2022-11-24 20:46:42,013] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/layer_24-model_00-model_states.pt... 0: [2022-11-24 20:46:42,087] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/layer_24-model_00-model_states.pt. 0: [2022-11-24 20:46:42,087] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/layer_25-model_00-model_states.pt... 0: [2022-11-24 20:46:42,162] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/layer_25-model_00-model_states.pt. 0: [2022-11-24 20:46:42,162] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/layer_26-model_00-model_states.pt... 0: [2022-11-24 20:46:42,236] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/layer_26-model_00-model_states.pt. 0: [2022-11-24 20:46:42,236] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/layer_27-model_00-model_states.pt... 0: [2022-11-24 20:46:42,311] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/layer_27-model_00-model_states.pt. 0: [2022-11-24 20:46:42,311] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/layer_28-model_00-model_states.pt... 0: [2022-11-24 20:46:42,385] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/layer_28-model_00-model_states.pt. 0: [2022-11-24 20:46:42,385] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/layer_30-model_00-model_states.pt... 0: [2022-11-24 20:46:42,387] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/layer_30-model_00-model_states.pt. 0: [2022-11-24 20:46:42,388] [INFO] [logging.py:68:log_dist] [Rank 0] Saving model checkpoint: checkpoints_1b1/global_step1000/mp_rank_00_model_states.pt 0: [2022-11-24 20:46:42,388] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/mp_rank_00_model_states.pt... 0: [2022-11-24 20:46:42,392] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/mp_rank_00_model_states.pt. 0: [2022-11-24 20:46:42,413] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt... 0: [2022-11-24 20:46:42,413] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/bf16_zero_pp_rank_7_mp_rank_00_optim_states.pt... 
0: [2022-11-24 20:46:42,413] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/bf16_zero_pp_rank_4_mp_rank_00_optim_states.pt... 5: [2022-11-24 20:46:42,413] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/bf16_zero_pp_rank_41_mp_rank_00_optim_states.pt... 4: [2022-11-24 20:46:42,413] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/bf16_zero_pp_rank_38_mp_rank_00_optim_states.pt... 4: [2022-11-24 20:46:42,413] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/bf16_zero_pp_rank_32_mp_rank_00_optim_states.pt... 4: [2022-11-24 20:46:42,413] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/bf16_zero_pp_rank_37_mp_rank_00_optim_states.pt... 4: [2022-11-24 20:46:42,413] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/bf16_zero_pp_rank_35_mp_rank_00_optim_states.pt... 4: [2022-11-24 20:46:42,413] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/bf16_zero_pp_rank_33_mp_rank_00_optim_states.pt... 4: [2022-11-24 20:46:42,413] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/bf16_zero_pp_rank_36_mp_rank_00_optim_states.pt... 0: [2022-11-24 20:46:42,413] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/bf16_zero_pp_rank_1_mp_rank_00_optim_states.pt... 1: [2022-11-24 20:46:42,413] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/bf16_zero_pp_rank_9_mp_rank_00_optim_states.pt... 1: [2022-11-24 20:46:42,413] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/bf16_zero_pp_rank_8_mp_rank_00_optim_states.pt... 1: [2022-11-24 20:46:42,413] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/bf16_zero_pp_rank_14_mp_rank_00_optim_states.pt... 1: [2022-11-24 20:46:42,413] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/bf16_zero_pp_rank_13_mp_rank_00_optim_states.pt... 6: [2022-11-24 20:46:42,413] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/bf16_zero_pp_rank_54_mp_rank_00_optim_states.pt... 6: [2022-11-24 20:46:42,413] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/bf16_zero_pp_rank_53_mp_rank_00_optim_states.pt... 6: [2022-11-24 20:46:42,413] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/bf16_zero_pp_rank_55_mp_rank_00_optim_states.pt... 6: [2022-11-24 20:46:42,413] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/bf16_zero_pp_rank_48_mp_rank_00_optim_states.pt... 6: [2022-11-24 20:46:42,413] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/bf16_zero_pp_rank_49_mp_rank_00_optim_states.pt... 6: [2022-11-24 20:46:42,413] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/bf16_zero_pp_rank_51_mp_rank_00_optim_states.pt... 3: [2022-11-24 20:46:42,413] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/bf16_zero_pp_rank_25_mp_rank_00_optim_states.pt... 
3: [2022-11-24 20:46:42,413] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/bf16_zero_pp_rank_27_mp_rank_00_optim_states.pt... 3: [2022-11-24 20:46:42,413] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/bf16_zero_pp_rank_24_mp_rank_00_optim_states.pt... 2: [2022-11-24 20:46:42,413] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/bf16_zero_pp_rank_23_mp_rank_00_optim_states.pt... 2: [2022-11-24 20:46:42,413] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/bf16_zero_pp_rank_17_mp_rank_00_optim_states.pt... 2: [2022-11-24 20:46:42,413] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/bf16_zero_pp_rank_21_mp_rank_00_optim_states.pt... 2: [2022-11-24 20:46:42,413] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/bf16_zero_pp_rank_16_mp_rank_00_optim_states.pt... 2: [2022-11-24 20:46:42,413] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/bf16_zero_pp_rank_20_mp_rank_00_optim_states.pt... 7: [2022-11-24 20:46:42,413] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/bf16_zero_pp_rank_58_mp_rank_00_optim_states.pt... 7: [2022-11-24 20:46:42,413] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/bf16_zero_pp_rank_57_mp_rank_00_optim_states.pt... 7: [2022-11-24 20:46:42,413] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/bf16_zero_pp_rank_62_mp_rank_00_optim_states.pt... 7: [2022-11-24 20:46:42,413] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/bf16_zero_pp_rank_61_mp_rank_00_optim_states.pt... 7: [2022-11-24 20:46:42,413] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/bf16_zero_pp_rank_59_mp_rank_00_optim_states.pt... 7: [2022-11-24 20:46:42,413] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/bf16_zero_pp_rank_56_mp_rank_00_optim_states.pt... 4: [2022-11-24 20:46:42,413] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/bf16_zero_pp_rank_39_mp_rank_00_optim_states.pt... 0: [2022-11-24 20:46:42,413] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/bf16_zero_pp_rank_3_mp_rank_00_optim_states.pt... 0: [2022-11-24 20:46:42,413] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/bf16_zero_pp_rank_5_mp_rank_00_optim_states.pt... 1: [2022-11-24 20:46:42,413] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/bf16_zero_pp_rank_11_mp_rank_00_optim_states.pt... 1: [2022-11-24 20:46:42,413] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/bf16_zero_pp_rank_12_mp_rank_00_optim_states.pt... 1: [2022-11-24 20:46:42,413] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/bf16_zero_pp_rank_15_mp_rank_00_optim_states.pt... 6: [2022-11-24 20:46:42,413] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/bf16_zero_pp_rank_50_mp_rank_00_optim_states.pt... 
3: [2022-11-24 20:46:42,413] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/bf16_zero_pp_rank_28_mp_rank_00_optim_states.pt... 3: [2022-11-24 20:46:42,413] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/bf16_zero_pp_rank_29_mp_rank_00_optim_states.pt... 3: [2022-11-24 20:46:42,413] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/bf16_zero_pp_rank_31_mp_rank_00_optim_states.pt... 3: [2022-11-24 20:46:42,413] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/bf16_zero_pp_rank_26_mp_rank_00_optim_states.pt... 2: [2022-11-24 20:46:42,413] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/bf16_zero_pp_rank_22_mp_rank_00_optim_states.pt... 2: [2022-11-24 20:46:42,413] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/bf16_zero_pp_rank_18_mp_rank_00_optim_states.pt... 2: [2022-11-24 20:46:42,413] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/bf16_zero_pp_rank_19_mp_rank_00_optim_states.pt... 7: [2022-11-24 20:46:42,413] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/bf16_zero_pp_rank_63_mp_rank_00_optim_states.pt... 5: [2022-11-24 20:46:42,413] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/bf16_zero_pp_rank_42_mp_rank_00_optim_states.pt... 5: [2022-11-24 20:46:42,413] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/bf16_zero_pp_rank_40_mp_rank_00_optim_states.pt... 5: [2022-11-24 20:46:42,413] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/bf16_zero_pp_rank_47_mp_rank_00_optim_states.pt... 5: [2022-11-24 20:46:42,413] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/bf16_zero_pp_rank_44_mp_rank_00_optim_states.pt... 4: [2022-11-24 20:46:42,413] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/bf16_zero_pp_rank_34_mp_rank_00_optim_states.pt... 0: [2022-11-24 20:46:42,413] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/bf16_zero_pp_rank_6_mp_rank_00_optim_states.pt... 1: [2022-11-24 20:46:42,413] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/bf16_zero_pp_rank_10_mp_rank_00_optim_states.pt... 6: [2022-11-24 20:46:42,413] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/bf16_zero_pp_rank_52_mp_rank_00_optim_states.pt... 3: [2022-11-24 20:46:42,413] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/bf16_zero_pp_rank_30_mp_rank_00_optim_states.pt... 7: [2022-11-24 20:46:42,413] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/bf16_zero_pp_rank_60_mp_rank_00_optim_states.pt... 5: [2022-11-24 20:46:42,413] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/bf16_zero_pp_rank_45_mp_rank_00_optim_states.pt... 0: [2022-11-24 20:46:42,413] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/bf16_zero_pp_rank_2_mp_rank_00_optim_states.pt... 
5: [2022-11-24 20:46:42,413] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/bf16_zero_pp_rank_46_mp_rank_00_optim_states.pt... 5: [2022-11-24 20:46:42,413] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step1000/bf16_zero_pp_rank_43_mp_rank_00_optim_states.pt... 0: [2022-11-24 20:46:42,670] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt. 0: [2022-11-24 20:46:42,672] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_6_mp_rank_00_optim_states.pt. 0: [2022-11-24 20:46:42,672] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_6_mp_rank_00_optim_states.pt 0: [2022-11-24 20:46:42,672] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 0: [2022-11-24 20:46:42,673] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_1_mp_rank_00_optim_states.pt. 0: [2022-11-24 20:46:42,673] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_1_mp_rank_00_optim_states.pt 0: [2022-11-24 20:46:42,673] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 0: [2022-11-24 20:46:42,677] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_7_mp_rank_00_optim_states.pt. 0: [2022-11-24 20:46:42,677] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_2_mp_rank_00_optim_states.pt. 0: [2022-11-24 20:46:42,677] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_7_mp_rank_00_optim_states.pt 0: [2022-11-24 20:46:42,677] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_2_mp_rank_00_optim_states.pt 0: [2022-11-24 20:46:42,677] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 0: [2022-11-24 20:46:42,677] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 0: [2022-11-24 20:46:42,691] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_4_mp_rank_00_optim_states.pt. 0: [2022-11-24 20:46:42,691] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_3_mp_rank_00_optim_states.pt. 0: [2022-11-24 20:46:42,691] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_4_mp_rank_00_optim_states.pt 0: [2022-11-24 20:46:42,691] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_3_mp_rank_00_optim_states.pt 0: [2022-11-24 20:46:42,691] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 0: [2022-11-24 20:46:42,691] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 0: [2022-11-24 20:46:42,691] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_5_mp_rank_00_optim_states.pt. 
0: [2022-11-24 20:46:42,691] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_5_mp_rank_00_optim_states.pt 0: [2022-11-24 20:46:42,691] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 7: [2022-11-24 20:46:42,714] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_63_mp_rank_00_optim_states.pt. 7: [2022-11-24 20:46:42,714] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_63_mp_rank_00_optim_states.pt 7: [2022-11-24 20:46:42,714] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 2: [2022-11-24 20:46:42,717] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_19_mp_rank_00_optim_states.pt. 2: [2022-11-24 20:46:42,717] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_21_mp_rank_00_optim_states.pt. 2: [2022-11-24 20:46:42,717] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_16_mp_rank_00_optim_states.pt. 2: [2022-11-24 20:46:42,717] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_18_mp_rank_00_optim_states.pt. 2: [2022-11-24 20:46:42,717] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_19_mp_rank_00_optim_states.pt 2: [2022-11-24 20:46:42,717] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_21_mp_rank_00_optim_states.pt 2: [2022-11-24 20:46:42,717] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_16_mp_rank_00_optim_states.pt 2: [2022-11-24 20:46:42,717] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_18_mp_rank_00_optim_states.pt 2: [2022-11-24 20:46:42,717] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 2: [2022-11-24 20:46:42,717] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 2: [2022-11-24 20:46:42,717] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 2: [2022-11-24 20:46:42,717] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 2: [2022-11-24 20:46:42,728] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_17_mp_rank_00_optim_states.pt. 2: [2022-11-24 20:46:42,729] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_17_mp_rank_00_optim_states.pt 2: [2022-11-24 20:46:42,729] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 2: [2022-11-24 20:46:42,736] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_23_mp_rank_00_optim_states.pt. 
2: [2022-11-24 20:46:42,736] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_23_mp_rank_00_optim_states.pt 2: [2022-11-24 20:46:42,736] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 7: [2022-11-24 20:46:42,739] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_62_mp_rank_00_optim_states.pt. 7: [2022-11-24 20:46:42,739] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_62_mp_rank_00_optim_states.pt 7: [2022-11-24 20:46:42,739] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 2: [2022-11-24 20:46:42,739] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_20_mp_rank_00_optim_states.pt. 2: [2022-11-24 20:46:42,739] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_20_mp_rank_00_optim_states.pt 2: [2022-11-24 20:46:42,739] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 2: [2022-11-24 20:46:42,771] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_22_mp_rank_00_optim_states.pt. 2: [2022-11-24 20:46:42,771] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_22_mp_rank_00_optim_states.pt 2: [2022-11-24 20:46:42,771] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 6: [2022-11-24 20:46:42,773] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_52_mp_rank_00_optim_states.pt. 6: [2022-11-24 20:46:42,773] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_52_mp_rank_00_optim_states.pt 6: [2022-11-24 20:46:42,773] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 7: [2022-11-24 20:46:42,783] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_58_mp_rank_00_optim_states.pt. 7: [2022-11-24 20:46:42,783] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_58_mp_rank_00_optim_states.pt 7: [2022-11-24 20:46:42,783] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 4: [2022-11-24 20:46:42,784] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_39_mp_rank_00_optim_states.pt. 4: [2022-11-24 20:46:42,784] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_34_mp_rank_00_optim_states.pt. 6: [2022-11-24 20:46:42,787] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_55_mp_rank_00_optim_states.pt. 6: [2022-11-24 20:46:42,787] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_55_mp_rank_00_optim_states.pt 6: [2022-11-24 20:46:42,787] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 
5: [2022-11-24 20:46:42,798] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_42_mp_rank_00_optim_states.pt. 5: [2022-11-24 20:46:42,798] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_42_mp_rank_00_optim_states.pt 5: [2022-11-24 20:46:42,798] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 6: [2022-11-24 20:46:42,815] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_54_mp_rank_00_optim_states.pt. 6: [2022-11-24 20:46:42,815] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_54_mp_rank_00_optim_states.pt 6: [2022-11-24 20:46:42,815] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 4: [2022-11-24 20:46:42,784] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_37_mp_rank_00_optim_states.pt. 4: [2022-11-24 20:46:42,784] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_39_mp_rank_00_optim_states.pt 4: [2022-11-24 20:46:42,784] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_38_mp_rank_00_optim_states.pt. 4: [2022-11-24 20:46:42,784] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_37_mp_rank_00_optim_states.pt 4: [2022-11-24 20:46:42,784] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 4: [2022-11-24 20:46:42,784] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_34_mp_rank_00_optim_states.pt 4: [2022-11-24 20:46:42,784] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 4: [2022-11-24 20:46:42,784] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_38_mp_rank_00_optim_states.pt 4: [2022-11-24 20:46:42,784] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 4: [2022-11-24 20:46:42,784] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 4: [2022-11-24 20:46:42,791] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_36_mp_rank_00_optim_states.pt. 4: [2022-11-24 20:46:42,791] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_36_mp_rank_00_optim_states.pt 4: [2022-11-24 20:46:42,791] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 5: [2022-11-24 20:46:42,824] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_44_mp_rank_00_optim_states.pt. 5: [2022-11-24 20:46:42,824] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_44_mp_rank_00_optim_states.pt 5: [2022-11-24 20:46:42,824] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 
6: [2022-11-24 20:46:42,859] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_53_mp_rank_00_optim_states.pt. 6: [2022-11-24 20:46:42,859] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_53_mp_rank_00_optim_states.pt 6: [2022-11-24 20:46:42,859] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 4: [2022-11-24 20:46:42,864] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_35_mp_rank_00_optim_states.pt. 4: [2022-11-24 20:46:42,864] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_35_mp_rank_00_optim_states.pt 4: [2022-11-24 20:46:42,864] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 4: [2022-11-24 20:46:42,864] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_32_mp_rank_00_optim_states.pt. 4: [2022-11-24 20:46:42,864] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_32_mp_rank_00_optim_states.pt 4: [2022-11-24 20:46:42,864] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 4: [2022-11-24 20:46:42,865] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_33_mp_rank_00_optim_states.pt. 4: [2022-11-24 20:46:42,865] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_33_mp_rank_00_optim_states.pt 4: [2022-11-24 20:46:42,865] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 5: [2022-11-24 20:46:42,894] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_41_mp_rank_00_optim_states.pt. 5: [2022-11-24 20:46:42,894] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_41_mp_rank_00_optim_states.pt 5: [2022-11-24 20:46:42,894] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 5: [2022-11-24 20:46:42,905] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_46_mp_rank_00_optim_states.pt. 5: [2022-11-24 20:46:42,905] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_46_mp_rank_00_optim_states.pt 5: [2022-11-24 20:46:42,905] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 5: [2022-11-24 20:46:42,905] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_40_mp_rank_00_optim_states.pt. 5: [2022-11-24 20:46:42,905] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_40_mp_rank_00_optim_states.pt 5: [2022-11-24 20:46:42,905] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 7: [2022-11-24 20:46:42,916] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_56_mp_rank_00_optim_states.pt. 
7: [2022-11-24 20:46:42,916] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_56_mp_rank_00_optim_states.pt 7: [2022-11-24 20:46:42,916] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 5: [2022-11-24 20:46:42,942] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_45_mp_rank_00_optim_states.pt. 5: [2022-11-24 20:46:42,942] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_45_mp_rank_00_optim_states.pt 5: [2022-11-24 20:46:42,942] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 5: [2022-11-24 20:46:42,949] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_43_mp_rank_00_optim_states.pt. 5: [2022-11-24 20:46:42,949] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_43_mp_rank_00_optim_states.pt 5: [2022-11-24 20:46:42,949] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 7: [2022-11-24 20:46:42,956] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_61_mp_rank_00_optim_states.pt. 7: [2022-11-24 20:46:42,956] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_61_mp_rank_00_optim_states.pt 7: [2022-11-24 20:46:42,956] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 6: [2022-11-24 20:46:42,957] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_50_mp_rank_00_optim_states.pt. 6: [2022-11-24 20:46:42,957] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_50_mp_rank_00_optim_states.pt 6: [2022-11-24 20:46:42,957] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 3: [2022-11-24 20:46:42,960] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_29_mp_rank_00_optim_states.pt. 3: [2022-11-24 20:46:42,960] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_31_mp_rank_00_optim_states.pt. 3: [2022-11-24 20:46:42,960] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_26_mp_rank_00_optim_states.pt. 3: [2022-11-24 20:46:42,960] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_29_mp_rank_00_optim_states.pt 3: [2022-11-24 20:46:42,960] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_31_mp_rank_00_optim_states.pt 3: [2022-11-24 20:46:42,960] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_26_mp_rank_00_optim_states.pt 3: [2022-11-24 20:46:42,960] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 3: [2022-11-24 20:46:42,960] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 
3: [2022-11-24 20:46:42,960] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 3: [2022-11-24 20:46:42,963] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_25_mp_rank_00_optim_states.pt. 3: [2022-11-24 20:46:42,963] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_25_mp_rank_00_optim_states.pt 3: [2022-11-24 20:46:42,963] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 3: [2022-11-24 20:46:42,975] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_30_mp_rank_00_optim_states.pt. 3: [2022-11-24 20:46:42,975] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_24_mp_rank_00_optim_states.pt. 3: [2022-11-24 20:46:42,975] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_27_mp_rank_00_optim_states.pt. 3: [2022-11-24 20:46:42,975] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_28_mp_rank_00_optim_states.pt. 3: [2022-11-24 20:46:42,975] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_24_mp_rank_00_optim_states.pt 3: [2022-11-24 20:46:42,975] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_30_mp_rank_00_optim_states.pt 3: [2022-11-24 20:46:42,975] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_27_mp_rank_00_optim_states.pt 3: [2022-11-24 20:46:42,975] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_28_mp_rank_00_optim_states.pt 3: [2022-11-24 20:46:42,975] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 3: [2022-11-24 20:46:42,975] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 3: [2022-11-24 20:46:42,975] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 3: [2022-11-24 20:46:42,975] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 6: [2022-11-24 20:46:42,977] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_48_mp_rank_00_optim_states.pt. 6: [2022-11-24 20:46:42,977] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_48_mp_rank_00_optim_states.pt 6: [2022-11-24 20:46:42,977] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 7: [2022-11-24 20:46:42,992] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_60_mp_rank_00_optim_states.pt. 7: [2022-11-24 20:46:42,992] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_60_mp_rank_00_optim_states.pt 7: [2022-11-24 20:46:42,992] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 
6: [2022-11-24 20:46:43,004] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_49_mp_rank_00_optim_states.pt. 6: [2022-11-24 20:46:43,004] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_49_mp_rank_00_optim_states.pt 6: [2022-11-24 20:46:43,004] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 7: [2022-11-24 20:46:43,009] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_59_mp_rank_00_optim_states.pt. 7: [2022-11-24 20:46:43,009] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_59_mp_rank_00_optim_states.pt 7: [2022-11-24 20:46:43,009] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 1: [2022-11-24 20:46:43,011] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_14_mp_rank_00_optim_states.pt. 1: [2022-11-24 20:46:43,011] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_8_mp_rank_00_optim_states.pt. 1: [2022-11-24 20:46:43,011] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_15_mp_rank_00_optim_states.pt. 1: [2022-11-24 20:46:43,011] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_12_mp_rank_00_optim_states.pt. 1: [2022-11-24 20:46:43,011] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_8_mp_rank_00_optim_states.pt 1: [2022-11-24 20:46:43,011] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_14_mp_rank_00_optim_states.pt 1: [2022-11-24 20:46:43,011] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_15_mp_rank_00_optim_states.pt 1: [2022-11-24 20:46:43,011] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_12_mp_rank_00_optim_states.pt 1: [2022-11-24 20:46:43,011] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 1: [2022-11-24 20:46:43,011] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 1: [2022-11-24 20:46:43,011] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 1: [2022-11-24 20:46:43,011] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 0: [2022-11-24 20:46:43,012] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt 0: [2022-11-24 20:46:43,012] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 1: [2022-11-24 20:46:43,014] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_11_mp_rank_00_optim_states.pt. 1: [2022-11-24 20:46:43,014] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_10_mp_rank_00_optim_states.pt. 
1: [2022-11-24 20:46:43,014] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_13_mp_rank_00_optim_states.pt. 1: [2022-11-24 20:46:43,014] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_11_mp_rank_00_optim_states.pt 1: [2022-11-24 20:46:43,014] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_10_mp_rank_00_optim_states.pt 1: [2022-11-24 20:46:43,014] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_13_mp_rank_00_optim_states.pt 1: [2022-11-24 20:46:43,014] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 1: [2022-11-24 20:46:43,014] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 1: [2022-11-24 20:46:43,014] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 1: [2022-11-24 20:46:43,015] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_9_mp_rank_00_optim_states.pt. 1: [2022-11-24 20:46:43,015] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_9_mp_rank_00_optim_states.pt 1: [2022-11-24 20:46:43,015] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 7: [2022-11-24 20:46:43,050] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_57_mp_rank_00_optim_states.pt. 7: [2022-11-24 20:46:43,050] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_57_mp_rank_00_optim_states.pt 7: [2022-11-24 20:46:43,050] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 6: [2022-11-24 20:46:43,059] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_51_mp_rank_00_optim_states.pt. 6: [2022-11-24 20:46:43,059] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_51_mp_rank_00_optim_states.pt 6: [2022-11-24 20:46:43,059] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 5: [2022-11-24 20:46:43,086] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_47_mp_rank_00_optim_states.pt. 5: [2022-11-24 20:46:43,087] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step1000/bf16_zero_pp_rank_47_mp_rank_00_optim_states.pt 5: [2022-11-24 20:46:43,087] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step1000 is ready now! 
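Note on the step-1000 records above (an annotation added for readability, not part of the original console output): the validation block reports lm loss 3.278170 and lm loss PPL 26.527, i.e. PPL = exp(loss); the checkpoint written to checkpoints_1b1/global_step1000 consists of 28 layer_XX-model_00-model_states.pt files (layer_01, layer_03 through layer_28, and layer_30), one mp_rank_00_model_states.pt, and 64 bf16_zero_pp_rank_*_mp_rank_00_optim_states.pt optimizer shards, one per data-parallel rank, with the whole save taking about 3.0 s (save-checkpoint: 3040.36 ms, logged just below). The sketch that follows is a minimal cross-check of the logged throughput and schedule numbers, not the code that produced them; the world size of 64 (inferred from the pp_rank_0..63 shards), the padded vocabulary size, the Megatron-style FLOPs estimate, and the exact form of the cosine schedule are all assumptions.

# Minimal sanity check of the metrics logged around iteration 1000.
import math

# Model / run configuration from the command line at the top of this log.
n_layers, hidden, seq_len, global_bs = 26, 1792, 2048, 256
vocab_padded = 50304          # assumption: GPT-2 vocab (50257) padded to a multiple of 128
world_size = 64               # assumption: 8 nodes x 8 GPUs, data-parallel ranks 0..63

# 1) Validation perplexity is exp(lm loss): exp(3.278170) ~= 26.53,
#    matching "lm loss PPL: 2.652719E+01".
print("val PPL ~", math.exp(3.278170))

# 2) Consumed tokens = consumed samples * sequence length:
#    256000 * 2048 = 524288000, matching the iteration-1000 record.
print("tokens  ~", 256_000 * seq_len)

# 3) Throughput: 256 samples / ~1.28 s ~= 200 samples/s, as logged.
iter_time = 1.28
print("samp/s  ~", global_bs / iter_time)

# 4) Per-GPU TFLOPs. The logged 48.43 appears consistent with the usual
#    Megatron-LM estimate (Narayanan et al., 2021) without activation
#    recomputation: 72 * B * s * l * h^2 * (1 + s/(6h) + V/(16*l*h)).
flops_per_iter = (72 * global_bs * seq_len * n_layers * hidden**2
                  * (1 + seq_len / (6 * hidden)
                     + vocab_padded / (16 * n_layers * hidden)))
print("TFLOPs  ~", flops_per_iter / (world_size * iter_time) / 1e12)

# 5) Learning rate: cosine decay from 2e-4 to 2e-5 over 740_269 samples with
#    7_403 warmup samples approximately reproduces the logged 1.536E-04.
consumed, warmup, decay = 256_000, 7_403, 740_269
progress = (consumed - warmup) / (decay - warmup)
lr = 2e-5 + 0.5 * (1 + math.cos(math.pi * progress)) * (2e-4 - 2e-5)
print("lr      ~", lr)

Run as-is, the printed values should land within rounding of the logged 26.527, 524288000, 200.136 samples/s, 48.43 TFLOPs, and 1.536E-04 respectively; any residual difference in the learning rate is most likely down to how the scheduler counts warmup samples.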
0: successfully saved checkpoint at iteration 1000 to checkpoints_1b1 7: time (ms) | save-checkpoint: 3040.36 7: iteration 1010/ 2891 | consumed samples: 258560 | consumed tokens: 529530880 | elapsed time per iteration (s): 1.63 | learning rate: 1.527E-04 | global batch size: 256 | lm loss: 3.343962E+00 | grad norm: 0.261 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 157.381 | TFLOPs: 38.08 | 7: iteration 1020/ 2891 | consumed samples: 261120 | consumed tokens: 534773760 | elapsed time per iteration (s): 1.27 | learning rate: 1.518E-04 | global batch size: 256 | lm loss: 3.303916E+00 | grad norm: 0.258 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.840 | TFLOPs: 48.84 | 7: iteration 1030/ 2891 | consumed samples: 263680 | consumed tokens: 540016640 | elapsed time per iteration (s): 1.27 | learning rate: 1.509E-04 | global batch size: 256 | lm loss: 3.294590E+00 | grad norm: 0.280 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.908 | TFLOPs: 48.86 | 7: iteration 1040/ 2891 | consumed samples: 266240 | consumed tokens: 545259520 | elapsed time per iteration (s): 1.28 | learning rate: 1.501E-04 | global batch size: 256 | lm loss: 3.344618E+00 | grad norm: 0.379 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 199.636 | TFLOPs: 48.31 | 7: iteration 1050/ 2891 | consumed samples: 268800 | consumed tokens: 550502400 | elapsed time per iteration (s): 1.27 | learning rate: 1.492E-04 | global batch size: 256 | lm loss: 3.318086E+00 | grad norm: 0.359 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.480 | TFLOPs: 48.76 | 7: iteration 1060/ 2891 | consumed samples: 271360 | consumed tokens: 555745280 | elapsed time per iteration (s): 1.26 | learning rate: 1.483E-04 | global batch size: 256 | lm loss: 3.272342E+00 | grad norm: 0.287 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 203.079 | TFLOPs: 49.14 | 7: iteration 1070/ 2891 | consumed samples: 273920 | consumed tokens: 560988160 | elapsed time per iteration (s): 1.28 | learning rate: 1.474E-04 | global batch size: 256 | lm loss: 3.262115E+00 | grad norm: 0.260 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.291 | TFLOPs: 48.47 | 7: iteration 1080/ 2891 | consumed samples: 276480 | consumed tokens: 566231040 | elapsed time per iteration (s): 1.30 | learning rate: 1.465E-04 | global batch size: 256 | lm loss: 3.210053E+00 | grad norm: 0.319 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 197.457 | TFLOPs: 47.78 | 7: iteration 1090/ 2891 | consumed samples: 279040 | consumed tokens: 571473920 | elapsed time per iteration (s): 1.27 | learning rate: 1.456E-04 | global batch size: 256 | lm loss: 3.242656E+00 | grad norm: 0.336 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.319 | TFLOPs: 48.96 | 7: iteration 1100/ 2891 | consumed samples: 281600 | consumed tokens: 576716800 | elapsed time per iteration (s): 1.27 | learning rate: 1.447E-04 | global batch size: 256 | lm loss: 3.271250E+00 | grad norm: 0.298 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.193 | TFLOPs: 48.93 | 7: iteration 
1110/ 2891 | consumed samples: 284160 | consumed tokens: 581959680 | elapsed time per iteration (s): 1.26 | learning rate: 1.438E-04 | global batch size: 256 | lm loss: 3.224141E+00 | grad norm: 0.274 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.620 | TFLOPs: 49.03 | 7: iteration 1120/ 2891 | consumed samples: 286720 | consumed tokens: 587202560 | elapsed time per iteration (s): 1.27 | learning rate: 1.428E-04 | global batch size: 256 | lm loss: 3.251974E+00 | grad norm: 0.297 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.598 | TFLOPs: 48.78 | 7: iteration 1130/ 2891 | consumed samples: 289280 | consumed tokens: 592445440 | elapsed time per iteration (s): 1.29 | learning rate: 1.419E-04 | global batch size: 256 | lm loss: 3.254908E+00 | grad norm: 0.368 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 199.146 | TFLOPs: 48.19 | 7: iteration 1140/ 2891 | consumed samples: 291840 | consumed tokens: 597688320 | elapsed time per iteration (s): 1.28 | learning rate: 1.410E-04 | global batch size: 256 | lm loss: 3.207593E+00 | grad norm: 0.268 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 199.753 | TFLOPs: 48.34 | 7: iteration 1150/ 2891 | consumed samples: 294400 | consumed tokens: 602931200 | elapsed time per iteration (s): 1.28 | learning rate: 1.401E-04 | global batch size: 256 | lm loss: 3.237651E+00 | grad norm: 0.346 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.528 | TFLOPs: 48.53 | 7: iteration 1160/ 2891 | consumed samples: 296960 | consumed tokens: 608174080 | elapsed time per iteration (s): 1.27 | learning rate: 1.391E-04 | global batch size: 256 | lm loss: 3.218523E+00 | grad norm: 0.305 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.106 | TFLOPs: 48.91 | 7: iteration 1170/ 2891 | consumed samples: 299520 | consumed tokens: 613416960 | elapsed time per iteration (s): 1.32 | learning rate: 1.382E-04 | global batch size: 256 | lm loss: 3.215828E+00 | grad norm: 0.308 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 194.426 | TFLOPs: 47.05 | 7: iteration 1180/ 2891 | consumed samples: 302080 | consumed tokens: 618659840 | elapsed time per iteration (s): 1.28 | learning rate: 1.372E-04 | global batch size: 256 | lm loss: 3.227235E+00 | grad norm: 0.312 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.637 | TFLOPs: 48.55 | 7: iteration 1190/ 2891 | consumed samples: 304640 | consumed tokens: 623902720 | elapsed time per iteration (s): 1.28 | learning rate: 1.363E-04 | global batch size: 256 | lm loss: 3.230080E+00 | grad norm: 0.278 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 199.825 | TFLOPs: 48.36 | 7: iteration 1200/ 2891 | consumed samples: 307200 | consumed tokens: 629145600 | elapsed time per iteration (s): 1.27 | learning rate: 1.354E-04 | global batch size: 256 | lm loss: 3.197777E+00 | grad norm: 0.296 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.552 | TFLOPs: 48.77 | 7: iteration 1210/ 2891 | consumed samples: 309760 | consumed tokens: 634388480 | elapsed time per iteration (s): 1.29 | learning rate: 
1.344E-04 | global batch size: 256 | lm loss: 3.187337E+00 | grad norm: 0.327 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 198.459 | TFLOPs: 48.03 | 7: iteration 1220/ 2891 | consumed samples: 312320 | consumed tokens: 639631360 | elapsed time per iteration (s): 1.29 | learning rate: 1.335E-04 | global batch size: 256 | lm loss: 3.127047E+00 | grad norm: 0.288 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 198.594 | TFLOPs: 48.06 | 7: iteration 1230/ 2891 | consumed samples: 314880 | consumed tokens: 644874240 | elapsed time per iteration (s): 1.28 | learning rate: 1.325E-04 | global batch size: 256 | lm loss: 3.181544E+00 | grad norm: 0.249 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.105 | TFLOPs: 48.42 | 7: iteration 1240/ 2891 | consumed samples: 317440 | consumed tokens: 650117120 | elapsed time per iteration (s): 1.30 | learning rate: 1.315E-04 | global batch size: 256 | lm loss: 3.154024E+00 | grad norm: 0.280 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 197.355 | TFLOPs: 47.76 | 7: iteration 1250/ 2891 | consumed samples: 320000 | consumed tokens: 655360000 | elapsed time per iteration (s): 1.29 | learning rate: 1.306E-04 | global batch size: 256 | lm loss: 3.156109E+00 | grad norm: 0.305 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 198.078 | TFLOPs: 47.93 | 7: iteration 1260/ 2891 | consumed samples: 322560 | consumed tokens: 660602880 | elapsed time per iteration (s): 1.27 | learning rate: 1.296E-04 | global batch size: 256 | lm loss: 3.096645E+00 | grad norm: 0.323 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.780 | TFLOPs: 48.83 | 7: iteration 1270/ 2891 | consumed samples: 325120 | consumed tokens: 665845760 | elapsed time per iteration (s): 1.28 | learning rate: 1.287E-04 | global batch size: 256 | lm loss: 3.159548E+00 | grad norm: 0.288 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 199.403 | TFLOPs: 48.25 | 7: iteration 1280/ 2891 | consumed samples: 327680 | consumed tokens: 671088640 | elapsed time per iteration (s): 1.29 | learning rate: 1.277E-04 | global batch size: 256 | lm loss: 3.158171E+00 | grad norm: 0.341 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 199.171 | TFLOPs: 48.20 | 7: iteration 1290/ 2891 | consumed samples: 330240 | consumed tokens: 676331520 | elapsed time per iteration (s): 1.27 | learning rate: 1.267E-04 | global batch size: 256 | lm loss: 3.041705E+00 | grad norm: 0.261 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.272 | TFLOPs: 48.71 | 7: iteration 1300/ 2891 | consumed samples: 332800 | consumed tokens: 681574400 | elapsed time per iteration (s): 1.28 | learning rate: 1.258E-04 | global batch size: 256 | lm loss: 3.118959E+00 | grad norm: 0.324 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.440 | TFLOPs: 48.50 | 7: iteration 1310/ 2891 | consumed samples: 335360 | consumed tokens: 686817280 | elapsed time per iteration (s): 1.27 | learning rate: 1.248E-04 | global batch size: 256 | lm loss: 3.098573E+00 | grad norm: 0.285 | num zeros: 0.0 | number of skipped 
iterations: 0 | number of nan iterations: 0 | samples per second: 200.799 | TFLOPs: 48.59 | 7: iteration 1320/ 2891 | consumed samples: 337920 | consumed tokens: 692060160 | elapsed time per iteration (s): 1.27 | learning rate: 1.238E-04 | global batch size: 256 | lm loss: 3.096959E+00 | grad norm: 0.352 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.273 | TFLOPs: 48.71 | 7: iteration 1330/ 2891 | consumed samples: 340480 | consumed tokens: 697303040 | elapsed time per iteration (s): 1.28 | learning rate: 1.228E-04 | global batch size: 256 | lm loss: 3.100436E+00 | grad norm: 0.271 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 199.779 | TFLOPs: 48.34 | 7: iteration 1340/ 2891 | consumed samples: 343040 | consumed tokens: 702545920 | elapsed time per iteration (s): 1.27 | learning rate: 1.218E-04 | global batch size: 256 | lm loss: 3.109594E+00 | grad norm: 0.309 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.820 | TFLOPs: 48.84 | 7: iteration 1350/ 2891 | consumed samples: 345600 | consumed tokens: 707788800 | elapsed time per iteration (s): 1.27 | learning rate: 1.209E-04 | global batch size: 256 | lm loss: 3.084716E+00 | grad norm: 0.256 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.204 | TFLOPs: 48.69 | 7: iteration 1360/ 2891 | consumed samples: 348160 | consumed tokens: 713031680 | elapsed time per iteration (s): 1.29 | learning rate: 1.199E-04 | global batch size: 256 | lm loss: 3.048441E+00 | grad norm: 0.309 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 198.193 | TFLOPs: 47.96 | 7: iteration 1370/ 2891 | consumed samples: 350720 | consumed tokens: 718274560 | elapsed time per iteration (s): 1.28 | learning rate: 1.189E-04 | global batch size: 256 | lm loss: 3.085259E+00 | grad norm: 0.324 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.007 | TFLOPs: 48.40 | 7: iteration 1380/ 2891 | consumed samples: 353280 | consumed tokens: 723517440 | elapsed time per iteration (s): 1.29 | learning rate: 1.179E-04 | global batch size: 256 | lm loss: 3.080657E+00 | grad norm: 0.315 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 198.075 | TFLOPs: 47.93 | 7: iteration 1390/ 2891 | consumed samples: 355840 | consumed tokens: 728760320 | elapsed time per iteration (s): 1.29 | learning rate: 1.169E-04 | global batch size: 256 | lm loss: 3.076471E+00 | grad norm: 0.294 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 198.299 | TFLOPs: 47.99 | 7: iteration 1400/ 2891 | consumed samples: 358400 | consumed tokens: 734003200 | elapsed time per iteration (s): 1.27 | learning rate: 1.160E-04 | global batch size: 256 | lm loss: 3.115252E+00 | grad norm: 0.250 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.332 | TFLOPs: 48.72 | 7: iteration 1410/ 2891 | consumed samples: 360960 | consumed tokens: 739246080 | elapsed time per iteration (s): 1.27 | learning rate: 1.150E-04 | global batch size: 256 | lm loss: 3.080489E+00 | grad norm: 0.280 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.687 | TFLOPs: 48.81 | 7: iteration 1420/ 2891 | 
consumed samples: 363520 | consumed tokens: 744488960 | elapsed time per iteration (s): 1.28 | learning rate: 1.140E-04 | global batch size: 256 | lm loss: 3.029892E+00 | grad norm: 0.269 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 199.737 | TFLOPs: 48.33 | 7: iteration 1430/ 2891 | consumed samples: 366080 | consumed tokens: 749731840 | elapsed time per iteration (s): 1.27 | learning rate: 1.130E-04 | global batch size: 256 | lm loss: 3.031248E+00 | grad norm: 0.285 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.616 | TFLOPs: 48.79 | 7: iteration 1440/ 2891 | consumed samples: 368640 | consumed tokens: 754974720 | elapsed time per iteration (s): 1.26 | learning rate: 1.120E-04 | global batch size: 256 | lm loss: 3.061755E+00 | grad norm: 0.275 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.804 | TFLOPs: 49.08 | 7: iteration 1450/ 2891 | consumed samples: 371200 | consumed tokens: 760217600 | elapsed time per iteration (s): 1.29 | learning rate: 1.110E-04 | global batch size: 256 | lm loss: 3.049806E+00 | grad norm: 0.267 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 199.081 | TFLOPs: 48.18 | 7: iteration 1460/ 2891 | consumed samples: 373760 | consumed tokens: 765460480 | elapsed time per iteration (s): 1.28 | learning rate: 1.100E-04 | global batch size: 256 | lm loss: 3.061333E+00 | grad norm: 0.270 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.315 | TFLOPs: 48.47 | 7: iteration 1470/ 2891 | consumed samples: 376320 | consumed tokens: 770703360 | elapsed time per iteration (s): 1.29 | learning rate: 1.090E-04 | global batch size: 256 | lm loss: 3.027428E+00 | grad norm: 0.283 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 198.854 | TFLOPs: 48.12 | 7: iteration 1480/ 2891 | consumed samples: 378880 | consumed tokens: 775946240 | elapsed time per iteration (s): 1.27 | learning rate: 1.081E-04 | global batch size: 256 | lm loss: 3.047362E+00 | grad norm: 0.278 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.073 | TFLOPs: 48.90 | 7: iteration 1490/ 2891 | consumed samples: 381440 | consumed tokens: 781189120 | elapsed time per iteration (s): 1.29 | learning rate: 1.071E-04 | global batch size: 256 | lm loss: 3.060497E+00 | grad norm: 0.263 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 197.724 | TFLOPs: 47.85 | 7: iteration 1500/ 2891 | consumed samples: 384000 | consumed tokens: 786432000 | elapsed time per iteration (s): 1.29 | learning rate: 1.061E-04 | global batch size: 256 | lm loss: 3.034048E+00 | grad norm: 0.288 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 199.203 | TFLOPs: 48.21 | 7: iteration 1510/ 2891 | consumed samples: 386560 | consumed tokens: 791674880 | elapsed time per iteration (s): 1.28 | learning rate: 1.051E-04 | global batch size: 256 | lm loss: 3.035198E+00 | grad norm: 0.260 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 199.262 | TFLOPs: 48.22 | 7: iteration 1520/ 2891 | consumed samples: 389120 | consumed tokens: 796917760 | elapsed time per iteration (s): 1.27 | learning rate: 1.041E-04 | 
global batch size: 256 | lm loss: 3.031193E+00 | grad norm: 0.308 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.156 | TFLOPs: 48.68 | 7: iteration 1530/ 2891 | consumed samples: 391680 | consumed tokens: 802160640 | elapsed time per iteration (s): 1.30 | learning rate: 1.031E-04 | global batch size: 256 | lm loss: 3.019461E+00 | grad norm: 0.285 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 196.566 | TFLOPs: 47.57 | 7: iteration 1540/ 2891 | consumed samples: 394240 | consumed tokens: 807403520 | elapsed time per iteration (s): 1.29 | learning rate: 1.021E-04 | global batch size: 256 | lm loss: 3.035143E+00 | grad norm: 0.282 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 198.773 | TFLOPs: 48.10 | 7: iteration 1550/ 2891 | consumed samples: 396800 | consumed tokens: 812646400 | elapsed time per iteration (s): 1.28 | learning rate: 1.012E-04 | global batch size: 256 | lm loss: 3.008830E+00 | grad norm: 0.279 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 199.600 | TFLOPs: 48.30 | 7: iteration 1560/ 2891 | consumed samples: 399360 | consumed tokens: 817889280 | elapsed time per iteration (s): 1.27 | learning rate: 1.002E-04 | global batch size: 256 | lm loss: 2.989631E+00 | grad norm: 0.283 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.113 | TFLOPs: 48.91 | 7: iteration 1570/ 2891 | consumed samples: 401920 | consumed tokens: 823132160 | elapsed time per iteration (s): 1.26 | learning rate: 9.919E-05 | global batch size: 256 | lm loss: 2.980611E+00 | grad norm: 0.266 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.434 | TFLOPs: 48.99 | 7: iteration 1580/ 2891 | consumed samples: 404480 | consumed tokens: 828375040 | elapsed time per iteration (s): 1.26 | learning rate: 9.821E-05 | global batch size: 256 | lm loss: 3.008602E+00 | grad norm: 0.259 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.598 | TFLOPs: 49.03 | 7: iteration 1590/ 2891 | consumed samples: 407040 | consumed tokens: 833617920 | elapsed time per iteration (s): 1.26 | learning rate: 9.723E-05 | global batch size: 256 | lm loss: 3.012232E+00 | grad norm: 0.267 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.457 | TFLOPs: 48.99 | 7: iteration 1600/ 2891 | consumed samples: 409600 | consumed tokens: 838860800 | elapsed time per iteration (s): 1.29 | learning rate: 9.626E-05 | global batch size: 256 | lm loss: 2.981424E+00 | grad norm: 0.259 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 199.103 | TFLOPs: 48.18 | 7: iteration 1610/ 2891 | consumed samples: 412160 | consumed tokens: 844103680 | elapsed time per iteration (s): 1.27 | learning rate: 9.528E-05 | global batch size: 256 | lm loss: 2.965554E+00 | grad norm: 0.291 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.943 | TFLOPs: 48.87 | 7: iteration 1620/ 2891 | consumed samples: 414720 | consumed tokens: 849346560 | elapsed time per iteration (s): 1.27 | learning rate: 9.431E-05 | global batch size: 256 | lm loss: 2.965051E+00 | grad norm: 0.270 | num zeros: 0.0 | number of skipped iterations: 0 | 
number of nan iterations: 0 | samples per second: 201.491 | TFLOPs: 48.76 | 7: iteration 1630/ 2891 | consumed samples: 417280 | consumed tokens: 854589440 | elapsed time per iteration (s): 1.28 | learning rate: 9.334E-05 | global batch size: 256 | lm loss: 2.978099E+00 | grad norm: 0.261 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 199.896 | TFLOPs: 48.37 | 7: iteration 1640/ 2891 | consumed samples: 419840 | consumed tokens: 859832320 | elapsed time per iteration (s): 1.27 | learning rate: 9.237E-05 | global batch size: 256 | lm loss: 2.953215E+00 | grad norm: 0.277 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.827 | TFLOPs: 48.84 | 7: iteration 1650/ 2891 | consumed samples: 422400 | consumed tokens: 865075200 | elapsed time per iteration (s): 1.28 | learning rate: 9.140E-05 | global batch size: 256 | lm loss: 3.011898E+00 | grad norm: 0.271 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.743 | TFLOPs: 48.58 | 7: iteration 1660/ 2891 | consumed samples: 424960 | consumed tokens: 870318080 | elapsed time per iteration (s): 1.28 | learning rate: 9.043E-05 | global batch size: 256 | lm loss: 2.940250E+00 | grad norm: 0.292 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 199.596 | TFLOPs: 48.30 | 7: iteration 1670/ 2891 | consumed samples: 427520 | consumed tokens: 875560960 | elapsed time per iteration (s): 1.27 | learning rate: 8.947E-05 | global batch size: 256 | lm loss: 2.973220E+00 | grad norm: 0.253 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.531 | TFLOPs: 48.77 | 7: iteration 1680/ 2891 | consumed samples: 430080 | consumed tokens: 880803840 | elapsed time per iteration (s): 1.29 | learning rate: 8.851E-05 | global batch size: 256 | lm loss: 2.948690E+00 | grad norm: 0.283 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 198.792 | TFLOPs: 48.11 | 7: iteration 1690/ 2891 | consumed samples: 432640 | consumed tokens: 886046720 | elapsed time per iteration (s): 1.28 | learning rate: 8.755E-05 | global batch size: 256 | lm loss: 2.996978E+00 | grad norm: 0.279 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.272 | TFLOPs: 48.46 | 7: iteration 1700/ 2891 | consumed samples: 435200 | consumed tokens: 891289600 | elapsed time per iteration (s): 1.26 | learning rate: 8.660E-05 | global batch size: 256 | lm loss: 2.990174E+00 | grad norm: 0.277 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.748 | TFLOPs: 49.06 | 7: iteration 1710/ 2891 | consumed samples: 437760 | consumed tokens: 896532480 | elapsed time per iteration (s): 1.28 | learning rate: 8.565E-05 | global batch size: 256 | lm loss: 2.959204E+00 | grad norm: 0.292 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 199.549 | TFLOPs: 48.29 | 7: iteration 1720/ 2891 | consumed samples: 440320 | consumed tokens: 901775360 | elapsed time per iteration (s): 1.28 | learning rate: 8.470E-05 | global batch size: 256 | lm loss: 2.953564E+00 | grad norm: 0.260 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 199.448 | TFLOPs: 48.26 | 7: iteration 1730/ 2891 | consumed samples: 
442880 | consumed tokens: 907018240 | elapsed time per iteration (s): 1.27 | learning rate: 8.375E-05 | global batch size: 256 | lm loss: 2.944229E+00 | grad norm: 0.267 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.498 | TFLOPs: 48.76 | 7: iteration 1740/ 2891 | consumed samples: 445440 | consumed tokens: 912261120 | elapsed time per iteration (s): 1.27 | learning rate: 8.281E-05 | global batch size: 256 | lm loss: 2.979383E+00 | grad norm: 0.252 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.580 | TFLOPs: 48.78 | 7: iteration 1750/ 2891 | consumed samples: 448000 | consumed tokens: 917504000 | elapsed time per iteration (s): 1.27 | learning rate: 8.187E-05 | global batch size: 256 | lm loss: 2.953689E+00 | grad norm: 0.283 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.279 | TFLOPs: 48.71 | 7: iteration 1760/ 2891 | consumed samples: 450560 | consumed tokens: 922746880 | elapsed time per iteration (s): 1.29 | learning rate: 8.093E-05 | global batch size: 256 | lm loss: 2.963270E+00 | grad norm: 0.284 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 199.123 | TFLOPs: 48.19 | 7: iteration 1770/ 2891 | consumed samples: 453120 | consumed tokens: 927989760 | elapsed time per iteration (s): 1.27 | learning rate: 8.000E-05 | global batch size: 256 | lm loss: 2.926622E+00 | grad norm: 0.286 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.467 | TFLOPs: 48.75 | 7: iteration 1780/ 2891 | consumed samples: 455680 | consumed tokens: 933232640 | elapsed time per iteration (s): 1.28 | learning rate: 7.907E-05 | global batch size: 256 | lm loss: 2.916380E+00 | grad norm: 0.265 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 199.919 | TFLOPs: 48.38 | 7: iteration 1790/ 2891 | consumed samples: 458240 | consumed tokens: 938475520 | elapsed time per iteration (s): 1.27 | learning rate: 7.814E-05 | global batch size: 256 | lm loss: 2.971655E+00 | grad norm: 0.278 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.033 | TFLOPs: 48.65 | 7: iteration 1800/ 2891 | consumed samples: 460800 | consumed tokens: 943718400 | elapsed time per iteration (s): 1.27 | learning rate: 7.722E-05 | global batch size: 256 | lm loss: 2.912868E+00 | grad norm: 0.240 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.188 | TFLOPs: 48.69 | 7: iteration 1810/ 2891 | consumed samples: 463360 | consumed tokens: 948961280 | elapsed time per iteration (s): 1.28 | learning rate: 7.630E-05 | global batch size: 256 | lm loss: 2.943017E+00 | grad norm: 0.261 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.544 | TFLOPs: 48.53 | 7: iteration 1820/ 2891 | consumed samples: 465920 | consumed tokens: 954204160 | elapsed time per iteration (s): 1.27 | learning rate: 7.539E-05 | global batch size: 256 | lm loss: 2.924209E+00 | grad norm: 0.250 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.329 | TFLOPs: 48.96 | 7: iteration 1830/ 2891 | consumed samples: 468480 | consumed tokens: 959447040 | elapsed time per iteration (s): 1.28 | learning rate: 7.448E-05 | global batch size: 
256 | lm loss: 2.913163E+00 | grad norm: 0.287 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.035 | TFLOPs: 48.41 | 7: iteration 1840/ 2891 | consumed samples: 471040 | consumed tokens: 964689920 | elapsed time per iteration (s): 1.29 | learning rate: 7.357E-05 | global batch size: 256 | lm loss: 2.913443E+00 | grad norm: 0.272 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 198.220 | TFLOPs: 47.97 | 7: iteration 1850/ 2891 | consumed samples: 473600 | consumed tokens: 969932800 | elapsed time per iteration (s): 1.29 | learning rate: 7.267E-05 | global batch size: 256 | lm loss: 2.921156E+00 | grad norm: 0.270 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 198.062 | TFLOPs: 47.93 | 7: iteration 1860/ 2891 | consumed samples: 476160 | consumed tokens: 975175680 | elapsed time per iteration (s): 1.28 | learning rate: 7.178E-05 | global batch size: 256 | lm loss: 2.882126E+00 | grad norm: 0.252 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 199.919 | TFLOPs: 48.38 | 7: iteration 1870/ 2891 | consumed samples: 478720 | consumed tokens: 980418560 | elapsed time per iteration (s): 1.29 | learning rate: 7.088E-05 | global batch size: 256 | lm loss: 2.893600E+00 | grad norm: 0.269 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 198.662 | TFLOPs: 48.07 | 7: iteration 1880/ 2891 | consumed samples: 481280 | consumed tokens: 985661440 | elapsed time per iteration (s): 1.27 | learning rate: 7.000E-05 | global batch size: 256 | lm loss: 2.874181E+00 | grad norm: 0.255 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.771 | TFLOPs: 48.83 | 7: iteration 1890/ 2891 | consumed samples: 483840 | consumed tokens: 990904320 | elapsed time per iteration (s): 1.28 | learning rate: 6.912E-05 | global batch size: 256 | lm loss: 2.923892E+00 | grad norm: 0.305 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.665 | TFLOPs: 48.56 | 7: iteration 1900/ 2891 | consumed samples: 486400 | consumed tokens: 996147200 | elapsed time per iteration (s): 1.28 | learning rate: 6.824E-05 | global batch size: 256 | lm loss: 2.861729E+00 | grad norm: 0.260 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.733 | TFLOPs: 48.58 | 7: iteration 1910/ 2891 | consumed samples: 488960 | consumed tokens: 1001390080 | elapsed time per iteration (s): 1.27 | learning rate: 6.737E-05 | global batch size: 256 | lm loss: 2.917368E+00 | grad norm: 0.247 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.737 | TFLOPs: 48.82 | 7: iteration 1920/ 2891 | consumed samples: 491520 | consumed tokens: 1006632960 | elapsed time per iteration (s): 1.27 | learning rate: 6.650E-05 | global batch size: 256 | lm loss: 2.881069E+00 | grad norm: 0.273 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.453 | TFLOPs: 48.75 | 7: iteration 1930/ 2891 | consumed samples: 494080 | consumed tokens: 1011875840 | elapsed time per iteration (s): 1.27 | learning rate: 6.564E-05 | global batch size: 256 | lm loss: 2.894638E+00 | grad norm: 0.251 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan 
iterations: 0 | samples per second: 201.454 | TFLOPs: 48.75 | 7: iteration 1940/ 2891 | consumed samples: 496640 | consumed tokens: 1017118720 | elapsed time per iteration (s): 1.26 | learning rate: 6.478E-05 | global batch size: 256 | lm loss: 2.885247E+00 | grad norm: 0.288 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 203.584 | TFLOPs: 49.27 | 7: iteration 1950/ 2891 | consumed samples: 499200 | consumed tokens: 1022361600 | elapsed time per iteration (s): 1.26 | learning rate: 6.393E-05 | global batch size: 256 | lm loss: 2.863762E+00 | grad norm: 0.268 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 203.102 | TFLOPs: 49.15 | 7: iteration 1960/ 2891 | consumed samples: 501760 | consumed tokens: 1027604480 | elapsed time per iteration (s): 1.29 | learning rate: 6.308E-05 | global batch size: 256 | lm loss: 2.919002E+00 | grad norm: 0.244 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 198.463 | TFLOPs: 48.03 | 7: iteration 1970/ 2891 | consumed samples: 504320 | consumed tokens: 1032847360 | elapsed time per iteration (s): 1.28 | learning rate: 6.224E-05 | global batch size: 256 | lm loss: 2.918430E+00 | grad norm: 0.249 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.142 | TFLOPs: 48.43 | 7: iteration 1980/ 2891 | consumed samples: 506880 | consumed tokens: 1038090240 | elapsed time per iteration (s): 1.26 | learning rate: 6.141E-05 | global batch size: 256 | lm loss: 2.889674E+00 | grad norm: 0.252 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.387 | TFLOPs: 48.98 | 7: iteration 1990/ 2891 | consumed samples: 509440 | consumed tokens: 1043333120 | elapsed time per iteration (s): 1.26 | learning rate: 6.058E-05 | global batch size: 256 | lm loss: 2.917491E+00 | grad norm: 0.262 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 203.177 | TFLOPs: 49.17 | 0: [2022-11-24 21:08:00,155] [INFO] [logging.py:68:log_dist] [Rank 0] step=2000, skipped=0, lr=[5.9757828883278194e-05, 5.9757828883278194e-05, 5.9757828883278194e-05], mom=[(0.9, 0.999), (0.9, 0.999), (0.9, 0.999)] 7: iteration 2000/ 2891 | consumed samples: 512000 | consumed tokens: 1048576000 | elapsed time per iteration (s): 1.27 | learning rate: 5.976E-05 | global batch size: 256 | lm loss: 2.907642E+00 | grad norm: 0.260 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.935 | TFLOPs: 48.62 | 0: steps: 2000 loss: 2.9851 iter time (s): 1.297 samples/sec: 197.319 7: ------------------------------------------------------------------------------------------ 7: valid loss at iteration 2000 | lm loss value: 2.796879E+00 | lm loss PPL: 1.639341E+01 | 7: ------------------------------------------------------------------------------------------ 0: saving checkpoint at iteration 2000 to checkpoints_1b1 0: [2022-11-24 21:08:00,577] [INFO] [logging.py:68:log_dist] [Rank 0] [Torch] Checkpoint global_step2000 is begin to save! 0: [2022-11-24 21:08:00,581] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/layer_01-model_00-model_states.pt... 0: [2022-11-24 21:08:00,773] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/layer_01-model_00-model_states.pt. 
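Note (added for readability, not part of the original log): the bookkeeping quantities printed in the iteration lines above are related by simple arithmetic, and the step-2000 values can be reproduced offline. The sketch below is a minimal approximation, assuming 2048 tokens per sample and the lr-schedule flags from the launch command (lr 2e-4, min-lr 2e-5, lr-decay-samples 740_269, lr-warmup-samples 7403); it mimics, but is not, Megatron's cosine annealing code.

```python
# Minimal sketch (not the training code): reproduce the bookkeeping numbers
# printed in the iteration log lines above. Assumes 2048 tokens per sample and
# the lr-schedule flags from the launch command; the cosine_lr function is an
# approximation of Megatron's annealing, not a verbatim reimplementation.
import math

SEQ_LEN = 2048
GLOBAL_BATCH = 256

def consumed_tokens(consumed_samples: int) -> int:
    # e.g. 512_000 samples -> 1_048_576_000 tokens, as logged at iteration 2000
    return consumed_samples * SEQ_LEN

def samples_per_second(elapsed_per_iter_s: float) -> float:
    # e.g. 256 / 1.27 s ~= 202 samples/s, matching the logged throughput
    return GLOBAL_BATCH / elapsed_per_iter_s

def cosine_lr(consumed_samples: int,
              max_lr: float = 2e-4, min_lr: float = 2e-5,
              decay_samples: int = 740_269, warmup_samples: int = 7403) -> float:
    # Linear warmup in samples, then cosine decay over the remaining horizon.
    if consumed_samples < warmup_samples:
        return max_lr * consumed_samples / warmup_samples
    progress = (consumed_samples - warmup_samples) / (decay_samples - warmup_samples)
    return min_lr + 0.5 * (max_lr - min_lr) * (1 + math.cos(math.pi * min(progress, 1.0)))

def ppl(lm_loss: float) -> float:
    # validation lm loss 2.796879 -> PPL ~= 16.39, as logged at iteration 2000
    return math.exp(lm_loss)

print(consumed_tokens(512_000))          # 1048576000
print(round(samples_per_second(1.27)))   # ~202
print(cosine_lr(512_000))                # ~5.97e-05, vs. logged 5.976E-05
print(round(ppl(2.796879), 2))           # ~16.39
```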
0: [2022-11-24 21:08:00,773] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/layer_03-model_00-model_states.pt... 0: [2022-11-24 21:08:00,851] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/layer_03-model_00-model_states.pt. 0: [2022-11-24 21:08:00,852] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/layer_04-model_00-model_states.pt... 0: [2022-11-24 21:08:00,926] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/layer_04-model_00-model_states.pt. 0: [2022-11-24 21:08:00,926] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/layer_05-model_00-model_states.pt... 0: [2022-11-24 21:08:00,997] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/layer_05-model_00-model_states.pt. 0: [2022-11-24 21:08:00,997] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/layer_06-model_00-model_states.pt... 0: [2022-11-24 21:08:01,072] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/layer_06-model_00-model_states.pt. 0: [2022-11-24 21:08:01,072] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/layer_07-model_00-model_states.pt... 0: [2022-11-24 21:08:01,144] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/layer_07-model_00-model_states.pt. 0: [2022-11-24 21:08:01,144] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/layer_08-model_00-model_states.pt... 0: [2022-11-24 21:08:01,220] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/layer_08-model_00-model_states.pt. 0: [2022-11-24 21:08:01,220] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/layer_09-model_00-model_states.pt... 0: [2022-11-24 21:08:01,294] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/layer_09-model_00-model_states.pt. 0: [2022-11-24 21:08:01,294] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/layer_10-model_00-model_states.pt... 0: [2022-11-24 21:08:01,366] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/layer_10-model_00-model_states.pt. 0: [2022-11-24 21:08:01,367] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/layer_11-model_00-model_states.pt... 0: [2022-11-24 21:08:01,442] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/layer_11-model_00-model_states.pt. 0: [2022-11-24 21:08:01,443] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/layer_12-model_00-model_states.pt... 0: [2022-11-24 21:08:01,517] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/layer_12-model_00-model_states.pt. 0: [2022-11-24 21:08:01,518] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/layer_13-model_00-model_states.pt... 0: [2022-11-24 21:08:01,591] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/layer_13-model_00-model_states.pt. 
0: [2022-11-24 21:08:01,592] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/layer_14-model_00-model_states.pt... 0: [2022-11-24 21:08:01,665] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/layer_14-model_00-model_states.pt. 0: [2022-11-24 21:08:01,665] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/layer_15-model_00-model_states.pt... 0: [2022-11-24 21:08:01,737] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/layer_15-model_00-model_states.pt. 0: [2022-11-24 21:08:01,737] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/layer_16-model_00-model_states.pt... 0: [2022-11-24 21:08:01,813] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/layer_16-model_00-model_states.pt. 0: [2022-11-24 21:08:01,814] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/layer_17-model_00-model_states.pt... 0: [2022-11-24 21:08:01,888] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/layer_17-model_00-model_states.pt. 0: [2022-11-24 21:08:01,888] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/layer_18-model_00-model_states.pt... 0: [2022-11-24 21:08:01,960] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/layer_18-model_00-model_states.pt. 0: [2022-11-24 21:08:01,960] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/layer_19-model_00-model_states.pt... 0: [2022-11-24 21:08:02,037] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/layer_19-model_00-model_states.pt. 0: [2022-11-24 21:08:02,038] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/layer_20-model_00-model_states.pt... 0: [2022-11-24 21:08:02,111] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/layer_20-model_00-model_states.pt. 0: [2022-11-24 21:08:02,112] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/layer_21-model_00-model_states.pt... 0: [2022-11-24 21:08:02,186] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/layer_21-model_00-model_states.pt. 0: [2022-11-24 21:08:02,186] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/layer_22-model_00-model_states.pt... 0: [2022-11-24 21:08:02,260] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/layer_22-model_00-model_states.pt. 0: [2022-11-24 21:08:02,261] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/layer_23-model_00-model_states.pt... 0: [2022-11-24 21:08:02,332] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/layer_23-model_00-model_states.pt. 0: [2022-11-24 21:08:02,332] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/layer_24-model_00-model_states.pt... 0: [2022-11-24 21:08:02,410] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/layer_24-model_00-model_states.pt. 
0: [2022-11-24 21:08:02,410] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/layer_25-model_00-model_states.pt... 0: [2022-11-24 21:08:02,483] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/layer_25-model_00-model_states.pt. 0: [2022-11-24 21:08:02,483] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/layer_26-model_00-model_states.pt... 0: [2022-11-24 21:08:02,560] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/layer_26-model_00-model_states.pt. 0: [2022-11-24 21:08:02,561] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/layer_27-model_00-model_states.pt... 0: [2022-11-24 21:08:02,634] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/layer_27-model_00-model_states.pt. 0: [2022-11-24 21:08:02,634] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/layer_28-model_00-model_states.pt... 0: [2022-11-24 21:08:02,710] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/layer_28-model_00-model_states.pt. 0: [2022-11-24 21:08:02,710] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/layer_30-model_00-model_states.pt... 0: [2022-11-24 21:08:02,711] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/layer_30-model_00-model_states.pt. 0: [2022-11-24 21:08:02,712] [INFO] [logging.py:68:log_dist] [Rank 0] Saving model checkpoint: checkpoints_1b1/global_step2000/mp_rank_00_model_states.pt 0: [2022-11-24 21:08:02,712] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/mp_rank_00_model_states.pt... 0: [2022-11-24 21:08:02,716] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/mp_rank_00_model_states.pt. 0: [2022-11-24 21:08:02,736] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt... 0: [2022-11-24 21:08:02,736] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/bf16_zero_pp_rank_1_mp_rank_00_optim_states.pt... 0: [2022-11-24 21:08:02,736] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/bf16_zero_pp_rank_6_mp_rank_00_optim_states.pt... 0: [2022-11-24 21:08:02,736] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/bf16_zero_pp_rank_2_mp_rank_00_optim_states.pt... 0: [2022-11-24 21:08:02,736] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/bf16_zero_pp_rank_3_mp_rank_00_optim_states.pt... 5: [2022-11-24 21:08:02,736] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/bf16_zero_pp_rank_41_mp_rank_00_optim_states.pt... 5: [2022-11-24 21:08:02,736] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/bf16_zero_pp_rank_40_mp_rank_00_optim_states.pt... 5: [2022-11-24 21:08:02,736] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/bf16_zero_pp_rank_47_mp_rank_00_optim_states.pt... 5: [2022-11-24 21:08:02,736] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/bf16_zero_pp_rank_46_mp_rank_00_optim_states.pt... 
5: [2022-11-24 21:08:02,736] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/bf16_zero_pp_rank_45_mp_rank_00_optim_states.pt... 5: [2022-11-24 21:08:02,736] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/bf16_zero_pp_rank_43_mp_rank_00_optim_states.pt... 4: [2022-11-24 21:08:02,736] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/bf16_zero_pp_rank_37_mp_rank_00_optim_states.pt... 4: [2022-11-24 21:08:02,736] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/bf16_zero_pp_rank_32_mp_rank_00_optim_states.pt... 4: [2022-11-24 21:08:02,736] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/bf16_zero_pp_rank_39_mp_rank_00_optim_states.pt... 4: [2022-11-24 21:08:02,736] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/bf16_zero_pp_rank_34_mp_rank_00_optim_states.pt... 4: [2022-11-24 21:08:02,736] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/bf16_zero_pp_rank_38_mp_rank_00_optim_states.pt... 0: [2022-11-24 21:08:02,736] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/bf16_zero_pp_rank_5_mp_rank_00_optim_states.pt... 0: [2022-11-24 21:08:02,736] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/bf16_zero_pp_rank_7_mp_rank_00_optim_states.pt... 0: [2022-11-24 21:08:02,736] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/bf16_zero_pp_rank_4_mp_rank_00_optim_states.pt... 1: [2022-11-24 21:08:02,736] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/bf16_zero_pp_rank_12_mp_rank_00_optim_states.pt... 1: [2022-11-24 21:08:02,736] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/bf16_zero_pp_rank_15_mp_rank_00_optim_states.pt... 1: [2022-11-24 21:08:02,736] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/bf16_zero_pp_rank_8_mp_rank_00_optim_states.pt... 1: [2022-11-24 21:08:02,736] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/bf16_zero_pp_rank_10_mp_rank_00_optim_states.pt... 1: [2022-11-24 21:08:02,736] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/bf16_zero_pp_rank_14_mp_rank_00_optim_states.pt... 1: [2022-11-24 21:08:02,736] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/bf16_zero_pp_rank_11_mp_rank_00_optim_states.pt... 6: [2022-11-24 21:08:02,736] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/bf16_zero_pp_rank_49_mp_rank_00_optim_states.pt... 6: [2022-11-24 21:08:02,736] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/bf16_zero_pp_rank_54_mp_rank_00_optim_states.pt... 6: [2022-11-24 21:08:02,736] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/bf16_zero_pp_rank_53_mp_rank_00_optim_states.pt... 6: [2022-11-24 21:08:02,736] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/bf16_zero_pp_rank_48_mp_rank_00_optim_states.pt... 
6: [2022-11-24 21:08:02,736] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/bf16_zero_pp_rank_51_mp_rank_00_optim_states.pt... 6: [2022-11-24 21:08:02,736] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/bf16_zero_pp_rank_55_mp_rank_00_optim_states.pt... 3: [2022-11-24 21:08:02,736] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/bf16_zero_pp_rank_29_mp_rank_00_optim_states.pt... 3: [2022-11-24 21:08:02,736] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/bf16_zero_pp_rank_30_mp_rank_00_optim_states.pt... 3: [2022-11-24 21:08:02,736] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/bf16_zero_pp_rank_28_mp_rank_00_optim_states.pt... 3: [2022-11-24 21:08:02,736] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/bf16_zero_pp_rank_31_mp_rank_00_optim_states.pt... 3: [2022-11-24 21:08:02,736] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/bf16_zero_pp_rank_27_mp_rank_00_optim_states.pt... 3: [2022-11-24 21:08:02,736] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/bf16_zero_pp_rank_26_mp_rank_00_optim_states.pt... 2: [2022-11-24 21:08:02,736] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/bf16_zero_pp_rank_22_mp_rank_00_optim_states.pt... 2: [2022-11-24 21:08:02,736] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/bf16_zero_pp_rank_17_mp_rank_00_optim_states.pt... 2: [2022-11-24 21:08:02,736] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/bf16_zero_pp_rank_20_mp_rank_00_optim_states.pt... 2: [2022-11-24 21:08:02,736] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/bf16_zero_pp_rank_16_mp_rank_00_optim_states.pt... 7: [2022-11-24 21:08:02,736] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/bf16_zero_pp_rank_61_mp_rank_00_optim_states.pt... 7: [2022-11-24 21:08:02,736] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/bf16_zero_pp_rank_62_mp_rank_00_optim_states.pt... 7: [2022-11-24 21:08:02,736] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/bf16_zero_pp_rank_63_mp_rank_00_optim_states.pt... 7: [2022-11-24 21:08:02,736] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/bf16_zero_pp_rank_57_mp_rank_00_optim_states.pt... 7: [2022-11-24 21:08:02,736] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/bf16_zero_pp_rank_59_mp_rank_00_optim_states.pt... 7: [2022-11-24 21:08:02,736] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/bf16_zero_pp_rank_60_mp_rank_00_optim_states.pt... 5: [2022-11-24 21:08:02,736] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/bf16_zero_pp_rank_42_mp_rank_00_optim_states.pt... 4: [2022-11-24 21:08:02,736] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/bf16_zero_pp_rank_36_mp_rank_00_optim_states.pt... 
4: [2022-11-24 21:08:02,736] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/bf16_zero_pp_rank_35_mp_rank_00_optim_states.pt... 4: [2022-11-24 21:08:02,736] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/bf16_zero_pp_rank_33_mp_rank_00_optim_states.pt... 1: [2022-11-24 21:08:02,736] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/bf16_zero_pp_rank_9_mp_rank_00_optim_states.pt... 1: [2022-11-24 21:08:02,736] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/bf16_zero_pp_rank_13_mp_rank_00_optim_states.pt... 6: [2022-11-24 21:08:02,736] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/bf16_zero_pp_rank_50_mp_rank_00_optim_states.pt... 3: [2022-11-24 21:08:02,736] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/bf16_zero_pp_rank_24_mp_rank_00_optim_states.pt... 3: [2022-11-24 21:08:02,736] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/bf16_zero_pp_rank_25_mp_rank_00_optim_states.pt... 2: [2022-11-24 21:08:02,736] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/bf16_zero_pp_rank_21_mp_rank_00_optim_states.pt... 2: [2022-11-24 21:08:02,736] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/bf16_zero_pp_rank_23_mp_rank_00_optim_states.pt... 7: [2022-11-24 21:08:02,736] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/bf16_zero_pp_rank_58_mp_rank_00_optim_states.pt... 5: [2022-11-24 21:08:02,736] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/bf16_zero_pp_rank_44_mp_rank_00_optim_states.pt... 6: [2022-11-24 21:08:02,736] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/bf16_zero_pp_rank_52_mp_rank_00_optim_states.pt... 2: [2022-11-24 21:08:02,736] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/bf16_zero_pp_rank_18_mp_rank_00_optim_states.pt... 2: [2022-11-24 21:08:02,736] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/bf16_zero_pp_rank_19_mp_rank_00_optim_states.pt... 7: [2022-11-24 21:08:02,736] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2000/bf16_zero_pp_rank_56_mp_rank_00_optim_states.pt... 7: [2022-11-24 21:08:02,977] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_63_mp_rank_00_optim_states.pt. 7: [2022-11-24 21:08:02,978] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_63_mp_rank_00_optim_states.pt 7: [2022-11-24 21:08:02,978] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 2: [2022-11-24 21:08:02,980] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_22_mp_rank_00_optim_states.pt. 2: [2022-11-24 21:08:02,980] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_22_mp_rank_00_optim_states.pt 2: [2022-11-24 21:08:02,980] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 
1: [2022-11-24 21:08:02,981] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_12_mp_rank_00_optim_states.pt. 1: [2022-11-24 21:08:02,981] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_12_mp_rank_00_optim_states.pt 1: [2022-11-24 21:08:02,981] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 7: [2022-11-24 21:08:02,985] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_57_mp_rank_00_optim_states.pt. 7: [2022-11-24 21:08:02,985] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_57_mp_rank_00_optim_states.pt 7: [2022-11-24 21:08:02,985] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 1: [2022-11-24 21:08:02,985] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_15_mp_rank_00_optim_states.pt. 1: [2022-11-24 21:08:02,986] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_15_mp_rank_00_optim_states.pt 1: [2022-11-24 21:08:02,986] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 3: [2022-11-24 21:08:02,976] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_30_mp_rank_00_optim_states.pt. 3: [2022-11-24 21:08:02,977] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_30_mp_rank_00_optim_states.pt 3: [2022-11-24 21:08:02,977] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 0: [2022-11-24 21:08:02,990] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_7_mp_rank_00_optim_states.pt. 0: [2022-11-24 21:08:02,990] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_7_mp_rank_00_optim_states.pt 0: [2022-11-24 21:08:02,990] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 4: [2022-11-24 21:08:02,990] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_32_mp_rank_00_optim_states.pt. 4: [2022-11-24 21:08:02,990] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_32_mp_rank_00_optim_states.pt 4: [2022-11-24 21:08:02,990] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 3: [2022-11-24 21:08:02,994] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_27_mp_rank_00_optim_states.pt. 3: [2022-11-24 21:08:02,994] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_29_mp_rank_00_optim_states.pt. 
3: [2022-11-24 21:08:02,994] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_27_mp_rank_00_optim_states.pt 3: [2022-11-24 21:08:02,994] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_29_mp_rank_00_optim_states.pt 3: [2022-11-24 21:08:02,994] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 3: [2022-11-24 21:08:02,994] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 2: [2022-11-24 21:08:02,998] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_18_mp_rank_00_optim_states.pt. 2: [2022-11-24 21:08:02,998] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_18_mp_rank_00_optim_states.pt 2: [2022-11-24 21:08:02,998] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 2: [2022-11-24 21:08:02,999] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_23_mp_rank_00_optim_states.pt. 2: [2022-11-24 21:08:02,999] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_23_mp_rank_00_optim_states.pt 2: [2022-11-24 21:08:02,999] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 2: [2022-11-24 21:08:03,002] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_17_mp_rank_00_optim_states.pt. 2: [2022-11-24 21:08:03,002] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_17_mp_rank_00_optim_states.pt 2: [2022-11-24 21:08:03,002] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 4: [2022-11-24 21:08:03,003] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_38_mp_rank_00_optim_states.pt. 4: [2022-11-24 21:08:03,003] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_38_mp_rank_00_optim_states.pt 4: [2022-11-24 21:08:03,003] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 1: [2022-11-24 21:08:03,003] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_8_mp_rank_00_optim_states.pt. 1: [2022-11-24 21:08:03,003] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_9_mp_rank_00_optim_states.pt. 1: [2022-11-24 21:08:03,003] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_8_mp_rank_00_optim_states.pt 1: [2022-11-24 21:08:03,003] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_9_mp_rank_00_optim_states.pt 1: [2022-11-24 21:08:03,004] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 1: [2022-11-24 21:08:03,004] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 
0: [2022-11-24 21:08:03,004] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_4_mp_rank_00_optim_states.pt. 0: [2022-11-24 21:08:03,004] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_6_mp_rank_00_optim_states.pt. 0: [2022-11-24 21:08:03,005] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_4_mp_rank_00_optim_states.pt 0: [2022-11-24 21:08:03,005] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_6_mp_rank_00_optim_states.pt 0: [2022-11-24 21:08:03,005] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 0: [2022-11-24 21:08:03,005] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 4: [2022-11-24 21:08:03,005] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_34_mp_rank_00_optim_states.pt. 4: [2022-11-24 21:08:03,005] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_34_mp_rank_00_optim_states.pt 4: [2022-11-24 21:08:03,005] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 1: [2022-11-24 21:08:03,005] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_11_mp_rank_00_optim_states.pt. 1: [2022-11-24 21:08:03,005] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_10_mp_rank_00_optim_states.pt. 1: [2022-11-24 21:08:03,005] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_11_mp_rank_00_optim_states.pt 1: [2022-11-24 21:08:03,005] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_10_mp_rank_00_optim_states.pt 1: [2022-11-24 21:08:03,005] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 1: [2022-11-24 21:08:03,005] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 3: [2022-11-24 21:08:03,007] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_28_mp_rank_00_optim_states.pt. 3: [2022-11-24 21:08:03,007] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_28_mp_rank_00_optim_states.pt 3: [2022-11-24 21:08:03,007] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 0: [2022-11-24 21:08:03,007] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_3_mp_rank_00_optim_states.pt. 3: [2022-11-24 21:08:03,007] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_26_mp_rank_00_optim_states.pt. 
0: [2022-11-24 21:08:03,007] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_3_mp_rank_00_optim_states.pt 3: [2022-11-24 21:08:03,007] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_26_mp_rank_00_optim_states.pt 0: [2022-11-24 21:08:03,007] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 3: [2022-11-24 21:08:03,007] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 3: [2022-11-24 21:08:03,008] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_31_mp_rank_00_optim_states.pt. 3: [2022-11-24 21:08:03,008] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_31_mp_rank_00_optim_states.pt 3: [2022-11-24 21:08:03,008] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 2: [2022-11-24 21:08:03,010] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_21_mp_rank_00_optim_states.pt. 2: [2022-11-24 21:08:03,010] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_21_mp_rank_00_optim_states.pt 2: [2022-11-24 21:08:03,010] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 4: [2022-11-24 21:08:03,011] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_36_mp_rank_00_optim_states.pt. 1: [2022-11-24 21:08:03,012] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_14_mp_rank_00_optim_states.pt. 4: [2022-11-24 21:08:03,011] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_36_mp_rank_00_optim_states.pt 4: [2022-11-24 21:08:03,011] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 1: [2022-11-24 21:08:03,012] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_14_mp_rank_00_optim_states.pt 1: [2022-11-24 21:08:03,012] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 1: [2022-11-24 21:08:03,012] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_13_mp_rank_00_optim_states.pt. 1: [2022-11-24 21:08:03,012] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_13_mp_rank_00_optim_states.pt 1: [2022-11-24 21:08:03,012] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 4: [2022-11-24 21:08:03,012] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_35_mp_rank_00_optim_states.pt. 4: [2022-11-24 21:08:03,012] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_35_mp_rank_00_optim_states.pt 4: [2022-11-24 21:08:03,012] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 
2: [2022-11-24 21:08:03,013] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_16_mp_rank_00_optim_states.pt. 2: [2022-11-24 21:08:03,013] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_16_mp_rank_00_optim_states.pt 2: [2022-11-24 21:08:03,013] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_20_mp_rank_00_optim_states.pt. 2: [2022-11-24 21:08:03,013] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 2: [2022-11-24 21:08:03,013] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_20_mp_rank_00_optim_states.pt 2: [2022-11-24 21:08:03,013] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 4: [2022-11-24 21:08:03,015] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_33_mp_rank_00_optim_states.pt. 4: [2022-11-24 21:08:03,015] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_33_mp_rank_00_optim_states.pt 4: [2022-11-24 21:08:03,015] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 3: [2022-11-24 21:08:03,016] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_24_mp_rank_00_optim_states.pt. 3: [2022-11-24 21:08:03,016] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_24_mp_rank_00_optim_states.pt 3: [2022-11-24 21:08:03,016] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 3: [2022-11-24 21:08:03,016] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_25_mp_rank_00_optim_states.pt. 3: [2022-11-24 21:08:03,016] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_25_mp_rank_00_optim_states.pt 3: [2022-11-24 21:08:03,016] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 2: [2022-11-24 21:08:03,018] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_19_mp_rank_00_optim_states.pt. 2: [2022-11-24 21:08:03,018] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_19_mp_rank_00_optim_states.pt 2: [2022-11-24 21:08:03,018] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 4: [2022-11-24 21:08:03,024] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_37_mp_rank_00_optim_states.pt. 4: [2022-11-24 21:08:03,024] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_37_mp_rank_00_optim_states.pt 4: [2022-11-24 21:08:03,024] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 4: [2022-11-24 21:08:03,025] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_39_mp_rank_00_optim_states.pt. 
4: [2022-11-24 21:08:03,025] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_39_mp_rank_00_optim_states.pt 4: [2022-11-24 21:08:03,025] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 5: [2022-11-24 21:08:03,047] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_40_mp_rank_00_optim_states.pt. 5: [2022-11-24 21:08:03,047] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_40_mp_rank_00_optim_states.pt 5: [2022-11-24 21:08:03,047] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 5: [2022-11-24 21:08:03,048] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_41_mp_rank_00_optim_states.pt. 5: [2022-11-24 21:08:03,048] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_41_mp_rank_00_optim_states.pt 5: [2022-11-24 21:08:03,048] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 0: [2022-11-24 21:08:03,050] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_1_mp_rank_00_optim_states.pt. 0: [2022-11-24 21:08:03,050] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt. 0: [2022-11-24 21:08:03,050] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_2_mp_rank_00_optim_states.pt. 0: [2022-11-24 21:08:03,050] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_1_mp_rank_00_optim_states.pt 0: [2022-11-24 21:08:03,050] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_5_mp_rank_00_optim_states.pt. 0: [2022-11-24 21:08:03,050] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_2_mp_rank_00_optim_states.pt 0: [2022-11-24 21:08:03,050] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 0: [2022-11-24 21:08:03,050] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 0: [2022-11-24 21:08:03,050] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_5_mp_rank_00_optim_states.pt 0: [2022-11-24 21:08:03,050] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 7: [2022-11-24 21:08:03,054] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_59_mp_rank_00_optim_states.pt. 7: [2022-11-24 21:08:03,054] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_59_mp_rank_00_optim_states.pt 7: [2022-11-24 21:08:03,054] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 6: [2022-11-24 21:08:03,054] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_50_mp_rank_00_optim_states.pt. 
6: [2022-11-24 21:08:03,054] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_50_mp_rank_00_optim_states.pt 6: [2022-11-24 21:08:03,054] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 6: [2022-11-24 21:08:03,076] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_55_mp_rank_00_optim_states.pt. 6: [2022-11-24 21:08:03,076] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_55_mp_rank_00_optim_states.pt 6: [2022-11-24 21:08:03,076] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 7: [2022-11-24 21:08:03,078] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_61_mp_rank_00_optim_states.pt. 7: [2022-11-24 21:08:03,078] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_61_mp_rank_00_optim_states.pt 7: [2022-11-24 21:08:03,079] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 0: [2022-11-24 21:08:03,080] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt 0: [2022-11-24 21:08:03,080] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 7: [2022-11-24 21:08:03,081] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_60_mp_rank_00_optim_states.pt. 7: [2022-11-24 21:08:03,081] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_60_mp_rank_00_optim_states.pt 7: [2022-11-24 21:08:03,081] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 6: [2022-11-24 21:08:03,088] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_52_mp_rank_00_optim_states.pt. 6: [2022-11-24 21:08:03,088] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_52_mp_rank_00_optim_states.pt 6: [2022-11-24 21:08:03,088] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 6: [2022-11-24 21:08:03,089] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_48_mp_rank_00_optim_states.pt. 6: [2022-11-24 21:08:03,089] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_48_mp_rank_00_optim_states.pt 6: [2022-11-24 21:08:03,089] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 5: [2022-11-24 21:08:03,110] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_43_mp_rank_00_optim_states.pt. 5: [2022-11-24 21:08:03,110] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_43_mp_rank_00_optim_states.pt 5: [2022-11-24 21:08:03,110] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 
6: [2022-11-24 21:08:03,114] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_51_mp_rank_00_optim_states.pt. 6: [2022-11-24 21:08:03,114] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_51_mp_rank_00_optim_states.pt 6: [2022-11-24 21:08:03,114] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 6: [2022-11-24 21:08:03,115] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_53_mp_rank_00_optim_states.pt. 6: [2022-11-24 21:08:03,115] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_53_mp_rank_00_optim_states.pt 6: [2022-11-24 21:08:03,115] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 6: [2022-11-24 21:08:03,119] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_49_mp_rank_00_optim_states.pt. 6: [2022-11-24 21:08:03,120] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_49_mp_rank_00_optim_states.pt 6: [2022-11-24 21:08:03,120] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 7: [2022-11-24 21:08:03,195] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_56_mp_rank_00_optim_states.pt. 7: [2022-11-24 21:08:03,195] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_56_mp_rank_00_optim_states.pt 7: [2022-11-24 21:08:03,195] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 5: [2022-11-24 21:08:03,199] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_44_mp_rank_00_optim_states.pt. 5: [2022-11-24 21:08:03,199] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_44_mp_rank_00_optim_states.pt 5: [2022-11-24 21:08:03,199] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 5: [2022-11-24 21:08:03,208] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_42_mp_rank_00_optim_states.pt. 5: [2022-11-24 21:08:03,208] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_42_mp_rank_00_optim_states.pt 5: [2022-11-24 21:08:03,209] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 6: [2022-11-24 21:08:03,212] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_54_mp_rank_00_optim_states.pt. 6: [2022-11-24 21:08:03,212] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_54_mp_rank_00_optim_states.pt 6: [2022-11-24 21:08:03,212] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 7: [2022-11-24 21:08:03,261] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_62_mp_rank_00_optim_states.pt. 
7: [2022-11-24 21:08:03,261] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_62_mp_rank_00_optim_states.pt 7: [2022-11-24 21:08:03,261] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 5: [2022-11-24 21:08:03,281] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_46_mp_rank_00_optim_states.pt. 5: [2022-11-24 21:08:03,281] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_46_mp_rank_00_optim_states.pt 5: [2022-11-24 21:08:03,281] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 5: [2022-11-24 21:08:03,284] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_47_mp_rank_00_optim_states.pt. 5: [2022-11-24 21:08:03,284] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_47_mp_rank_00_optim_states.pt 5: [2022-11-24 21:08:03,284] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 5: [2022-11-24 21:08:03,315] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_45_mp_rank_00_optim_states.pt. 5: [2022-11-24 21:08:03,315] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_45_mp_rank_00_optim_states.pt 5: [2022-11-24 21:08:03,315] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 7: [2022-11-24 21:08:03,367] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_58_mp_rank_00_optim_states.pt. 7: [2022-11-24 21:08:03,367] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2000/bf16_zero_pp_rank_58_mp_rank_00_optim_states.pt 7: [2022-11-24 21:08:03,367] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2000 is ready now! 
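(Annotation, not part of the original log.) The iteration lines that follow after the global_step2000 checkpoint are internally consistent: consumed samples advance by the global batch size of 256 per iteration, consumed tokens are samples times 2048 (the sequence length implied by the log itself, e.g. 1053818880 / 514560 = 2048), and the reported samples-per-second figure is roughly the global batch size divided by the elapsed time per iteration. A minimal sketch of that check; the helper name and tolerance are ours, not anything emitted by the training code:

def check_iteration_line(iteration, consumed_samples, consumed_tokens,
                         elapsed_s, samples_per_second,
                         global_batch_size=256, seq_length=2048):
    # samples accumulate in steps of the global batch size
    assert consumed_samples == iteration * global_batch_size
    # every sample contributes one full sequence of tokens
    assert consumed_tokens == consumed_samples * seq_length
    # reported throughput is approximately global batch size / time per iteration
    assert abs(samples_per_second - global_batch_size / elapsed_s) < 2.0

# values copied from the "iteration 2010/ 2891" line below
check_iteration_line(2010, 514560, 1053818880, 1.62, 158.371)
# and from the "iteration 2100/ 2891" line
check_iteration_line(2100, 537600, 1101004800, 1.26, 202.576)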
0: successfully saved checkpoint at iteration 2000 to checkpoints_1b1 7: time (ms) | save-checkpoint: 2795.22 7: iteration 2010/ 2891 | consumed samples: 514560 | consumed tokens: 1053818880 | elapsed time per iteration (s): 1.62 | learning rate: 5.894E-05 | global batch size: 256 | lm loss: 2.919459E+00 | grad norm: 0.270 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 158.371 | TFLOPs: 38.32 | 7: iteration 2020/ 2891 | consumed samples: 517120 | consumed tokens: 1059061760 | elapsed time per iteration (s): 1.27 | learning rate: 5.813E-05 | global batch size: 256 | lm loss: 2.932092E+00 | grad norm: 0.275 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.012 | TFLOPs: 48.64 | 7: iteration 2030/ 2891 | consumed samples: 519680 | consumed tokens: 1064304640 | elapsed time per iteration (s): 1.27 | learning rate: 5.733E-05 | global batch size: 256 | lm loss: 2.853943E+00 | grad norm: 0.269 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.901 | TFLOPs: 48.86 | 7: iteration 2040/ 2891 | consumed samples: 522240 | consumed tokens: 1069547520 | elapsed time per iteration (s): 1.28 | learning rate: 5.653E-05 | global batch size: 256 | lm loss: 2.877859E+00 | grad norm: 0.253 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 199.940 | TFLOPs: 48.38 | 7: iteration 2050/ 2891 | consumed samples: 524800 | consumed tokens: 1074790400 | elapsed time per iteration (s): 1.28 | learning rate: 5.574E-05 | global batch size: 256 | lm loss: 2.905728E+00 | grad norm: 0.248 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.470 | TFLOPs: 48.51 | 7: iteration 2060/ 2891 | consumed samples: 527360 | consumed tokens: 1080033280 | elapsed time per iteration (s): 1.26 | learning rate: 5.495E-05 | global batch size: 256 | lm loss: 2.879441E+00 | grad norm: 0.245 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.919 | TFLOPs: 49.10 | 7: iteration 2070/ 2891 | consumed samples: 529920 | consumed tokens: 1085276160 | elapsed time per iteration (s): 1.28 | learning rate: 5.418E-05 | global batch size: 256 | lm loss: 2.857717E+00 | grad norm: 0.251 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.535 | TFLOPs: 48.53 | 7: iteration 2080/ 2891 | consumed samples: 532480 | consumed tokens: 1090519040 | elapsed time per iteration (s): 1.27 | learning rate: 5.340E-05 | global batch size: 256 | lm loss: 2.878311E+00 | grad norm: 0.254 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.915 | TFLOPs: 48.86 | 7: iteration 2090/ 2891 | consumed samples: 535040 | consumed tokens: 1095761920 | elapsed time per iteration (s): 1.26 | learning rate: 5.264E-05 | global batch size: 256 | lm loss: 2.871052E+00 | grad norm: 0.251 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 203.402 | TFLOPs: 49.22 | 7: iteration 2100/ 2891 | consumed samples: 537600 | consumed tokens: 1101004800 | elapsed time per iteration (s): 1.26 | learning rate: 5.188E-05 | global batch size: 256 | lm loss: 2.860960E+00 | grad norm: 0.264 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.576 | TFLOPs: 49.02 | 7: 
iteration 2110/ 2891 | consumed samples: 540160 | consumed tokens: 1106247680 | elapsed time per iteration (s): 1.27 | learning rate: 5.113E-05 | global batch size: 256 | lm loss: 2.848944E+00 | grad norm: 0.334 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.274 | TFLOPs: 48.71 | 7: iteration 2120/ 2891 | consumed samples: 542720 | consumed tokens: 1111490560 | elapsed time per iteration (s): 1.26 | learning rate: 5.039E-05 | global batch size: 256 | lm loss: 2.862064E+00 | grad norm: 0.257 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 203.001 | TFLOPs: 49.12 | 7: iteration 2130/ 2891 | consumed samples: 545280 | consumed tokens: 1116733440 | elapsed time per iteration (s): 1.27 | learning rate: 4.965E-05 | global batch size: 256 | lm loss: 2.862977E+00 | grad norm: 0.261 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.476 | TFLOPs: 48.76 | 7: iteration 2140/ 2891 | consumed samples: 547840 | consumed tokens: 1121976320 | elapsed time per iteration (s): 1.27 | learning rate: 4.892E-05 | global batch size: 256 | lm loss: 2.858904E+00 | grad norm: 0.257 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.108 | TFLOPs: 48.91 | 7: iteration 2150/ 2891 | consumed samples: 550400 | consumed tokens: 1127219200 | elapsed time per iteration (s): 1.29 | learning rate: 4.820E-05 | global batch size: 256 | lm loss: 2.848347E+00 | grad norm: 0.266 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 198.874 | TFLOPs: 48.13 | 7: iteration 2160/ 2891 | consumed samples: 552960 | consumed tokens: 1132462080 | elapsed time per iteration (s): 1.29 | learning rate: 4.749E-05 | global batch size: 256 | lm loss: 2.851582E+00 | grad norm: 0.239 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 198.615 | TFLOPs: 48.06 | 7: iteration 2170/ 2891 | consumed samples: 555520 | consumed tokens: 1137704960 | elapsed time per iteration (s): 1.32 | learning rate: 4.678E-05 | global batch size: 256 | lm loss: 2.868348E+00 | grad norm: 0.255 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 193.430 | TFLOPs: 46.81 | 7: iteration 2180/ 2891 | consumed samples: 558080 | consumed tokens: 1142947840 | elapsed time per iteration (s): 1.27 | learning rate: 4.608E-05 | global batch size: 256 | lm loss: 2.876728E+00 | grad norm: 0.265 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.566 | TFLOPs: 48.78 | 7: iteration 2190/ 2891 | consumed samples: 560640 | consumed tokens: 1148190720 | elapsed time per iteration (s): 1.29 | learning rate: 4.539E-05 | global batch size: 256 | lm loss: 2.851987E+00 | grad norm: 0.249 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 198.987 | TFLOPs: 48.15 | 7: iteration 2200/ 2891 | consumed samples: 563200 | consumed tokens: 1153433600 | elapsed time per iteration (s): 1.29 | learning rate: 4.471E-05 | global batch size: 256 | lm loss: 2.875356E+00 | grad norm: 0.249 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 198.981 | TFLOPs: 48.15 | 7: iteration 2210/ 2891 | consumed samples: 565760 | consumed tokens: 1158676480 | elapsed time per iteration (s): 
1.26 | learning rate: 4.403E-05 | global batch size: 256 | lm loss: 2.827237E+00 | grad norm: 0.245 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.469 | TFLOPs: 49.00 | 7: iteration 2220/ 2891 | consumed samples: 568320 | consumed tokens: 1163919360 | elapsed time per iteration (s): 1.28 | learning rate: 4.336E-05 | global batch size: 256 | lm loss: 2.894486E+00 | grad norm: 0.261 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 199.990 | TFLOPs: 48.40 | 7: iteration 2230/ 2891 | consumed samples: 570880 | consumed tokens: 1169162240 | elapsed time per iteration (s): 1.27 | learning rate: 4.270E-05 | global batch size: 256 | lm loss: 2.848923E+00 | grad norm: 0.255 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.553 | TFLOPs: 48.77 | 7: iteration 2240/ 2891 | consumed samples: 573440 | consumed tokens: 1174405120 | elapsed time per iteration (s): 1.26 | learning rate: 4.205E-05 | global batch size: 256 | lm loss: 2.874922E+00 | grad norm: 0.256 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.808 | TFLOPs: 49.08 | 7: iteration 2250/ 2891 | consumed samples: 576000 | consumed tokens: 1179648000 | elapsed time per iteration (s): 1.27 | learning rate: 4.141E-05 | global batch size: 256 | lm loss: 2.836610E+00 | grad norm: 0.243 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.706 | TFLOPs: 48.81 | 7: iteration 2260/ 2891 | consumed samples: 578560 | consumed tokens: 1184890880 | elapsed time per iteration (s): 1.27 | learning rate: 4.077E-05 | global batch size: 256 | lm loss: 2.804289E+00 | grad norm: 0.240 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.666 | TFLOPs: 48.80 | 7: iteration 2270/ 2891 | consumed samples: 581120 | consumed tokens: 1190133760 | elapsed time per iteration (s): 1.27 | learning rate: 4.014E-05 | global batch size: 256 | lm loss: 2.808503E+00 | grad norm: 0.257 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.374 | TFLOPs: 48.73 | 7: iteration 2280/ 2891 | consumed samples: 583680 | consumed tokens: 1195376640 | elapsed time per iteration (s): 1.29 | learning rate: 3.953E-05 | global batch size: 256 | lm loss: 2.833229E+00 | grad norm: 0.264 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 198.831 | TFLOPs: 48.11 | 7: iteration 2290/ 2891 | consumed samples: 586240 | consumed tokens: 1200619520 | elapsed time per iteration (s): 1.27 | learning rate: 3.892E-05 | global batch size: 256 | lm loss: 2.864612E+00 | grad norm: 0.266 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.051 | TFLOPs: 48.65 | 7: iteration 2300/ 2891 | consumed samples: 588800 | consumed tokens: 1205862400 | elapsed time per iteration (s): 1.29 | learning rate: 3.831E-05 | global batch size: 256 | lm loss: 2.838851E+00 | grad norm: 0.251 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 197.727 | TFLOPs: 47.85 | 7: iteration 2310/ 2891 | consumed samples: 591360 | consumed tokens: 1211105280 | elapsed time per iteration (s): 1.28 | learning rate: 3.772E-05 | global batch size: 256 | lm loss: 2.862267E+00 | grad norm: 0.259 | num zeros: 
0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.488 | TFLOPs: 48.52 | 7: iteration 2320/ 2891 | consumed samples: 593920 | consumed tokens: 1216348160 | elapsed time per iteration (s): 1.27 | learning rate: 3.714E-05 | global batch size: 256 | lm loss: 2.854469E+00 | grad norm: 0.250 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.255 | TFLOPs: 48.70 | 7: iteration 2330/ 2891 | consumed samples: 596480 | consumed tokens: 1221591040 | elapsed time per iteration (s): 1.28 | learning rate: 3.656E-05 | global batch size: 256 | lm loss: 2.785706E+00 | grad norm: 0.255 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 199.445 | TFLOPs: 48.26 | 7: iteration 2340/ 2891 | consumed samples: 599040 | consumed tokens: 1226833920 | elapsed time per iteration (s): 1.27 | learning rate: 3.600E-05 | global batch size: 256 | lm loss: 2.798930E+00 | grad norm: 0.251 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.959 | TFLOPs: 48.63 | 7: iteration 2350/ 2891 | consumed samples: 601600 | consumed tokens: 1232076800 | elapsed time per iteration (s): 1.27 | learning rate: 3.544E-05 | global batch size: 256 | lm loss: 2.827826E+00 | grad norm: 0.252 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.774 | TFLOPs: 48.83 | 7: iteration 2360/ 2891 | consumed samples: 604160 | consumed tokens: 1237319680 | elapsed time per iteration (s): 1.28 | learning rate: 3.489E-05 | global batch size: 256 | lm loss: 2.845606E+00 | grad norm: 0.253 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 199.486 | TFLOPs: 48.27 | 7: iteration 2370/ 2891 | consumed samples: 606720 | consumed tokens: 1242562560 | elapsed time per iteration (s): 1.29 | learning rate: 3.435E-05 | global batch size: 256 | lm loss: 2.854224E+00 | grad norm: 0.252 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 199.010 | TFLOPs: 48.16 | 7: iteration 2380/ 2891 | consumed samples: 609280 | consumed tokens: 1247805440 | elapsed time per iteration (s): 1.29 | learning rate: 3.382E-05 | global batch size: 256 | lm loss: 2.833812E+00 | grad norm: 0.255 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 199.147 | TFLOPs: 48.19 | 7: iteration 2390/ 2891 | consumed samples: 611840 | consumed tokens: 1253048320 | elapsed time per iteration (s): 1.28 | learning rate: 3.330E-05 | global batch size: 256 | lm loss: 2.822174E+00 | grad norm: 0.262 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 199.528 | TFLOPs: 48.28 | 7: iteration 2400/ 2891 | consumed samples: 614400 | consumed tokens: 1258291200 | elapsed time per iteration (s): 1.27 | learning rate: 3.279E-05 | global batch size: 256 | lm loss: 2.868549E+00 | grad norm: 0.248 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.976 | TFLOPs: 48.63 | 7: iteration 2410/ 2891 | consumed samples: 616960 | consumed tokens: 1263534080 | elapsed time per iteration (s): 1.27 | learning rate: 3.228E-05 | global batch size: 256 | lm loss: 2.831634E+00 | grad norm: 0.267 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.641 | TFLOPs: 48.80 
| 7: iteration 2420/ 2891 | consumed samples: 619520 | consumed tokens: 1268776960 | elapsed time per iteration (s): 1.28 | learning rate: 3.179E-05 | global batch size: 256 | lm loss: 2.823357E+00 | grad norm: 0.239 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.067 | TFLOPs: 48.41 | 7: iteration 2430/ 2891 | consumed samples: 622080 | consumed tokens: 1274019840 | elapsed time per iteration (s): 1.26 | learning rate: 3.131E-05 | global batch size: 256 | lm loss: 2.828638E+00 | grad norm: 0.246 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.396 | TFLOPs: 48.98 | 7: iteration 2440/ 2891 | consumed samples: 624640 | consumed tokens: 1279262720 | elapsed time per iteration (s): 1.27 | learning rate: 3.083E-05 | global batch size: 256 | lm loss: 2.800687E+00 | grad norm: 0.245 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.414 | TFLOPs: 48.74 | 7: iteration 2450/ 2891 | consumed samples: 627200 | consumed tokens: 1284505600 | elapsed time per iteration (s): 1.26 | learning rate: 3.037E-05 | global batch size: 256 | lm loss: 2.824464E+00 | grad norm: 0.244 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 203.599 | TFLOPs: 49.27 | 7: iteration 2460/ 2891 | consumed samples: 629760 | consumed tokens: 1289748480 | elapsed time per iteration (s): 1.29 | learning rate: 2.991E-05 | global batch size: 256 | lm loss: 2.851115E+00 | grad norm: 0.251 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 198.887 | TFLOPs: 48.13 | 7: iteration 2470/ 2891 | consumed samples: 632320 | consumed tokens: 1294991360 | elapsed time per iteration (s): 1.26 | learning rate: 2.947E-05 | global batch size: 256 | lm loss: 2.820526E+00 | grad norm: 0.254 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.981 | TFLOPs: 49.12 | 7: iteration 2480/ 2891 | consumed samples: 634880 | consumed tokens: 1300234240 | elapsed time per iteration (s): 1.27 | learning rate: 2.903E-05 | global batch size: 256 | lm loss: 2.799962E+00 | grad norm: 0.250 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.887 | TFLOPs: 48.85 | 7: iteration 2490/ 2891 | consumed samples: 637440 | consumed tokens: 1305477120 | elapsed time per iteration (s): 1.27 | learning rate: 2.860E-05 | global batch size: 256 | lm loss: 2.799555E+00 | grad norm: 0.242 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.814 | TFLOPs: 48.59 | 7: iteration 2500/ 2891 | consumed samples: 640000 | consumed tokens: 1310720000 | elapsed time per iteration (s): 1.26 | learning rate: 2.819E-05 | global batch size: 256 | lm loss: 2.822092E+00 | grad norm: 0.256 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 203.382 | TFLOPs: 49.22 | 7: iteration 2510/ 2891 | consumed samples: 642560 | consumed tokens: 1315962880 | elapsed time per iteration (s): 1.27 | learning rate: 2.778E-05 | global batch size: 256 | lm loss: 2.835498E+00 | grad norm: 0.247 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.136 | TFLOPs: 48.67 | 7: iteration 2520/ 2891 | consumed samples: 645120 | consumed tokens: 1321205760 | elapsed time per iteration 
(s): 1.27 | learning rate: 2.738E-05 | global batch size: 256 | lm loss: 2.794395E+00 | grad norm: 0.235 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.216 | TFLOPs: 48.69 | 7: iteration 2530/ 2891 | consumed samples: 647680 | consumed tokens: 1326448640 | elapsed time per iteration (s): 1.27 | learning rate: 2.700E-05 | global batch size: 256 | lm loss: 2.808821E+00 | grad norm: 0.240 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.007 | TFLOPs: 48.88 | 7: iteration 2540/ 2891 | consumed samples: 650240 | consumed tokens: 1331691520 | elapsed time per iteration (s): 1.27 | learning rate: 2.662E-05 | global batch size: 256 | lm loss: 2.793146E+00 | grad norm: 0.250 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.216 | TFLOPs: 48.69 | 7: iteration 2550/ 2891 | consumed samples: 652800 | consumed tokens: 1336934400 | elapsed time per iteration (s): 1.28 | learning rate: 2.625E-05 | global batch size: 256 | lm loss: 2.770286E+00 | grad norm: 0.246 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.289 | TFLOPs: 48.47 | 7: iteration 2560/ 2891 | consumed samples: 655360 | consumed tokens: 1342177280 | elapsed time per iteration (s): 1.28 | learning rate: 2.590E-05 | global batch size: 256 | lm loss: 2.784430E+00 | grad norm: 0.259 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 199.743 | TFLOPs: 48.34 | 7: iteration 2570/ 2891 | consumed samples: 657920 | consumed tokens: 1347420160 | elapsed time per iteration (s): 1.26 | learning rate: 2.555E-05 | global batch size: 256 | lm loss: 2.799257E+00 | grad norm: 0.246 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 203.123 | TFLOPs: 49.15 | 7: iteration 2580/ 2891 | consumed samples: 660480 | consumed tokens: 1352663040 | elapsed time per iteration (s): 1.27 | learning rate: 2.521E-05 | global batch size: 256 | lm loss: 2.804961E+00 | grad norm: 0.242 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.170 | TFLOPs: 48.68 | 7: iteration 2590/ 2891 | consumed samples: 663040 | consumed tokens: 1357905920 | elapsed time per iteration (s): 1.28 | learning rate: 2.489E-05 | global batch size: 256 | lm loss: 2.828386E+00 | grad norm: 0.242 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.053 | TFLOPs: 48.41 | 7: iteration 2600/ 2891 | consumed samples: 665600 | consumed tokens: 1363148800 | elapsed time per iteration (s): 1.27 | learning rate: 2.457E-05 | global batch size: 256 | lm loss: 2.777740E+00 | grad norm: 0.253 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.953 | TFLOPs: 48.87 | 7: iteration 2610/ 2891 | consumed samples: 668160 | consumed tokens: 1368391680 | elapsed time per iteration (s): 1.27 | learning rate: 2.427E-05 | global batch size: 256 | lm loss: 2.815404E+00 | grad norm: 0.243 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.950 | TFLOPs: 48.63 | 7: iteration 2620/ 2891 | consumed samples: 670720 | consumed tokens: 1373634560 | elapsed time per iteration (s): 1.28 | learning rate: 2.397E-05 | global batch size: 256 | lm loss: 2.825298E+00 | grad norm: 0.247 | num 
zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.339 | TFLOPs: 48.48 | 7: iteration 2630/ 2891 | consumed samples: 673280 | consumed tokens: 1378877440 | elapsed time per iteration (s): 1.27 | learning rate: 2.369E-05 | global batch size: 256 | lm loss: 2.799451E+00 | grad norm: 0.246 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.902 | TFLOPs: 48.86 | 7: iteration 2640/ 2891 | consumed samples: 675840 | consumed tokens: 1384120320 | elapsed time per iteration (s): 1.28 | learning rate: 2.341E-05 | global batch size: 256 | lm loss: 2.847309E+00 | grad norm: 0.250 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.515 | TFLOPs: 48.52 | 7: iteration 2650/ 2891 | consumed samples: 678400 | consumed tokens: 1389363200 | elapsed time per iteration (s): 1.28 | learning rate: 2.315E-05 | global batch size: 256 | lm loss: 2.814650E+00 | grad norm: 0.242 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 199.473 | TFLOPs: 48.27 | 7: iteration 2660/ 2891 | consumed samples: 680960 | consumed tokens: 1394606080 | elapsed time per iteration (s): 1.28 | learning rate: 2.289E-05 | global batch size: 256 | lm loss: 2.785754E+00 | grad norm: 0.248 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.125 | TFLOPs: 48.43 | 7: iteration 2670/ 2891 | consumed samples: 683520 | consumed tokens: 1399848960 | elapsed time per iteration (s): 1.30 | learning rate: 2.265E-05 | global batch size: 256 | lm loss: 2.793461E+00 | grad norm: 0.246 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 197.358 | TFLOPs: 47.76 | 7: iteration 2680/ 2891 | consumed samples: 686080 | consumed tokens: 1405091840 | elapsed time per iteration (s): 1.28 | learning rate: 2.242E-05 | global batch size: 256 | lm loss: 2.760545E+00 | grad norm: 0.250 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.544 | TFLOPs: 48.53 | 7: iteration 2690/ 2891 | consumed samples: 688640 | consumed tokens: 1410334720 | elapsed time per iteration (s): 1.27 | learning rate: 2.220E-05 | global batch size: 256 | lm loss: 2.787152E+00 | grad norm: 0.249 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.989 | TFLOPs: 48.64 | 7: iteration 2700/ 2891 | consumed samples: 691200 | consumed tokens: 1415577600 | elapsed time per iteration (s): 1.28 | learning rate: 2.198E-05 | global batch size: 256 | lm loss: 2.798705E+00 | grad norm: 0.252 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 199.327 | TFLOPs: 48.24 | 7: iteration 2710/ 2891 | consumed samples: 693760 | consumed tokens: 1420820480 | elapsed time per iteration (s): 1.27 | learning rate: 2.178E-05 | global batch size: 256 | lm loss: 2.760976E+00 | grad norm: 0.250 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.992 | TFLOPs: 48.64 | 7: iteration 2720/ 2891 | consumed samples: 696320 | consumed tokens: 1426063360 | elapsed time per iteration (s): 1.27 | learning rate: 2.159E-05 | global batch size: 256 | lm loss: 2.821980E+00 | grad norm: 0.249 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.479 | TFLOPs: 
48.76 | 7: iteration 2730/ 2891 | consumed samples: 698880 | consumed tokens: 1431306240 | elapsed time per iteration (s): 1.26 | learning rate: 2.141E-05 | global batch size: 256 | lm loss: 2.780820E+00 | grad norm: 0.258 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.844 | TFLOPs: 49.09 | 7: iteration 2740/ 2891 | consumed samples: 701440 | consumed tokens: 1436549120 | elapsed time per iteration (s): 1.29 | learning rate: 2.124E-05 | global batch size: 256 | lm loss: 2.757240E+00 | grad norm: 0.240 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 199.089 | TFLOPs: 48.18 | 7: iteration 2750/ 2891 | consumed samples: 704000 | consumed tokens: 1441792000 | elapsed time per iteration (s): 1.29 | learning rate: 2.109E-05 | global batch size: 256 | lm loss: 2.767885E+00 | grad norm: 0.255 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 197.871 | TFLOPs: 47.88 | 7: iteration 2760/ 2891 | consumed samples: 706560 | consumed tokens: 1447034880 | elapsed time per iteration (s): 1.26 | learning rate: 2.094E-05 | global batch size: 256 | lm loss: 2.782544E+00 | grad norm: 0.277 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.573 | TFLOPs: 49.02 | 7: iteration 2770/ 2891 | consumed samples: 709120 | consumed tokens: 1452277760 | elapsed time per iteration (s): 1.27 | learning rate: 2.080E-05 | global batch size: 256 | lm loss: 2.807351E+00 | grad norm: 0.258 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.197 | TFLOPs: 48.69 | 7: iteration 2780/ 2891 | consumed samples: 711680 | consumed tokens: 1457520640 | elapsed time per iteration (s): 1.32 | learning rate: 2.068E-05 | global batch size: 256 | lm loss: 2.795422E+00 | grad norm: 0.259 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 193.852 | TFLOPs: 46.91 | 7: iteration 2790/ 2891 | consumed samples: 714240 | consumed tokens: 1462763520 | elapsed time per iteration (s): 1.27 | learning rate: 2.056E-05 | global batch size: 256 | lm loss: 2.809049E+00 | grad norm: 0.261 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.671 | TFLOPs: 48.80 | 7: iteration 2800/ 2891 | consumed samples: 716800 | consumed tokens: 1468006400 | elapsed time per iteration (s): 1.28 | learning rate: 2.046E-05 | global batch size: 256 | lm loss: 2.831892E+00 | grad norm: 0.249 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.506 | TFLOPs: 48.52 | 7: iteration 2810/ 2891 | consumed samples: 719360 | consumed tokens: 1473249280 | elapsed time per iteration (s): 1.29 | learning rate: 2.036E-05 | global batch size: 256 | lm loss: 2.751667E+00 | grad norm: 0.251 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 198.767 | TFLOPs: 48.10 | 7: iteration 2820/ 2891 | consumed samples: 721920 | consumed tokens: 1478492160 | elapsed time per iteration (s): 1.27 | learning rate: 2.028E-05 | global batch size: 256 | lm loss: 2.827281E+00 | grad norm: 0.256 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.891 | TFLOPs: 48.86 | 7: iteration 2830/ 2891 | consumed samples: 724480 | consumed tokens: 1483735040 | elapsed time per 
iteration (s): 1.30 | learning rate: 2.021E-05 | global batch size: 256 | lm loss: 2.819973E+00 | grad norm: 0.260 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 196.456 | TFLOPs: 47.54 | 7: iteration 2840/ 2891 | consumed samples: 727040 | consumed tokens: 1488977920 | elapsed time per iteration (s): 1.28 | learning rate: 2.014E-05 | global batch size: 256 | lm loss: 2.776796E+00 | grad norm: 0.281 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 199.591 | TFLOPs: 48.30 | 7: iteration 2850/ 2891 | consumed samples: 729600 | consumed tokens: 1494220800 | elapsed time per iteration (s): 1.27 | learning rate: 2.009E-05 | global batch size: 256 | lm loss: 2.797709E+00 | grad norm: 0.252 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 201.425 | TFLOPs: 48.74 | 7: iteration 2860/ 2891 | consumed samples: 732160 | consumed tokens: 1499463680 | elapsed time per iteration (s): 1.27 | learning rate: 2.005E-05 | global batch size: 256 | lm loss: 2.794808E+00 | grad norm: 0.240 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.000 | TFLOPs: 48.88 | 7: iteration 2870/ 2891 | consumed samples: 734720 | consumed tokens: 1504706560 | elapsed time per iteration (s): 1.28 | learning rate: 2.003E-05 | global batch size: 256 | lm loss: 2.803428E+00 | grad norm: 0.246 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.153 | TFLOPs: 48.44 | 7: iteration 2880/ 2891 | consumed samples: 737280 | consumed tokens: 1509949440 | elapsed time per iteration (s): 1.26 | learning rate: 2.001E-05 | global batch size: 256 | lm loss: 2.790293E+00 | grad norm: 0.242 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 202.895 | TFLOPs: 49.10 | 7: iteration 2890/ 2891 | consumed samples: 739840 | consumed tokens: 1515192320 | elapsed time per iteration (s): 1.28 | learning rate: 2.000E-05 | global batch size: 256 | lm loss: 2.775094E+00 | grad norm: 0.258 | num zeros: 0.0 | number of skipped iterations: 0 | number of nan iterations: 0 | samples per second: 200.463 | TFLOPs: 48.51 | 0: [after training is done] datetime: 2022-11-24 21:27:00 0: saving checkpoint at iteration 2891 to checkpoints_1b1 7: ------------------------------------------------------------------------------------------------------------ 7: valid loss at the end of training for val data | lm loss value: 2.731300E+00 | lm loss PPL: 1.535284E+01 | 7: ------------------------------------------------------------------------------------------------------------ 0: [2022-11-24 21:27:00,642] [INFO] [logging.py:68:log_dist] [Rank 0] [Torch] Checkpoint global_step2891 is begin to save! 0: [2022-11-24 21:27:00,645] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/layer_01-model_00-model_states.pt... 0: [2022-11-24 21:27:00,837] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/layer_01-model_00-model_states.pt. 0: [2022-11-24 21:27:00,838] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/layer_03-model_00-model_states.pt... 0: [2022-11-24 21:27:00,915] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/layer_03-model_00-model_states.pt. 
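(Annotation, not part of the original log.) The end-of-training validation summary above reports both an lm loss and a perplexity, and the two agree with perplexity being the exponential of the loss. A quick check in plain Python, independent of the training scripts:

import math

lm_loss = 2.7313            # "lm loss value" from the validation summary
ppl = math.exp(lm_loss)     # perplexity as exp(loss)
print(f"{ppl:.4f}")         # ~15.3528, matching the reported 1.535284E+01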
0: [2022-11-24 21:27:00,915] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/layer_04-model_00-model_states.pt... 0: [2022-11-24 21:27:00,988] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/layer_04-model_00-model_states.pt. 0: [2022-11-24 21:27:00,988] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/layer_05-model_00-model_states.pt... 0: [2022-11-24 21:27:01,061] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/layer_05-model_00-model_states.pt. 0: [2022-11-24 21:27:01,062] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/layer_06-model_00-model_states.pt... 0: [2022-11-24 21:27:01,135] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/layer_06-model_00-model_states.pt. 0: [2022-11-24 21:27:01,135] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/layer_07-model_00-model_states.pt... 0: [2022-11-24 21:27:01,208] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/layer_07-model_00-model_states.pt. 0: [2022-11-24 21:27:01,208] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/layer_08-model_00-model_states.pt... 0: [2022-11-24 21:27:01,282] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/layer_08-model_00-model_states.pt. 0: [2022-11-24 21:27:01,282] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/layer_09-model_00-model_states.pt... 0: [2022-11-24 21:27:01,355] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/layer_09-model_00-model_states.pt. 0: [2022-11-24 21:27:01,355] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/layer_10-model_00-model_states.pt... 0: [2022-11-24 21:27:01,424] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/layer_10-model_00-model_states.pt. 0: [2022-11-24 21:27:01,424] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/layer_11-model_00-model_states.pt... 0: [2022-11-24 21:27:01,501] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/layer_11-model_00-model_states.pt. 0: [2022-11-24 21:27:01,501] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/layer_12-model_00-model_states.pt... 0: [2022-11-24 21:27:01,575] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/layer_12-model_00-model_states.pt. 0: [2022-11-24 21:27:01,576] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/layer_13-model_00-model_states.pt... 0: [2022-11-24 21:27:01,649] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/layer_13-model_00-model_states.pt. 0: [2022-11-24 21:27:01,650] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/layer_14-model_00-model_states.pt... 0: [2022-11-24 21:27:01,722] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/layer_14-model_00-model_states.pt. 
0: [2022-11-24 21:27:01,723] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/layer_15-model_00-model_states.pt... 0: [2022-11-24 21:27:01,795] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/layer_15-model_00-model_states.pt. 0: [2022-11-24 21:27:01,796] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/layer_16-model_00-model_states.pt... 0: [2022-11-24 21:27:01,866] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/layer_16-model_00-model_states.pt. 0: [2022-11-24 21:27:01,866] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/layer_17-model_00-model_states.pt... 0: [2022-11-24 21:27:01,940] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/layer_17-model_00-model_states.pt. 0: [2022-11-24 21:27:01,940] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/layer_18-model_00-model_states.pt... 0: [2022-11-24 21:27:02,016] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/layer_18-model_00-model_states.pt. 0: [2022-11-24 21:27:02,016] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/layer_19-model_00-model_states.pt... 0: [2022-11-24 21:27:02,090] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/layer_19-model_00-model_states.pt. 0: [2022-11-24 21:27:02,090] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/layer_20-model_00-model_states.pt... 0: [2022-11-24 21:27:02,161] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/layer_20-model_00-model_states.pt. 0: [2022-11-24 21:27:02,162] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/layer_21-model_00-model_states.pt... 0: [2022-11-24 21:27:02,235] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/layer_21-model_00-model_states.pt. 0: [2022-11-24 21:27:02,236] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/layer_22-model_00-model_states.pt... 0: [2022-11-24 21:27:02,310] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/layer_22-model_00-model_states.pt. 0: [2022-11-24 21:27:02,310] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/layer_23-model_00-model_states.pt... 0: [2022-11-24 21:27:02,384] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/layer_23-model_00-model_states.pt. 0: [2022-11-24 21:27:02,384] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/layer_24-model_00-model_states.pt... 0: [2022-11-24 21:27:02,455] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/layer_24-model_00-model_states.pt. 0: [2022-11-24 21:27:02,455] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/layer_25-model_00-model_states.pt... 0: [2022-11-24 21:27:02,528] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/layer_25-model_00-model_states.pt. 
0: [2022-11-24 21:27:02,528] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/layer_26-model_00-model_states.pt... 0: [2022-11-24 21:27:02,600] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/layer_26-model_00-model_states.pt. 0: [2022-11-24 21:27:02,601] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/layer_27-model_00-model_states.pt... 0: [2022-11-24 21:27:02,674] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/layer_27-model_00-model_states.pt. 0: [2022-11-24 21:27:02,674] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/layer_28-model_00-model_states.pt... 0: [2022-11-24 21:27:02,750] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/layer_28-model_00-model_states.pt. 0: [2022-11-24 21:27:02,750] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/layer_30-model_00-model_states.pt... 0: [2022-11-24 21:27:02,751] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/layer_30-model_00-model_states.pt. 0: [2022-11-24 21:27:02,752] [INFO] [logging.py:68:log_dist] [Rank 0] Saving model checkpoint: checkpoints_1b1/global_step2891/mp_rank_00_model_states.pt 0: [2022-11-24 21:27:02,752] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/mp_rank_00_model_states.pt... 0: [2022-11-24 21:27:02,755] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/mp_rank_00_model_states.pt. 0: [2022-11-24 21:27:02,777] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt... 0: [2022-11-24 21:27:02,777] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/bf16_zero_pp_rank_7_mp_rank_00_optim_states.pt... 0: [2022-11-24 21:27:02,777] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/bf16_zero_pp_rank_1_mp_rank_00_optim_states.pt... 0: [2022-11-24 21:27:02,777] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/bf16_zero_pp_rank_6_mp_rank_00_optim_states.pt... 0: [2022-11-24 21:27:02,777] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/bf16_zero_pp_rank_4_mp_rank_00_optim_states.pt... 0: [2022-11-24 21:27:02,777] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/bf16_zero_pp_rank_5_mp_rank_00_optim_states.pt... 0: [2022-11-24 21:27:02,777] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/bf16_zero_pp_rank_3_mp_rank_00_optim_states.pt... 0: [2022-11-24 21:27:02,777] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/bf16_zero_pp_rank_2_mp_rank_00_optim_states.pt... 1: [2022-11-24 21:27:02,777] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/bf16_zero_pp_rank_8_mp_rank_00_optim_states.pt... 1: [2022-11-24 21:27:02,777] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/bf16_zero_pp_rank_9_mp_rank_00_optim_states.pt... 
1: [2022-11-24 21:27:02,777] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/bf16_zero_pp_rank_13_mp_rank_00_optim_states.pt... 1: [2022-11-24 21:27:02,777] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/bf16_zero_pp_rank_12_mp_rank_00_optim_states.pt... 1: [2022-11-24 21:27:02,777] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/bf16_zero_pp_rank_14_mp_rank_00_optim_states.pt... 1: [2022-11-24 21:27:02,777] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/bf16_zero_pp_rank_15_mp_rank_00_optim_states.pt... 6: [2022-11-24 21:27:02,777] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/bf16_zero_pp_rank_49_mp_rank_00_optim_states.pt... 6: [2022-11-24 21:27:02,777] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/bf16_zero_pp_rank_54_mp_rank_00_optim_states.pt... 6: [2022-11-24 21:27:02,777] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/bf16_zero_pp_rank_55_mp_rank_00_optim_states.pt... 6: [2022-11-24 21:27:02,777] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/bf16_zero_pp_rank_48_mp_rank_00_optim_states.pt... 6: [2022-11-24 21:27:02,777] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/bf16_zero_pp_rank_51_mp_rank_00_optim_states.pt... 6: [2022-11-24 21:27:02,777] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/bf16_zero_pp_rank_50_mp_rank_00_optim_states.pt... 3: [2022-11-24 21:27:02,777] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/bf16_zero_pp_rank_31_mp_rank_00_optim_states.pt... 3: [2022-11-24 21:27:02,777] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/bf16_zero_pp_rank_24_mp_rank_00_optim_states.pt... 3: [2022-11-24 21:27:02,777] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/bf16_zero_pp_rank_25_mp_rank_00_optim_states.pt... 3: [2022-11-24 21:27:02,777] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/bf16_zero_pp_rank_30_mp_rank_00_optim_states.pt... 3: [2022-11-24 21:27:02,777] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/bf16_zero_pp_rank_29_mp_rank_00_optim_states.pt... 3: [2022-11-24 21:27:02,777] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/bf16_zero_pp_rank_27_mp_rank_00_optim_states.pt... 7: [2022-11-24 21:27:02,777] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/bf16_zero_pp_rank_63_mp_rank_00_optim_states.pt... 7: [2022-11-24 21:27:02,777] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/bf16_zero_pp_rank_57_mp_rank_00_optim_states.pt... 7: [2022-11-24 21:27:02,777] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/bf16_zero_pp_rank_56_mp_rank_00_optim_states.pt... 7: [2022-11-24 21:27:02,777] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/bf16_zero_pp_rank_61_mp_rank_00_optim_states.pt... 
7: [2022-11-24 21:27:02,777] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/bf16_zero_pp_rank_59_mp_rank_00_optim_states.pt... 7: [2022-11-24 21:27:02,777] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/bf16_zero_pp_rank_62_mp_rank_00_optim_states.pt... 5: [2022-11-24 21:27:02,777] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/bf16_zero_pp_rank_41_mp_rank_00_optim_states.pt... 5: [2022-11-24 21:27:02,777] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/bf16_zero_pp_rank_40_mp_rank_00_optim_states.pt... 5: [2022-11-24 21:27:02,777] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/bf16_zero_pp_rank_46_mp_rank_00_optim_states.pt... 5: [2022-11-24 21:27:02,777] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/bf16_zero_pp_rank_47_mp_rank_00_optim_states.pt... 5: [2022-11-24 21:27:02,777] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/bf16_zero_pp_rank_45_mp_rank_00_optim_states.pt... 5: [2022-11-24 21:27:02,777] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/bf16_zero_pp_rank_43_mp_rank_00_optim_states.pt... 4: [2022-11-24 21:27:02,777] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/bf16_zero_pp_rank_37_mp_rank_00_optim_states.pt... 4: [2022-11-24 21:27:02,777] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/bf16_zero_pp_rank_32_mp_rank_00_optim_states.pt... 4: [2022-11-24 21:27:02,777] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/bf16_zero_pp_rank_38_mp_rank_00_optim_states.pt... 4: [2022-11-24 21:27:02,777] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/bf16_zero_pp_rank_33_mp_rank_00_optim_states.pt... 4: [2022-11-24 21:27:02,777] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/bf16_zero_pp_rank_34_mp_rank_00_optim_states.pt... 4: [2022-11-24 21:27:02,777] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/bf16_zero_pp_rank_39_mp_rank_00_optim_states.pt... 1: [2022-11-24 21:27:02,777] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/bf16_zero_pp_rank_11_mp_rank_00_optim_states.pt... 1: [2022-11-24 21:27:02,777] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/bf16_zero_pp_rank_10_mp_rank_00_optim_states.pt... 6: [2022-11-24 21:27:02,777] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/bf16_zero_pp_rank_53_mp_rank_00_optim_states.pt... 3: [2022-11-24 21:27:02,777] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/bf16_zero_pp_rank_26_mp_rank_00_optim_states.pt... 2: [2022-11-24 21:27:02,777] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/bf16_zero_pp_rank_20_mp_rank_00_optim_states.pt... 2: [2022-11-24 21:27:02,777] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/bf16_zero_pp_rank_21_mp_rank_00_optim_states.pt... 
2: [2022-11-24 21:27:02,777] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/bf16_zero_pp_rank_17_mp_rank_00_optim_states.pt... 2: [2022-11-24 21:27:02,777] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/bf16_zero_pp_rank_19_mp_rank_00_optim_states.pt... 2: [2022-11-24 21:27:02,777] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/bf16_zero_pp_rank_16_mp_rank_00_optim_states.pt... 2: [2022-11-24 21:27:02,777] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/bf16_zero_pp_rank_23_mp_rank_00_optim_states.pt... 7: [2022-11-24 21:27:02,777] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/bf16_zero_pp_rank_58_mp_rank_00_optim_states.pt... 7: [2022-11-24 21:27:02,777] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/bf16_zero_pp_rank_60_mp_rank_00_optim_states.pt... 5: [2022-11-24 21:27:02,777] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/bf16_zero_pp_rank_42_mp_rank_00_optim_states.pt... 4: [2022-11-24 21:27:02,777] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/bf16_zero_pp_rank_36_mp_rank_00_optim_states.pt... 4: [2022-11-24 21:27:02,777] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/bf16_zero_pp_rank_35_mp_rank_00_optim_states.pt... 6: [2022-11-24 21:27:02,777] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/bf16_zero_pp_rank_52_mp_rank_00_optim_states.pt... 3: [2022-11-24 21:27:02,777] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/bf16_zero_pp_rank_28_mp_rank_00_optim_states.pt... 2: [2022-11-24 21:27:02,777] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/bf16_zero_pp_rank_22_mp_rank_00_optim_states.pt... 2: [2022-11-24 21:27:02,777] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/bf16_zero_pp_rank_18_mp_rank_00_optim_states.pt... 5: [2022-11-24 21:27:02,777] [INFO] [torch_checkpoint_engine.py:15:save] [Torch] Saving checkpoints_1b1/global_step2891/bf16_zero_pp_rank_44_mp_rank_00_optim_states.pt... 6: [2022-11-24 21:27:03,016] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_50_mp_rank_00_optim_states.pt. 6: [2022-11-24 21:27:03,016] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_50_mp_rank_00_optim_states.pt 6: [2022-11-24 21:27:03,016] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 0: [2022-11-24 21:27:03,020] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt. 1: [2022-11-24 21:27:03,022] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_14_mp_rank_00_optim_states.pt. 1: [2022-11-24 21:27:03,022] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_14_mp_rank_00_optim_states.pt 1: [2022-11-24 21:27:03,022] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 
4: [2022-11-24 21:27:03,023] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_38_mp_rank_00_optim_states.pt. 4: [2022-11-24 21:27:03,023] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_38_mp_rank_00_optim_states.pt 4: [2022-11-24 21:27:03,023] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 3: [2022-11-24 21:27:03,029] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_31_mp_rank_00_optim_states.pt. 3: [2022-11-24 21:27:03,029] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_31_mp_rank_00_optim_states.pt 3: [2022-11-24 21:27:03,030] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 0: [2022-11-24 21:27:03,030] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_6_mp_rank_00_optim_states.pt. 0: [2022-11-24 21:27:03,030] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_6_mp_rank_00_optim_states.pt 0: [2022-11-24 21:27:03,030] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 0: [2022-11-24 21:27:03,032] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_2_mp_rank_00_optim_states.pt. 0: [2022-11-24 21:27:03,032] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_2_mp_rank_00_optim_states.pt 0: [2022-11-24 21:27:03,032] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 0: [2022-11-24 21:27:03,038] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_3_mp_rank_00_optim_states.pt. 0: [2022-11-24 21:27:03,038] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_3_mp_rank_00_optim_states.pt 0: [2022-11-24 21:27:03,038] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 1: [2022-11-24 21:27:03,038] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_8_mp_rank_00_optim_states.pt. 1: [2022-11-24 21:27:03,038] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_8_mp_rank_00_optim_states.pt 1: [2022-11-24 21:27:03,038] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 1: [2022-11-24 21:27:03,042] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_13_mp_rank_00_optim_states.pt. 1: [2022-11-24 21:27:03,042] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_13_mp_rank_00_optim_states.pt 1: [2022-11-24 21:27:03,042] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 3: [2022-11-24 21:27:03,043] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_29_mp_rank_00_optim_states.pt. 
3: [2022-11-24 21:27:03,043] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_27_mp_rank_00_optim_states.pt. 3: [2022-11-24 21:27:03,043] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_29_mp_rank_00_optim_states.pt 3: [2022-11-24 21:27:03,043] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_27_mp_rank_00_optim_states.pt 3: [2022-11-24 21:27:03,043] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 3: [2022-11-24 21:27:03,043] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 2: [2022-11-24 21:27:03,028] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_21_mp_rank_00_optim_states.pt. 5: [2022-11-24 21:27:03,022] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_41_mp_rank_00_optim_states.pt. 2: [2022-11-24 21:27:03,028] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_21_mp_rank_00_optim_states.pt 5: [2022-11-24 21:27:03,022] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_41_mp_rank_00_optim_states.pt 2: [2022-11-24 21:27:03,028] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 5: [2022-11-24 21:27:03,022] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 2: [2022-11-24 21:27:03,038] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_17_mp_rank_00_optim_states.pt. 2: [2022-11-24 21:27:03,038] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_17_mp_rank_00_optim_states.pt 2: [2022-11-24 21:27:03,038] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 2: [2022-11-24 21:27:03,039] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_20_mp_rank_00_optim_states.pt. 2: [2022-11-24 21:27:03,039] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_19_mp_rank_00_optim_states.pt. 2: [2022-11-24 21:27:03,039] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_20_mp_rank_00_optim_states.pt 2: [2022-11-24 21:27:03,039] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_19_mp_rank_00_optim_states.pt 2: [2022-11-24 21:27:03,039] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 2: [2022-11-24 21:27:03,039] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 2: [2022-11-24 21:27:03,046] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_22_mp_rank_00_optim_states.pt. 
2: [2022-11-24 21:27:03,046] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_22_mp_rank_00_optim_states.pt 2: [2022-11-24 21:27:03,046] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 1: [2022-11-24 21:27:03,051] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_15_mp_rank_00_optim_states.pt. 1: [2022-11-24 21:27:03,051] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_15_mp_rank_00_optim_states.pt 1: [2022-11-24 21:27:03,051] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 4: [2022-11-24 21:27:03,054] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_37_mp_rank_00_optim_states.pt. 4: [2022-11-24 21:27:03,054] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_37_mp_rank_00_optim_states.pt 4: [2022-11-24 21:27:03,054] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 4: [2022-11-24 21:27:03,054] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_33_mp_rank_00_optim_states.pt. 4: [2022-11-24 21:27:03,054] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_33_mp_rank_00_optim_states.pt 4: [2022-11-24 21:27:03,054] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 4: [2022-11-24 21:27:03,054] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_35_mp_rank_00_optim_states.pt. 4: [2022-11-24 21:27:03,055] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_35_mp_rank_00_optim_states.pt 4: [2022-11-24 21:27:03,055] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 4: [2022-11-24 21:27:03,055] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_39_mp_rank_00_optim_states.pt. 4: [2022-11-24 21:27:03,055] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_39_mp_rank_00_optim_states.pt 4: [2022-11-24 21:27:03,055] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 4: [2022-11-24 21:27:03,056] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_34_mp_rank_00_optim_states.pt. 4: [2022-11-24 21:27:03,056] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_32_mp_rank_00_optim_states.pt. 4: [2022-11-24 21:27:03,056] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_36_mp_rank_00_optim_states.pt. 
4: [2022-11-24 21:27:03,056] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_34_mp_rank_00_optim_states.pt 4: [2022-11-24 21:27:03,056] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_32_mp_rank_00_optim_states.pt 4: [2022-11-24 21:27:03,056] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_36_mp_rank_00_optim_states.pt 4: [2022-11-24 21:27:03,056] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 4: [2022-11-24 21:27:03,056] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 4: [2022-11-24 21:27:03,056] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 0: [2022-11-24 21:27:03,056] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_1_mp_rank_00_optim_states.pt. 0: [2022-11-24 21:27:03,056] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_1_mp_rank_00_optim_states.pt 0: [2022-11-24 21:27:03,056] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 1: [2022-11-24 21:27:03,057] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_9_mp_rank_00_optim_states.pt. 1: [2022-11-24 21:27:03,057] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_9_mp_rank_00_optim_states.pt 1: [2022-11-24 21:27:03,057] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 0: [2022-11-24 21:27:03,063] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_7_mp_rank_00_optim_states.pt. 0: [2022-11-24 21:27:03,063] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_7_mp_rank_00_optim_states.pt 0: [2022-11-24 21:27:03,063] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 0: [2022-11-24 21:27:03,063] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_4_mp_rank_00_optim_states.pt. 0: [2022-11-24 21:27:03,063] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_4_mp_rank_00_optim_states.pt 0: [2022-11-24 21:27:03,063] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 6: [2022-11-24 21:27:03,068] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_55_mp_rank_00_optim_states.pt. 6: [2022-11-24 21:27:03,068] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_55_mp_rank_00_optim_states.pt 6: [2022-11-24 21:27:03,068] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 3: [2022-11-24 21:27:03,068] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_24_mp_rank_00_optim_states.pt. 
3: [2022-11-24 21:27:03,068] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_26_mp_rank_00_optim_states.pt. 3: [2022-11-24 21:27:03,068] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_24_mp_rank_00_optim_states.pt 3: [2022-11-24 21:27:03,068] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_28_mp_rank_00_optim_states.pt. 3: [2022-11-24 21:27:03,068] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_26_mp_rank_00_optim_states.pt 3: [2022-11-24 21:27:03,068] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 3: [2022-11-24 21:27:03,068] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 3: [2022-11-24 21:27:03,068] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_28_mp_rank_00_optim_states.pt 3: [2022-11-24 21:27:03,068] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 1: [2022-11-24 21:27:03,069] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_10_mp_rank_00_optim_states.pt. 1: [2022-11-24 21:27:03,069] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_10_mp_rank_00_optim_states.pt 1: [2022-11-24 21:27:03,069] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 1: [2022-11-24 21:27:03,070] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_11_mp_rank_00_optim_states.pt. 1: [2022-11-24 21:27:03,070] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_11_mp_rank_00_optim_states.pt 1: [2022-11-24 21:27:03,070] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 7: [2022-11-24 21:27:03,072] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_56_mp_rank_00_optim_states.pt. 7: [2022-11-24 21:27:03,073] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_56_mp_rank_00_optim_states.pt 7: [2022-11-24 21:27:03,073] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 3: [2022-11-24 21:27:03,077] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_25_mp_rank_00_optim_states.pt. 3: [2022-11-24 21:27:03,077] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_25_mp_rank_00_optim_states.pt 3: [2022-11-24 21:27:03,077] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 3: [2022-11-24 21:27:03,077] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_30_mp_rank_00_optim_states.pt. 
3: [2022-11-24 21:27:03,078] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_30_mp_rank_00_optim_states.pt 3: [2022-11-24 21:27:03,078] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 0: [2022-11-24 21:27:03,087] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_5_mp_rank_00_optim_states.pt. 0: [2022-11-24 21:27:03,087] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_5_mp_rank_00_optim_states.pt 0: [2022-11-24 21:27:03,087] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 5: [2022-11-24 21:27:03,054] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_44_mp_rank_00_optim_states.pt. 2: [2022-11-24 21:27:03,055] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_18_mp_rank_00_optim_states.pt. 2: [2022-11-24 21:27:03,055] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_18_mp_rank_00_optim_states.pt 5: [2022-11-24 21:27:03,054] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_44_mp_rank_00_optim_states.pt 2: [2022-11-24 21:27:03,056] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 5: [2022-11-24 21:27:03,055] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 2: [2022-11-24 21:27:03,056] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_16_mp_rank_00_optim_states.pt. 2: [2022-11-24 21:27:03,056] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_16_mp_rank_00_optim_states.pt 2: [2022-11-24 21:27:03,056] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 2: [2022-11-24 21:27:03,064] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_23_mp_rank_00_optim_states.pt. 2: [2022-11-24 21:27:03,065] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_23_mp_rank_00_optim_states.pt 2: [2022-11-24 21:27:03,065] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 5: [2022-11-24 21:27:03,102] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_40_mp_rank_00_optim_states.pt. 5: [2022-11-24 21:27:03,102] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_43_mp_rank_00_optim_states.pt. 5: [2022-11-24 21:27:03,102] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_40_mp_rank_00_optim_states.pt 5: [2022-11-24 21:27:03,102] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_43_mp_rank_00_optim_states.pt 5: [2022-11-24 21:27:03,102] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 
5: [2022-11-24 21:27:03,102] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 1: [2022-11-24 21:27:03,108] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_12_mp_rank_00_optim_states.pt. 1: [2022-11-24 21:27:03,108] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_12_mp_rank_00_optim_states.pt 1: [2022-11-24 21:27:03,109] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 7: [2022-11-24 21:27:03,110] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_60_mp_rank_00_optim_states.pt. 7: [2022-11-24 21:27:03,110] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_60_mp_rank_00_optim_states.pt 7: [2022-11-24 21:27:03,110] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 6: [2022-11-24 21:27:03,149] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_51_mp_rank_00_optim_states.pt. 6: [2022-11-24 21:27:03,149] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_51_mp_rank_00_optim_states.pt 6: [2022-11-24 21:27:03,149] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 0: [2022-11-24 21:27:03,153] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_0_mp_rank_00_optim_states.pt 0: [2022-11-24 21:27:03,153] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 7: [2022-11-24 21:27:03,170] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_58_mp_rank_00_optim_states.pt. 7: [2022-11-24 21:27:03,170] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_58_mp_rank_00_optim_states.pt 7: [2022-11-24 21:27:03,170] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 7: [2022-11-24 21:27:03,176] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_57_mp_rank_00_optim_states.pt. 7: [2022-11-24 21:27:03,176] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_57_mp_rank_00_optim_states.pt 7: [2022-11-24 21:27:03,176] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 6: [2022-11-24 21:27:03,187] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_53_mp_rank_00_optim_states.pt. 6: [2022-11-24 21:27:03,188] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_53_mp_rank_00_optim_states.pt 6: [2022-11-24 21:27:03,188] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 7: [2022-11-24 21:27:03,191] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_61_mp_rank_00_optim_states.pt. 
7: [2022-11-24 21:27:03,191] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_61_mp_rank_00_optim_states.pt 7: [2022-11-24 21:27:03,191] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 5: [2022-11-24 21:27:03,188] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_42_mp_rank_00_optim_states.pt. 5: [2022-11-24 21:27:03,188] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_42_mp_rank_00_optim_states.pt 5: [2022-11-24 21:27:03,188] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 6: [2022-11-24 21:27:03,224] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_52_mp_rank_00_optim_states.pt. 6: [2022-11-24 21:27:03,224] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_52_mp_rank_00_optim_states.pt 6: [2022-11-24 21:27:03,224] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 5: [2022-11-24 21:27:03,219] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_46_mp_rank_00_optim_states.pt. 5: [2022-11-24 21:27:03,219] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_46_mp_rank_00_optim_states.pt 5: [2022-11-24 21:27:03,219] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 7: [2022-11-24 21:27:03,232] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_59_mp_rank_00_optim_states.pt. 7: [2022-11-24 21:27:03,232] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_59_mp_rank_00_optim_states.pt 7: [2022-11-24 21:27:03,232] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 6: [2022-11-24 21:27:03,236] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_49_mp_rank_00_optim_states.pt. 6: [2022-11-24 21:27:03,236] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_49_mp_rank_00_optim_states.pt 6: [2022-11-24 21:27:03,237] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 7: [2022-11-24 21:27:03,291] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_63_mp_rank_00_optim_states.pt. 7: [2022-11-24 21:27:03,291] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_63_mp_rank_00_optim_states.pt 7: [2022-11-24 21:27:03,291] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 6: [2022-11-24 21:27:03,305] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_48_mp_rank_00_optim_states.pt. 
6: [2022-11-24 21:27:03,306] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_48_mp_rank_00_optim_states.pt 6: [2022-11-24 21:27:03,306] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 5: [2022-11-24 21:27:03,323] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_45_mp_rank_00_optim_states.pt. 5: [2022-11-24 21:27:03,323] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_45_mp_rank_00_optim_states.pt 5: [2022-11-24 21:27:03,324] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 6: [2022-11-24 21:27:03,359] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_54_mp_rank_00_optim_states.pt. 6: [2022-11-24 21:27:03,359] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_54_mp_rank_00_optim_states.pt 6: [2022-11-24 21:27:03,359] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 7: [2022-11-24 21:27:03,370] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_62_mp_rank_00_optim_states.pt. 7: [2022-11-24 21:27:03,370] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_62_mp_rank_00_optim_states.pt 7: [2022-11-24 21:27:03,370] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 5: [2022-11-24 21:27:03,374] [INFO] [torch_checkpoint_engine.py:17:save] [Torch] Saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_47_mp_rank_00_optim_states.pt. 5: [2022-11-24 21:27:03,374] [INFO] [engine.py:3213:_save_zero_checkpoint] bf16_zero checkpoint saved checkpoints_1b1/global_step2891/bf16_zero_pp_rank_47_mp_rank_00_optim_states.pt 5: [2022-11-24 21:27:03,374] [INFO] [torch_checkpoint_engine.py:27:commit] [Torch] Checkpoint global_step2891 is ready now! 0: successfully saved checkpoint at iteration 2891 to checkpoints_1b1 7: ------------------------------------------------------------------------------------------------------------ 7: test loss at the end of training for test data | lm loss value: 2.712490E+00 | lm loss PPL: 1.506675E+01 | 7: ------------------------------------------------------------------------------------------------------------ END 2068467: Thu Nov 24 21:27:07 EET 2022
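
Editor's note: the run ends with the step-2891 checkpoint committed on every rank and a final evaluation on the test split (lm loss 2.712490, perplexity 1.506675E+01). As a quick sanity check of those numbers and of the checkpoint layout named in the save messages above, here is a minimal Python sketch. It is not part of the run; it assumes the shard files under checkpoints_1b1/global_step2891/ can be read with plain torch.load (which may not hold if a shard pickles DeepSpeed-specific classes), and the glob patterns simply mirror the file names printed by the checkpoint engine.

    import glob
    import math

    import torch

    # Values copied from the "test loss at the end of training" line above.
    lm_loss = 2.712490
    print(f"PPL = exp({lm_loss}) = {math.exp(lm_loss):.5f}")  # ~= 15.06675, matching the logged PPL

    ckpt_dir = "checkpoints_1b1/global_step2891"

    # Per-pipeline-layer parameter shards (the log shows e.g. layer_26 ... layer_28
    # and layer_30; layer_29 is skipped in the save messages).
    for path in sorted(glob.glob(f"{ckpt_dir}/layer_*-model_00-model_states.pt")):
        state = torch.load(path, map_location="cpu")
        print(path, list(state)[:3])  # first few entry names in each shard

    # One bf16 ZeRO optimizer-state shard per data-parallel rank
    # (ranks 0 through 63 appear in the save messages above).
    optim_shards = glob.glob(f"{ckpt_dir}/bf16_zero_pp_rank_*_mp_rank_00_optim_states.pt")
    print(f"{len(optim_shards)} optimizer shards found")

The exp(loss) check reproduces the logged perplexity, and the shard listing should show the same file names the checkpoint engine reports saving for global_step2891.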