[2024-11-10 09:41:58,396][00444] Saving configuration to /content/train_dir/default_experiment/config.json...
[2024-11-10 09:41:58,401][00444] Rollout worker 0 uses device cpu
[2024-11-10 09:41:58,402][00444] Rollout worker 1 uses device cpu
[2024-11-10 09:41:58,403][00444] Rollout worker 2 uses device cpu
[2024-11-10 09:41:58,405][00444] Rollout worker 3 uses device cpu
[2024-11-10 09:41:58,406][00444] Rollout worker 4 uses device cpu
[2024-11-10 09:41:58,407][00444] Rollout worker 5 uses device cpu
[2024-11-10 09:41:58,408][00444] Rollout worker 6 uses device cpu
[2024-11-10 09:41:58,410][00444] Rollout worker 7 uses device cpu
[2024-11-10 09:41:58,558][00444] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2024-11-10 09:41:58,560][00444] InferenceWorker_p0-w0: min num requests: 2
[2024-11-10 09:41:58,593][00444] Starting all processes...
[2024-11-10 09:41:58,595][00444] Starting process learner_proc0
[2024-11-10 09:41:58,643][00444] Starting all processes...
[2024-11-10 09:41:58,653][00444] Starting process inference_proc0-0
[2024-11-10 09:41:58,653][00444] Starting process rollout_proc0
[2024-11-10 09:41:58,655][00444] Starting process rollout_proc1
[2024-11-10 09:41:58,655][00444] Starting process rollout_proc2
[2024-11-10 09:41:58,655][00444] Starting process rollout_proc3
[2024-11-10 09:41:58,655][00444] Starting process rollout_proc4
[2024-11-10 09:41:58,655][00444] Starting process rollout_proc5
[2024-11-10 09:41:58,655][00444] Starting process rollout_proc6
[2024-11-10 09:41:58,655][00444] Starting process rollout_proc7
[2024-11-10 09:42:09,888][04576] Worker 4 uses CPU cores [0]
[2024-11-10 09:42:10,048][04578] Worker 6 uses CPU cores [0]
[2024-11-10 09:42:10,053][04573] Worker 1 uses CPU cores [1]
[2024-11-10 09:42:10,125][04571] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2024-11-10 09:42:10,127][04571] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for inference process 0
[2024-11-10 09:42:10,155][04574] Worker 3 uses CPU cores [1]
[2024-11-10 09:42:10,189][04571] Num visible devices: 1
[2024-11-10 09:42:10,190][04558] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2024-11-10 09:42:10,194][04558] Set environment var CUDA_VISIBLE_DEVICES to '0' (GPU indices [0]) for learning process 0
[2024-11-10 09:42:10,233][04575] Worker 2 uses CPU cores [0]
[2024-11-10 09:42:10,239][04558] Num visible devices: 1
[2024-11-10 09:42:10,243][04572] Worker 0 uses CPU cores [0]
[2024-11-10 09:42:10,247][04577] Worker 5 uses CPU cores [1]
[2024-11-10 09:42:10,249][04558] Starting seed is not provided
[2024-11-10 09:42:10,249][04558] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2024-11-10 09:42:10,249][04558] Initializing actor-critic model on device cuda:0
[2024-11-10 09:42:10,250][04558] RunningMeanStd input shape: (3, 72, 128)
[2024-11-10 09:42:10,250][04579] Worker 7 uses CPU cores [1]
[2024-11-10 09:42:10,251][04558] RunningMeanStd input shape: (1,)
[2024-11-10 09:42:10,266][04558] ConvEncoder: input_channels=3
[2024-11-10 09:42:10,417][04558] Conv encoder output size: 512
[2024-11-10 09:42:10,418][04558] Policy head output size: 512
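The two `RunningMeanStd` entries above track streaming statistics used to normalize observations (shape `(3, 72, 128)`) and returns (shape `(1,)`). A minimal sketch of such a tracker using Welford's online algorithm — the update rule here is an assumption for illustration; Sample Factory's in-place implementation differs in detail:

```python
class RunningMeanStd:
    """Streaming mean/variance tracker (Welford's algorithm) --
    a sketch of the kind of normalizer the log refers to."""

    def __init__(self):
        self.count = 0
        self.mean = 0.0
        self.m2 = 0.0  # sum of squared deviations from the running mean

    def update(self, x: float) -> None:
        self.count += 1
        delta = x - self.mean
        self.mean += delta / self.count
        self.m2 += delta * (x - self.mean)

    @property
    def var(self) -> float:
        # Population variance of everything seen so far.
        return self.m2 / self.count if self.count else 0.0


rs = RunningMeanStd()
for x in [1.0, 2.0, 3.0, 4.0]:
    rs.update(x)
# rs.mean -> 2.5, rs.var -> 1.25
```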
[2024-11-10 09:42:10,432][04558] Created Actor Critic model with architecture:
[2024-11-10 09:42:10,433][04558] ActorCriticSharedWeights(
  (obs_normalizer): ObservationNormalizer(
    (running_mean_std): RunningMeanStdDictInPlace(
      (running_mean_std): ModuleDict(
        (obs): RunningMeanStdInPlace()
      )
    )
  )
  (returns_normalizer): RecursiveScriptModule(original_name=RunningMeanStdInPlace)
  (encoder): VizdoomEncoder(
    (basic_encoder): ConvEncoder(
      (enc): RecursiveScriptModule(
        original_name=ConvEncoderImpl
        (conv_head): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Conv2d)
          (1): RecursiveScriptModule(original_name=ELU)
          (2): RecursiveScriptModule(original_name=Conv2d)
          (3): RecursiveScriptModule(original_name=ELU)
          (4): RecursiveScriptModule(original_name=Conv2d)
          (5): RecursiveScriptModule(original_name=ELU)
        )
        (mlp_layers): RecursiveScriptModule(
          original_name=Sequential
          (0): RecursiveScriptModule(original_name=Linear)
          (1): RecursiveScriptModule(original_name=ELU)
        )
      )
    )
  )
  (core): ModelCoreRNN(
    (core): GRU(512, 512)
  )
  (decoder): MlpDecoder(
    (mlp): Identity()
  )
  (critic_linear): Linear(in_features=512, out_features=1, bias=True)
  (action_parameterization): ActionParameterizationDefault(
    (distribution_linear): Linear(in_features=512, out_features=5, bias=True)
  )
)
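The printed architecture can be approximated in plain PyTorch: three Conv2d+ELU blocks, a Linear+ELU projection to 512 features, a GRU(512, 512) core, a critic head (512 → 1), and an action head (512 → 5). This is a hedged sketch, not the library's code — the conv kernel sizes and strides are assumptions (typical Sample Factory defaults; the log only shows layer types), and `LazyLinear` is used so the flattened conv size need not be hard-coded:

```python
import torch
import torch.nn as nn


class ActorCriticSketch(nn.Module):
    """Rough stand-in for the logged ActorCriticSharedWeights model.
    Kernel sizes/strides below are assumptions, not read from the log."""

    def __init__(self, num_actions: int = 5):
        super().__init__()
        self.conv_head = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ELU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ELU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2), nn.ELU(),
        )
        # Projects the flattened conv features to the logged 512-dim size.
        self.mlp_layers = nn.Sequential(nn.Flatten(), nn.LazyLinear(512), nn.ELU())
        self.core = nn.GRU(512, 512)
        self.critic_linear = nn.Linear(512, 1)
        self.distribution_linear = nn.Linear(512, 5 if num_actions is None else num_actions)

    def forward(self, obs, rnn_state=None):
        # obs: (batch, 3, 72, 128) normalized observations.
        x = self.mlp_layers(self.conv_head(obs))
        x, rnn_state = self.core(x.unsqueeze(0), rnn_state)  # seq length 1
        x = x.squeeze(0)
        return self.critic_linear(x), self.distribution_linear(x), rnn_state


model = ActorCriticSketch()
value, logits, state = model(torch.zeros(4, 3, 72, 128))
```

Forwarding a batch of 4 dummy frames yields a `(4, 1)` value estimate and `(4, 5)` action logits, matching the two output heads in the printout.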
[2024-11-10 09:42:14,487][04558] Using optimizer <class 'torch.optim.adam.Adam'>
[2024-11-10 09:42:14,488][04558] No checkpoints found
[2024-11-10 09:42:14,488][04558] Did not load from checkpoint, starting from scratch!
[2024-11-10 09:42:14,489][04558] Initialized policy 0 weights for model version 0
[2024-11-10 09:42:14,491][04558] LearnerWorker_p0 finished initialization!
[2024-11-10 09:42:14,492][04558] Using GPUs [0] for process 0 (actually maps to GPUs [0])
[2024-11-10 09:42:14,732][04571] RunningMeanStd input shape: (3, 72, 128)
[2024-11-10 09:42:14,734][04571] RunningMeanStd input shape: (1,)
[2024-11-10 09:42:14,750][04571] ConvEncoder: input_channels=3
[2024-11-10 09:42:14,851][04571] Conv encoder output size: 512
[2024-11-10 09:42:14,852][04571] Policy head output size: 512
[2024-11-10 09:42:15,018][00444] Fps is (10 sec: nan, 60 sec: nan, 300 sec: nan). Total num frames: 0. Throughput: 0: nan. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2024-11-10 09:42:16,376][00444] Inference worker 0-0 is ready!
[2024-11-10 09:42:16,377][00444] All inference workers are ready! Signal rollout workers to start!
[2024-11-10 09:42:16,502][04573] Doom resolution: 160x120, resize resolution: (128, 72)
[2024-11-10 09:42:16,515][04576] Doom resolution: 160x120, resize resolution: (128, 72)
[2024-11-10 09:42:16,526][04577] Doom resolution: 160x120, resize resolution: (128, 72)
[2024-11-10 09:42:16,528][04578] Doom resolution: 160x120, resize resolution: (128, 72)
[2024-11-10 09:42:16,531][04575] Doom resolution: 160x120, resize resolution: (128, 72)
[2024-11-10 09:42:16,536][04574] Doom resolution: 160x120, resize resolution: (128, 72)
[2024-11-10 09:42:16,537][04579] Doom resolution: 160x120, resize resolution: (128, 72)
[2024-11-10 09:42:16,542][04572] Doom resolution: 160x120, resize resolution: (128, 72)
[2024-11-10 09:42:17,684][04572] Decorrelating experience for 0 frames...
[2024-11-10 09:42:17,684][04573] Decorrelating experience for 0 frames...
[2024-11-10 09:42:17,685][04574] Decorrelating experience for 0 frames...
[2024-11-10 09:42:18,287][04572] Decorrelating experience for 32 frames...
[2024-11-10 09:42:18,550][00444] Heartbeat connected on Batcher_0
[2024-11-10 09:42:18,554][00444] Heartbeat connected on LearnerWorker_p0
[2024-11-10 09:42:18,606][00444] Heartbeat connected on InferenceWorker_p0-w0
[2024-11-10 09:42:18,806][04574] Decorrelating experience for 32 frames...
[2024-11-10 09:42:18,809][04573] Decorrelating experience for 32 frames...
[2024-11-10 09:42:18,970][04572] Decorrelating experience for 64 frames...
[2024-11-10 09:42:19,727][04572] Decorrelating experience for 96 frames...
[2024-11-10 09:42:19,836][04574] Decorrelating experience for 64 frames...
[2024-11-10 09:42:19,871][00444] Heartbeat connected on RolloutWorker_w0
[2024-11-10 09:42:20,012][04573] Decorrelating experience for 64 frames...
[2024-11-10 09:42:20,018][00444] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 0.0. Samples: 0. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2024-11-10 09:42:20,806][04574] Decorrelating experience for 96 frames...
[2024-11-10 09:42:20,953][00444] Heartbeat connected on RolloutWorker_w3
[2024-11-10 09:42:20,971][04573] Decorrelating experience for 96 frames...
[2024-11-10 09:42:21,095][00444] Heartbeat connected on RolloutWorker_w1
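The "Decorrelating experience" lines show each rollout worker stepping its environments through a staggered warm-up (0, 32, 64, 96 frames) before regular collection, so workers do not all hit episode boundaries in lockstep. A sketch of that schedule — the 32-frame chunk size is read off the log, not a documented constant:

```python
def decorrelation_schedule(max_frames: int = 96, chunk: int = 32) -> list[int]:
    """Warm-up frame counts a worker advances through, in order,
    before normal experience collection starts (a sketch inferred
    from the log, not Sample Factory's actual code)."""
    return list(range(0, max_frames + 1, chunk))


# Each worker logs one "Decorrelating experience for N frames..." line
# per entry in this schedule.
stages = decorrelation_schedule()
# stages -> [0, 32, 64, 96]
```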
[2024-11-10 09:42:25,018][00444] Fps is (10 sec: 0.0, 60 sec: 0.0, 300 sec: 0.0). Total num frames: 0. Throughput: 0: 1.6. Samples: 16. Policy #0 lag: (min: -1.0, avg: -1.0, max: -1.0)
[2024-11-10 09:42:25,026][00444] Avg episode reward: [(0, '2.992')]
[2024-11-10 09:42:26,038][04558] Signal inference workers to stop experience collection...
[2024-11-10 09:42:26,052][04571] InferenceWorker_p0-w0: stopping experience collection
[2024-11-10 09:42:27,518][04558] Signal inference workers to resume experience collection...
[2024-11-10 09:42:27,520][04571] InferenceWorker_p0-w0: resuming experience collection
[2024-11-10 09:42:30,018][00444] Fps is (10 sec: 1228.8, 60 sec: 819.2, 300 sec: 819.2). Total num frames: 12288. Throughput: 0: 168.5. Samples: 2528. Policy #0 lag: (min: 0.0, avg: 0.0, max: 0.0)
[2024-11-10 09:42:30,024][00444] Avg episode reward: [(0, '3.827')]
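Each status line reports throughput over trailing 10/60/300-second windows. The 10-second figure is consistent with a simple frame delta divided by the window length: 12288 frames gained in the first 10-second window gives the logged 1228.8 FPS. A minimal sketch of that bookkeeping (assuming plain deltas; the real averaging may differ):

```python
def fps_over_window(frames_then: int, frames_now: int, window_sec: float) -> float:
    """Trailing-window throughput: frames gained divided by window length
    (a sketch of how the logged FPS numbers can be reproduced)."""
    return (frames_now - frames_then) / window_sec


# First non-empty 10 s window from the log: 0 -> 12288 frames.
rate = fps_over_window(0, 12288, 10.0)
# rate -> 1228.8, matching the "(10 sec: 1228.8, ...)" entry above
```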
[2024-11-10 09:42:35,019][00444] Fps is (10 sec: 2866.8, 60 sec: 1433.5, 300 sec: 1433.5). Total num frames: 28672. Throughput: 0: 371.0. Samples: 7420. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2024-11-10 09:42:35,025][00444] Avg episode reward: [(0, '4.319')]
[2024-11-10 09:42:38,242][04571] Updated weights for policy 0, policy_version 10 (0.0356)
[2024-11-10 09:42:40,018][00444] Fps is (10 sec: 3276.8, 60 sec: 1802.2, 300 sec: 1802.2). Total num frames: 45056. Throughput: 0: 382.5. Samples: 9562. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2024-11-10 09:42:40,021][00444] Avg episode reward: [(0, '4.659')]
[2024-11-10 09:42:45,018][00444] Fps is (10 sec: 3686.8, 60 sec: 2184.5, 300 sec: 2184.5). Total num frames: 65536. Throughput: 0: 522.9. Samples: 15688. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2024-11-10 09:42:45,021][00444] Avg episode reward: [(0, '4.511')]
[2024-11-10 09:42:49,102][04571] Updated weights for policy 0, policy_version 20 (0.0013)
[2024-11-10 09:42:50,018][00444] Fps is (10 sec: 3686.4, 60 sec: 2340.6, 300 sec: 2340.6). Total num frames: 81920. Throughput: 0: 593.2. Samples: 20762. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2024-11-10 09:42:50,022][00444] Avg episode reward: [(0, '4.518')]
[2024-11-10 09:42:55,018][00444] Fps is (10 sec: 3276.8, 60 sec: 2457.6, 300 sec: 2457.6). Total num frames: 98304. Throughput: 0: 577.4. Samples: 23094. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-11-10 09:42:55,021][00444] Avg episode reward: [(0, '4.473')]
[2024-11-10 09:43:00,018][00444] Fps is (10 sec: 3686.4, 60 sec: 2639.6, 300 sec: 2639.6). Total num frames: 118784. Throughput: 0: 648.2. Samples: 29168. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2024-11-10 09:43:00,021][00444] Avg episode reward: [(0, '4.492')]
[2024-11-10 09:43:00,035][04571] Updated weights for policy 0, policy_version 30 (0.0012)
[2024-11-10 09:43:00,039][04558] Saving new best policy, reward=4.492!
[2024-11-10 09:43:05,021][00444] Fps is (10 sec: 3685.2, 60 sec: 2703.2, 300 sec: 2703.2). Total num frames: 135168. Throughput: 0: 755.8. Samples: 34012. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-11-10 09:43:05,024][00444] Avg episode reward: [(0, '4.525')]
[2024-11-10 09:43:05,037][04558] Saving new best policy, reward=4.525!
[2024-11-10 09:43:10,018][00444] Fps is (10 sec: 3276.8, 60 sec: 2755.5, 300 sec: 2755.5). Total num frames: 151552. Throughput: 0: 807.8. Samples: 36366. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-11-10 09:43:10,020][00444] Avg episode reward: [(0, '4.532')]
[2024-11-10 09:43:10,102][04558] Saving new best policy, reward=4.532!
[2024-11-10 09:43:12,075][04571] Updated weights for policy 0, policy_version 40 (0.0013)
[2024-11-10 09:43:15,020][00444] Fps is (10 sec: 3686.8, 60 sec: 2867.1, 300 sec: 2867.1). Total num frames: 172032. Throughput: 0: 886.1. Samples: 42404. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2024-11-10 09:43:15,026][00444] Avg episode reward: [(0, '4.572')]
[2024-11-10 09:43:15,035][04558] Saving new best policy, reward=4.572!
[2024-11-10 09:43:20,018][00444] Fps is (10 sec: 3686.4, 60 sec: 3140.3, 300 sec: 2898.7). Total num frames: 188416. Throughput: 0: 880.6. Samples: 47046. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-11-10 09:43:20,023][00444] Avg episode reward: [(0, '4.459')]
[2024-11-10 09:43:24,259][04571] Updated weights for policy 0, policy_version 50 (0.0017)
[2024-11-10 09:43:25,018][00444] Fps is (10 sec: 3277.5, 60 sec: 3413.3, 300 sec: 2925.7). Total num frames: 204800. Throughput: 0: 888.1. Samples: 49528. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2024-11-10 09:43:25,021][00444] Avg episode reward: [(0, '4.313')]
[2024-11-10 09:43:30,018][00444] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3003.7). Total num frames: 225280. Throughput: 0: 885.4. Samples: 55530. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-11-10 09:43:30,021][00444] Avg episode reward: [(0, '4.321')]
[2024-11-10 09:43:35,020][00444] Fps is (10 sec: 3685.6, 60 sec: 3549.8, 300 sec: 3020.7). Total num frames: 241664. Throughput: 0: 877.6. Samples: 60256. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2024-11-10 09:43:35,023][00444] Avg episode reward: [(0, '4.506')]
[2024-11-10 09:43:36,145][04571] Updated weights for policy 0, policy_version 60 (0.0017)
[2024-11-10 09:43:40,018][00444] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3035.9). Total num frames: 258048. Throughput: 0: 884.4. Samples: 62894. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2024-11-10 09:43:40,023][00444] Avg episode reward: [(0, '4.523')]
[2024-11-10 09:43:45,018][00444] Fps is (10 sec: 3687.2, 60 sec: 3549.9, 300 sec: 3094.8). Total num frames: 278528. Throughput: 0: 886.1. Samples: 69042. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2024-11-10 09:43:45,021][00444] Avg episode reward: [(0, '4.637')]
[2024-11-10 09:43:45,035][04558] Saving new best policy, reward=4.637!
[2024-11-10 09:43:46,609][04571] Updated weights for policy 0, policy_version 70 (0.0012)
[2024-11-10 09:43:50,018][00444] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3104.3). Total num frames: 294912. Throughput: 0: 880.6. Samples: 73638. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2024-11-10 09:43:50,028][00444] Avg episode reward: [(0, '4.769')]
[2024-11-10 09:43:50,032][04558] Saving new best policy, reward=4.769!
[2024-11-10 09:43:55,018][00444] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3113.0). Total num frames: 311296. Throughput: 0: 886.1. Samples: 76242. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2024-11-10 09:43:55,020][00444] Avg episode reward: [(0, '4.805')]
[2024-11-10 09:43:55,046][04558] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000077_315392.pth...
[2024-11-10 09:43:55,151][04558] Saving new best policy, reward=4.805!
[2024-11-10 09:43:58,387][04571] Updated weights for policy 0, policy_version 80 (0.0012)
[2024-11-10 09:44:00,018][00444] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3159.8). Total num frames: 331776. Throughput: 0: 883.3. Samples: 82150. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2024-11-10 09:44:00,021][00444] Avg episode reward: [(0, '4.710')]
[2024-11-10 09:44:05,018][00444] Fps is (10 sec: 3276.8, 60 sec: 3481.8, 300 sec: 3127.9). Total num frames: 344064. Throughput: 0: 879.3. Samples: 86616. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-11-10 09:44:05,023][00444] Avg episode reward: [(0, '4.407')]
[2024-11-10 09:44:10,019][00444] Fps is (10 sec: 3276.4, 60 sec: 3549.8, 300 sec: 3169.9). Total num frames: 364544. Throughput: 0: 884.6. Samples: 89334. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2024-11-10 09:44:10,021][00444] Avg episode reward: [(0, '4.488')]
[2024-11-10 09:44:10,292][04571] Updated weights for policy 0, policy_version 90 (0.0011)
[2024-11-10 09:44:15,018][00444] Fps is (10 sec: 4096.0, 60 sec: 3550.0, 300 sec: 3208.5). Total num frames: 385024. Throughput: 0: 888.8. Samples: 95524. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-11-10 09:44:15,028][00444] Avg episode reward: [(0, '4.349')]
[2024-11-10 09:44:20,018][00444] Fps is (10 sec: 3277.1, 60 sec: 3481.6, 300 sec: 3178.5). Total num frames: 397312. Throughput: 0: 881.8. Samples: 99934. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2024-11-10 09:44:20,021][00444] Avg episode reward: [(0, '4.285')]
[2024-11-10 09:44:22,309][04571] Updated weights for policy 0, policy_version 100 (0.0015)
[2024-11-10 09:44:25,018][00444] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3213.8). Total num frames: 417792. Throughput: 0: 885.8. Samples: 102756. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2024-11-10 09:44:25,021][00444] Avg episode reward: [(0, '4.345')]
[2024-11-10 09:44:30,021][00444] Fps is (10 sec: 4094.7, 60 sec: 3549.7, 300 sec: 3246.4). Total num frames: 438272. Throughput: 0: 883.4. Samples: 108796. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-11-10 09:44:30,025][00444] Avg episode reward: [(0, '4.400')]
[2024-11-10 09:44:33,779][04571] Updated weights for policy 0, policy_version 110 (0.0012)
[2024-11-10 09:44:35,018][00444] Fps is (10 sec: 3276.8, 60 sec: 3481.7, 300 sec: 3218.3). Total num frames: 450560. Throughput: 0: 876.6. Samples: 113084. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2024-11-10 09:44:35,021][00444] Avg episode reward: [(0, '4.492')]
[2024-11-10 09:44:40,018][00444] Fps is (10 sec: 3277.8, 60 sec: 3549.9, 300 sec: 3248.6). Total num frames: 471040. Throughput: 0: 886.2. Samples: 116122. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2024-11-10 09:44:40,021][00444] Avg episode reward: [(0, '4.468')]
[2024-11-10 09:44:44,258][04571] Updated weights for policy 0, policy_version 120 (0.0014)
[2024-11-10 09:44:45,018][00444] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3276.8). Total num frames: 491520. Throughput: 0: 888.3. Samples: 122122. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2024-11-10 09:44:45,024][00444] Avg episode reward: [(0, '4.628')]
[2024-11-10 09:44:50,018][00444] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3250.4). Total num frames: 503808. Throughput: 0: 882.1. Samples: 126310. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2024-11-10 09:44:50,021][00444] Avg episode reward: [(0, '4.513')]
[2024-11-10 09:44:55,018][00444] Fps is (10 sec: 3276.7, 60 sec: 3549.9, 300 sec: 3276.8). Total num frames: 524288. Throughput: 0: 889.4. Samples: 129356. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2024-11-10 09:44:55,025][00444] Avg episode reward: [(0, '4.532')]
[2024-11-10 09:44:56,664][04571] Updated weights for policy 0, policy_version 130 (0.0016)
[2024-11-10 09:45:00,018][00444] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3301.6). Total num frames: 544768. Throughput: 0: 878.8. Samples: 135072. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2024-11-10 09:45:00,024][00444] Avg episode reward: [(0, '4.523')]
[2024-11-10 09:45:05,018][00444] Fps is (10 sec: 3276.9, 60 sec: 3549.9, 300 sec: 3276.8). Total num frames: 557056. Throughput: 0: 873.6. Samples: 139246. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2024-11-10 09:45:05,025][00444] Avg episode reward: [(0, '4.564')]
[2024-11-10 09:45:08,773][04571] Updated weights for policy 0, policy_version 140 (0.0012)
[2024-11-10 09:45:10,018][00444] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3300.2). Total num frames: 577536. Throughput: 0: 875.4. Samples: 142150. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-11-10 09:45:10,023][00444] Avg episode reward: [(0, '4.480')]
[2024-11-10 09:45:15,018][00444] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3299.6). Total num frames: 593920. Throughput: 0: 873.1. Samples: 148082. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-11-10 09:45:15,023][00444] Avg episode reward: [(0, '4.273')]
[2024-11-10 09:45:20,021][00444] Fps is (10 sec: 3276.0, 60 sec: 3549.7, 300 sec: 3298.9). Total num frames: 610304. Throughput: 0: 872.4. Samples: 152346. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-11-10 09:45:20,023][00444] Avg episode reward: [(0, '4.318')]
[2024-11-10 09:45:20,828][04571] Updated weights for policy 0, policy_version 150 (0.0013)
[2024-11-10 09:45:25,018][00444] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3319.9). Total num frames: 630784. Throughput: 0: 872.2. Samples: 155372. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2024-11-10 09:45:25,021][00444] Avg episode reward: [(0, '4.382')]
[2024-11-10 09:45:30,019][00444] Fps is (10 sec: 3686.9, 60 sec: 3481.7, 300 sec: 3318.8). Total num frames: 647168. Throughput: 0: 868.7. Samples: 161214. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2024-11-10 09:45:30,028][00444] Avg episode reward: [(0, '4.401')]
[2024-11-10 09:45:32,628][04571] Updated weights for policy 0, policy_version 160 (0.0012)
[2024-11-10 09:45:35,018][00444] Fps is (10 sec: 2867.2, 60 sec: 3481.6, 300 sec: 3297.3). Total num frames: 659456. Throughput: 0: 867.0. Samples: 165326. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-11-10 09:45:35,020][00444] Avg episode reward: [(0, '4.579')]
[2024-11-10 09:45:40,019][00444] Fps is (10 sec: 3276.8, 60 sec: 3481.5, 300 sec: 3316.7). Total num frames: 679936. Throughput: 0: 864.0. Samples: 168236. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2024-11-10 09:45:40,022][00444] Avg episode reward: [(0, '4.628')]
[2024-11-10 09:45:43,509][04571] Updated weights for policy 0, policy_version 170 (0.0012)
[2024-11-10 09:45:45,018][00444] Fps is (10 sec: 4096.0, 60 sec: 3481.6, 300 sec: 3335.3). Total num frames: 700416. Throughput: 0: 869.7. Samples: 174210. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2024-11-10 09:45:45,024][00444] Avg episode reward: [(0, '4.498')]
[2024-11-10 09:45:50,018][00444] Fps is (10 sec: 3277.2, 60 sec: 3481.6, 300 sec: 3314.9). Total num frames: 712704. Throughput: 0: 867.8. Samples: 178296. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-11-10 09:45:50,023][00444] Avg episode reward: [(0, '4.463')]
[2024-11-10 09:45:55,018][00444] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3332.7). Total num frames: 733184. Throughput: 0: 869.8. Samples: 181290. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2024-11-10 09:45:55,023][00444] Avg episode reward: [(0, '4.552')]
[2024-11-10 09:45:55,032][04558] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000179_733184.pth...
[2024-11-10 09:45:55,674][04571] Updated weights for policy 0, policy_version 180 (0.0025)
[2024-11-10 09:46:00,018][00444] Fps is (10 sec: 3686.3, 60 sec: 3413.3, 300 sec: 3331.4). Total num frames: 749568. Throughput: 0: 870.6. Samples: 187260. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-11-10 09:46:00,023][00444] Avg episode reward: [(0, '4.596')]
[2024-11-10 09:46:05,018][00444] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3330.2). Total num frames: 765952. Throughput: 0: 872.3. Samples: 191598. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2024-11-10 09:46:05,024][00444] Avg episode reward: [(0, '4.602')]
[2024-11-10 09:46:07,530][04571] Updated weights for policy 0, policy_version 190 (0.0013)
[2024-11-10 09:46:10,018][00444] Fps is (10 sec: 3686.5, 60 sec: 3481.6, 300 sec: 3346.5). Total num frames: 786432. Throughput: 0: 872.0. Samples: 194612. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2024-11-10 09:46:10,025][00444] Avg episode reward: [(0, '4.599')]
[2024-11-10 09:46:15,018][00444] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3345.1). Total num frames: 802816. Throughput: 0: 874.5. Samples: 200564. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2024-11-10 09:46:15,021][00444] Avg episode reward: [(0, '4.553')]
[2024-11-10 09:46:19,455][04571] Updated weights for policy 0, policy_version 200 (0.0012)
[2024-11-10 09:46:20,018][00444] Fps is (10 sec: 3276.8, 60 sec: 3481.7, 300 sec: 3343.7). Total num frames: 819200. Throughput: 0: 882.5. Samples: 205040. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2024-11-10 09:46:20,025][00444] Avg episode reward: [(0, '4.595')]
[2024-11-10 09:46:25,018][00444] Fps is (10 sec: 3686.4, 60 sec: 3481.6, 300 sec: 3358.7). Total num frames: 839680. Throughput: 0: 885.5. Samples: 208082. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-11-10 09:46:25,020][00444] Avg episode reward: [(0, '4.601')]
[2024-11-10 09:46:30,018][00444] Fps is (10 sec: 3686.3, 60 sec: 3481.7, 300 sec: 3357.1). Total num frames: 856064. Throughput: 0: 883.0. Samples: 213944. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2024-11-10 09:46:30,025][00444] Avg episode reward: [(0, '4.645')]
[2024-11-10 09:46:30,414][04571] Updated weights for policy 0, policy_version 210 (0.0012)
[2024-11-10 09:46:35,018][00444] Fps is (10 sec: 3276.7, 60 sec: 3549.9, 300 sec: 3355.6). Total num frames: 872448. Throughput: 0: 892.8. Samples: 218470. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2024-11-10 09:46:35,020][00444] Avg episode reward: [(0, '4.795')]
[2024-11-10 09:46:40,018][00444] Fps is (10 sec: 3686.5, 60 sec: 3549.9, 300 sec: 3369.5). Total num frames: 892928. Throughput: 0: 894.5. Samples: 221542. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-11-10 09:46:40,021][00444] Avg episode reward: [(0, '4.752')]
[2024-11-10 09:46:41,248][04571] Updated weights for policy 0, policy_version 220 (0.0013)
[2024-11-10 09:46:45,018][00444] Fps is (10 sec: 3686.5, 60 sec: 3481.6, 300 sec: 3367.8). Total num frames: 909312. Throughput: 0: 890.0. Samples: 227308. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2024-11-10 09:46:45,022][00444] Avg episode reward: [(0, '4.501')]
[2024-11-10 09:46:50,018][00444] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3366.2). Total num frames: 925696. Throughput: 0: 897.6. Samples: 231992. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-11-10 09:46:50,024][00444] Avg episode reward: [(0, '4.710')]
[2024-11-10 09:46:53,098][04571] Updated weights for policy 0, policy_version 230 (0.0015)
[2024-11-10 09:46:55,018][00444] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3379.2). Total num frames: 946176. Throughput: 0: 898.6. Samples: 235050. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2024-11-10 09:46:55,022][00444] Avg episode reward: [(0, '4.844')]
[2024-11-10 09:46:55,080][04558] Saving new best policy, reward=4.844!
[2024-11-10 09:47:00,018][00444] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3377.4). Total num frames: 962560. Throughput: 0: 887.9. Samples: 240518. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2024-11-10 09:47:00,025][00444] Avg episode reward: [(0, '4.879')]
[2024-11-10 09:47:00,031][04558] Saving new best policy, reward=4.879!
[2024-11-10 09:47:05,018][00444] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3375.7). Total num frames: 978944. Throughput: 0: 891.7. Samples: 245168. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-11-10 09:47:05,022][00444] Avg episode reward: [(0, '4.921')]
[2024-11-10 09:47:05,032][04558] Saving new best policy, reward=4.921!
[2024-11-10 09:47:05,251][04571] Updated weights for policy 0, policy_version 240 (0.0013)
[2024-11-10 09:47:10,018][00444] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3387.9). Total num frames: 999424. Throughput: 0: 889.2. Samples: 248096. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2024-11-10 09:47:10,024][00444] Avg episode reward: [(0, '4.884')]
[2024-11-10 09:47:15,022][00444] Fps is (10 sec: 3685.0, 60 sec: 3549.6, 300 sec: 3443.4). Total num frames: 1015808. Throughput: 0: 884.5. Samples: 253752. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2024-11-10 09:47:15,028][00444] Avg episode reward: [(0, '4.784')]
[2024-11-10 09:47:17,077][04571] Updated weights for policy 0, policy_version 250 (0.0014)
[2024-11-10 09:47:20,018][00444] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3512.8). Total num frames: 1036288. Throughput: 0: 892.1. Samples: 258616. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2024-11-10 09:47:20,020][00444] Avg episode reward: [(0, '5.043')]
[2024-11-10 09:47:20,022][04558] Saving new best policy, reward=5.043!
[2024-11-10 09:47:25,018][00444] Fps is (10 sec: 3687.9, 60 sec: 3549.9, 300 sec: 3526.7). Total num frames: 1052672. Throughput: 0: 890.7. Samples: 261622. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2024-11-10 09:47:25,027][00444] Avg episode reward: [(0, '5.282')]
[2024-11-10 09:47:25,043][04558] Saving new best policy, reward=5.282!
[2024-11-10 09:47:27,325][04571] Updated weights for policy 0, policy_version 260 (0.0012)
[2024-11-10 09:47:30,018][00444] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3526.7). Total num frames: 1069056. Throughput: 0: 883.6. Samples: 267068. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-11-10 09:47:30,022][00444] Avg episode reward: [(0, '5.131')]
[2024-11-10 09:47:35,018][00444] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3526.7). Total num frames: 1085440. Throughput: 0: 887.9. Samples: 271948. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-11-10 09:47:35,025][00444] Avg episode reward: [(0, '5.049')]
[2024-11-10 09:47:38,922][04571] Updated weights for policy 0, policy_version 270 (0.0015)
[2024-11-10 09:47:40,018][00444] Fps is (10 sec: 4096.0, 60 sec: 3618.1, 300 sec: 3540.6). Total num frames: 1110016. Throughput: 0: 888.2. Samples: 275018. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-11-10 09:47:40,025][00444] Avg episode reward: [(0, '5.247')]
[2024-11-10 09:47:45,018][00444] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3526.7). Total num frames: 1122304. Throughput: 0: 888.5. Samples: 280502. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2024-11-10 09:47:45,020][00444] Avg episode reward: [(0, '5.374')]
[2024-11-10 09:47:45,030][04558] Saving new best policy, reward=5.374!
[2024-11-10 09:47:50,018][00444] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3540.6). Total num frames: 1142784. Throughput: 0: 895.3. Samples: 285456. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0)
[2024-11-10 09:47:50,024][00444] Avg episode reward: [(0, '5.122')]
[2024-11-10 09:47:50,888][04571] Updated weights for policy 0, policy_version 280 (0.0013)
[2024-11-10 09:47:55,018][00444] Fps is (10 sec: 4096.0, 60 sec: 3618.1, 300 sec: 3540.6). Total num frames: 1163264. Throughput: 0: 897.9. Samples: 288500. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2024-11-10 09:47:55,026][00444] Avg episode reward: [(0, '5.044')]
[2024-11-10 09:47:55,035][04558] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000284_1163264.pth...
[2024-11-10 09:47:55,135][04558] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000077_315392.pth
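The save/remove pair above shows the learner keeping a bounded set of checkpoints: saving `checkpoint_000000284_1163264.pth` triggers removal of the oldest, `checkpoint_000000077_315392.pth`, while `checkpoint_000000179_733184.pth` survives. A sketch of that retention rule — the keep count of 2 is inferred from this log, not a documented default:

```python
def checkpoints_to_remove(filenames: list[str], keep_latest: int = 2) -> list[str]:
    """Return the checkpoint filenames to delete, keeping only the newest
    `keep_latest` (a sketch of the retention behavior seen in the log).
    Names follow checkpoint_<version>_<env_steps>.pth with zero-padded
    fields, so lexicographic order matches chronological order."""
    ordered = sorted(filenames)
    return ordered[:-keep_latest] if len(ordered) > keep_latest else []


names = [
    "checkpoint_000000284_1163264.pth",
    "checkpoint_000000077_315392.pth",
    "checkpoint_000000179_733184.pth",
]
stale = checkpoints_to_remove(names)
# stale -> ["checkpoint_000000077_315392.pth"], matching the removal above
```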
[2024-11-10 09:48:00,020][00444] Fps is (10 sec: 3276.1, 60 sec: 3549.8, 300 sec: 3526.7). Total num frames: 1175552. Throughput: 0: 887.2. Samples: 293676. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2024-11-10 09:48:00,023][00444] Avg episode reward: [(0, '5.246')]
[2024-11-10 09:48:02,868][04571] Updated weights for policy 0, policy_version 290 (0.0016)
[2024-11-10 09:48:05,021][00444] Fps is (10 sec: 3276.0, 60 sec: 3618.0, 300 sec: 3540.6). Total num frames: 1196032. Throughput: 0: 891.2. Samples: 298722. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2024-11-10 09:48:05,022][00444] Avg episode reward: [(0, '5.228')]
[2024-11-10 09:48:10,018][00444] Fps is (10 sec: 4096.8, 60 sec: 3618.1, 300 sec: 3540.6). Total num frames: 1216512. Throughput: 0: 892.0. Samples: 301764. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-11-10 09:48:10,020][00444] Avg episode reward: [(0, '5.219')]
[2024-11-10 09:48:14,084][04571] Updated weights for policy 0, policy_version 300 (0.0012)
[2024-11-10 09:48:15,018][00444] Fps is (10 sec: 3277.6, 60 sec: 3550.1, 300 sec: 3526.7). Total num frames: 1228800. Throughput: 0: 886.5. Samples: 306960. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0)
[2024-11-10 09:48:15,023][00444] Avg episode reward: [(0, '5.186')]
[2024-11-10 09:48:20,018][00444] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3540.6). Total num frames: 1249280. Throughput: 0: 896.2. Samples: 312278. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2024-11-10 09:48:20,021][00444] Avg episode reward: [(0, '5.283')]
[2024-11-10 09:48:24,713][04571] Updated weights for policy 0, policy_version 310 (0.0012)
[2024-11-10 09:48:25,019][00444] Fps is (10 sec: 4095.8, 60 sec: 3618.1, 300 sec: 3540.6). Total num frames: 1269760. Throughput: 0: 896.1. Samples: 315342. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2024-11-10 09:48:25,023][00444] Avg episode reward: [(0, '5.479')]
[2024-11-10 09:48:25,032][04558] Saving new best policy, reward=5.479!
[2024-11-10 09:48:30,018][00444] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3526.8). Total num frames: 1282048. Throughput: 0: 884.4. Samples: 320302. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0)
[2024-11-10 09:48:30,023][00444] Avg episode reward: [(0, '5.702')]
[2024-11-10 09:48:30,031][04558] Saving new best policy, reward=5.702!
[2024-11-10 09:48:35,018][00444] Fps is (10 sec: 3277.0, 60 sec: 3618.1, 300 sec: 3540.6). Total num frames: 1302528. Throughput: 0: 890.2. Samples: 325516.
|
[2024-11-10 09:48:35,021][00444] Avg episode reward: [(0, '5.922')] |
|
[2024-11-10 09:48:35,030][04558] Saving new best policy, reward=5.922! |
|
[2024-11-10 09:48:36,872][04571] Updated weights for policy 0, policy_version 320 (0.0016) |
|
[2024-11-10 09:48:40,018][00444] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3540.6). Total num frames: 1323008. Throughput: 0: 887.6. Samples: 328444. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:48:40,021][00444] Avg episode reward: [(0, '6.160')] |
|
[2024-11-10 09:48:40,026][04558] Saving new best policy, reward=6.160! |
|
[2024-11-10 09:48:45,018][00444] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3526.7). Total num frames: 1335296. Throughput: 0: 881.3. Samples: 333332. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:48:45,024][00444] Avg episode reward: [(0, '5.861')] |
|
[2024-11-10 09:48:48,832][04571] Updated weights for policy 0, policy_version 330 (0.0014) |
|
[2024-11-10 09:48:50,018][00444] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3540.6). Total num frames: 1355776. Throughput: 0: 886.6. Samples: 338616. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:48:50,021][00444] Avg episode reward: [(0, '6.063')] |
|
[2024-11-10 09:48:55,018][00444] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3540.6). Total num frames: 1376256. Throughput: 0: 886.9. Samples: 341674. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:48:55,020][00444] Avg episode reward: [(0, '6.488')] |
|
[2024-11-10 09:48:55,031][04558] Saving new best policy, reward=6.488! |
|
[2024-11-10 09:49:00,019][00444] Fps is (10 sec: 3276.5, 60 sec: 3549.9, 300 sec: 3540.6). Total num frames: 1388544. Throughput: 0: 878.5. Samples: 346492. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:49:00,025][00444] Avg episode reward: [(0, '6.577')] |
|
[2024-11-10 09:49:00,032][04558] Saving new best policy, reward=6.577! |
|
[2024-11-10 09:49:00,940][04571] Updated weights for policy 0, policy_version 340 (0.0013) |
|
[2024-11-10 09:49:05,018][00444] Fps is (10 sec: 3276.8, 60 sec: 3550.0, 300 sec: 3540.6). Total num frames: 1409024. Throughput: 0: 880.4. Samples: 351898. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:49:05,023][00444] Avg episode reward: [(0, '6.941')] |
|
[2024-11-10 09:49:05,038][04558] Saving new best policy, reward=6.941! |
|
[2024-11-10 09:49:10,018][00444] Fps is (10 sec: 3686.7, 60 sec: 3481.6, 300 sec: 3526.7). Total num frames: 1425408. Throughput: 0: 877.2. Samples: 354814. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-11-10 09:49:10,025][00444] Avg episode reward: [(0, '6.807')] |
|
[2024-11-10 09:49:11,663][04571] Updated weights for policy 0, policy_version 350 (0.0017) |
|
[2024-11-10 09:49:15,020][00444] Fps is (10 sec: 3276.2, 60 sec: 3549.8, 300 sec: 3540.6). Total num frames: 1441792. Throughput: 0: 873.7. Samples: 359622. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:49:15,028][00444] Avg episode reward: [(0, '6.906')] |
|
[2024-11-10 09:49:20,018][00444] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3526.7). Total num frames: 1458176. Throughput: 0: 879.9. Samples: 365110. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:49:20,020][00444] Avg episode reward: [(0, '7.040')] |
|
[2024-11-10 09:49:20,022][04558] Saving new best policy, reward=7.040! |
|
[2024-11-10 09:49:23,149][04571] Updated weights for policy 0, policy_version 360 (0.0012) |
|
[2024-11-10 09:49:25,021][00444] Fps is (10 sec: 3686.0, 60 sec: 3481.5, 300 sec: 3526.7). Total num frames: 1478656. Throughput: 0: 880.9. Samples: 368088. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:49:25,024][00444] Avg episode reward: [(0, '7.506')] |
|
[2024-11-10 09:49:25,036][04558] Saving new best policy, reward=7.506! |
|
[2024-11-10 09:49:30,018][00444] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3526.7). Total num frames: 1490944. Throughput: 0: 877.0. Samples: 372798. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-11-10 09:49:30,020][00444] Avg episode reward: [(0, '7.550')] |
|
[2024-11-10 09:49:30,029][04558] Saving new best policy, reward=7.550! |
|
[2024-11-10 09:49:35,018][00444] Fps is (10 sec: 3277.8, 60 sec: 3481.6, 300 sec: 3526.7). Total num frames: 1511424. Throughput: 0: 884.1. Samples: 378402. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-11-10 09:49:35,022][00444] Avg episode reward: [(0, '7.991')] |
|
[2024-11-10 09:49:35,035][04558] Saving new best policy, reward=7.991! |
|
[2024-11-10 09:49:35,244][04571] Updated weights for policy 0, policy_version 370 (0.0015) |
|
[2024-11-10 09:49:40,018][00444] Fps is (10 sec: 4096.0, 60 sec: 3481.6, 300 sec: 3526.7). Total num frames: 1531904. Throughput: 0: 882.6. Samples: 381390. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-11-10 09:49:40,024][00444] Avg episode reward: [(0, '7.691')] |
|
[2024-11-10 09:49:45,018][00444] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3540.6). Total num frames: 1548288. Throughput: 0: 877.8. Samples: 385992. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:49:45,024][00444] Avg episode reward: [(0, '8.183')] |
|
[2024-11-10 09:49:45,034][04558] Saving new best policy, reward=8.183! |
|
[2024-11-10 09:49:47,066][04571] Updated weights for policy 0, policy_version 380 (0.0012) |
|
[2024-11-10 09:49:50,018][00444] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3526.7). Total num frames: 1564672. Throughput: 0: 882.3. Samples: 391600. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-11-10 09:49:50,021][00444] Avg episode reward: [(0, '8.023')] |
|
[2024-11-10 09:49:55,021][00444] Fps is (10 sec: 3685.2, 60 sec: 3481.4, 300 sec: 3526.7). Total num frames: 1585152. Throughput: 0: 883.4. Samples: 394572. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:49:55,024][00444] Avg episode reward: [(0, '8.207')] |
|
[2024-11-10 09:49:55,036][04558] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000387_1585152.pth... |
|
[2024-11-10 09:49:55,155][04558] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000179_733184.pth |
|
[2024-11-10 09:49:55,166][04558] Saving new best policy, reward=8.207! |
|
[2024-11-10 09:49:59,275][04571] Updated weights for policy 0, policy_version 390 (0.0012) |
|
[2024-11-10 09:50:00,018][00444] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3526.7). Total num frames: 1597440. Throughput: 0: 875.1. Samples: 399002. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:50:00,021][00444] Avg episode reward: [(0, '8.765')] |
|
[2024-11-10 09:50:00,026][04558] Saving new best policy, reward=8.765! |
|
[2024-11-10 09:50:05,019][00444] Fps is (10 sec: 3277.8, 60 sec: 3481.6, 300 sec: 3526.7). Total num frames: 1617920. Throughput: 0: 878.3. Samples: 404636. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-11-10 09:50:05,023][00444] Avg episode reward: [(0, '10.069')] |
|
[2024-11-10 09:50:05,039][04558] Saving new best policy, reward=10.069! |
|
[2024-11-10 09:50:09,582][04571] Updated weights for policy 0, policy_version 400 (0.0013) |
|
[2024-11-10 09:50:10,018][00444] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3540.6). Total num frames: 1638400. Throughput: 0: 880.5. Samples: 407710. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:50:10,029][00444] Avg episode reward: [(0, '11.136')] |
|
[2024-11-10 09:50:10,031][04558] Saving new best policy, reward=11.136! |
|
[2024-11-10 09:50:15,018][00444] Fps is (10 sec: 3276.9, 60 sec: 3481.7, 300 sec: 3526.8). Total num frames: 1650688. Throughput: 0: 874.0. Samples: 412130. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:50:15,020][00444] Avg episode reward: [(0, '11.660')] |
|
[2024-11-10 09:50:15,038][04558] Saving new best policy, reward=11.660! |
|
[2024-11-10 09:50:20,018][00444] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3526.7). Total num frames: 1671168. Throughput: 0: 879.1. Samples: 417962. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:50:20,024][00444] Avg episode reward: [(0, '11.265')] |
|
[2024-11-10 09:50:21,395][04571] Updated weights for policy 0, policy_version 410 (0.0012) |
|
[2024-11-10 09:50:25,018][00444] Fps is (10 sec: 4096.0, 60 sec: 3550.0, 300 sec: 3540.6). Total num frames: 1691648. Throughput: 0: 879.3. Samples: 420960. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-11-10 09:50:25,024][00444] Avg episode reward: [(0, '11.463')] |
|
[2024-11-10 09:50:30,018][00444] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3540.6). Total num frames: 1703936. Throughput: 0: 876.4. Samples: 425432. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:50:30,020][00444] Avg episode reward: [(0, '12.537')] |
|
[2024-11-10 09:50:30,028][04558] Saving new best policy, reward=12.537! |
|
[2024-11-10 09:50:33,480][04571] Updated weights for policy 0, policy_version 420 (0.0011) |
|
[2024-11-10 09:50:35,018][00444] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3540.6). Total num frames: 1724416. Throughput: 0: 882.0. Samples: 431290. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:50:35,023][00444] Avg episode reward: [(0, '13.596')] |
|
[2024-11-10 09:50:35,038][04558] Saving new best policy, reward=13.596! |
|
[2024-11-10 09:50:40,018][00444] Fps is (10 sec: 4096.0, 60 sec: 3549.9, 300 sec: 3540.6). Total num frames: 1744896. Throughput: 0: 883.1. Samples: 434308. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:50:40,022][00444] Avg episode reward: [(0, '14.083')] |
|
[2024-11-10 09:50:40,028][04558] Saving new best policy, reward=14.083! |
|
[2024-11-10 09:50:45,018][00444] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3540.6). Total num frames: 1757184. Throughput: 0: 878.6. Samples: 438538. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:50:45,030][00444] Avg episode reward: [(0, '14.421')] |
|
[2024-11-10 09:50:45,045][04558] Saving new best policy, reward=14.421! |
|
[2024-11-10 09:50:45,464][04571] Updated weights for policy 0, policy_version 430 (0.0012) |
|
[2024-11-10 09:50:50,018][00444] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3540.6). Total num frames: 1777664. Throughput: 0: 887.9. Samples: 444590. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-11-10 09:50:50,026][00444] Avg episode reward: [(0, '13.849')] |
|
[2024-11-10 09:50:55,020][00444] Fps is (10 sec: 4095.4, 60 sec: 3550.0, 300 sec: 3554.5). Total num frames: 1798144. Throughput: 0: 887.7. Samples: 447656. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:50:55,022][00444] Avg episode reward: [(0, '12.863')] |
|
[2024-11-10 09:50:56,463][04571] Updated weights for policy 0, policy_version 440 (0.0013) |
|
[2024-11-10 09:51:00,018][00444] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3540.6). Total num frames: 1810432. Throughput: 0: 885.8. Samples: 451990. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-11-10 09:51:00,022][00444] Avg episode reward: [(0, '12.365')] |
|
[2024-11-10 09:51:05,018][00444] Fps is (10 sec: 3277.2, 60 sec: 3549.9, 300 sec: 3540.6). Total num frames: 1830912. Throughput: 0: 891.2. Samples: 458068. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-11-10 09:51:05,021][00444] Avg episode reward: [(0, '12.865')] |
|
[2024-11-10 09:51:07,223][04571] Updated weights for policy 0, policy_version 450 (0.0012) |
|
[2024-11-10 09:51:10,019][00444] Fps is (10 sec: 4095.9, 60 sec: 3549.8, 300 sec: 3554.5). Total num frames: 1851392. Throughput: 0: 892.6. Samples: 461128. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:51:10,021][00444] Avg episode reward: [(0, '12.468')] |
|
[2024-11-10 09:51:15,020][00444] Fps is (10 sec: 3685.9, 60 sec: 3618.1, 300 sec: 3554.5). Total num frames: 1867776. Throughput: 0: 889.2. Samples: 465446. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-11-10 09:51:15,027][00444] Avg episode reward: [(0, '12.537')] |
|
[2024-11-10 09:51:19,041][04571] Updated weights for policy 0, policy_version 460 (0.0013) |
|
[2024-11-10 09:51:20,022][00444] Fps is (10 sec: 3275.5, 60 sec: 3549.6, 300 sec: 3540.6). Total num frames: 1884160. Throughput: 0: 895.0. Samples: 471568. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-11-10 09:51:20,028][00444] Avg episode reward: [(0, '12.141')] |
|
[2024-11-10 09:51:25,018][00444] Fps is (10 sec: 3686.9, 60 sec: 3549.9, 300 sec: 3554.5). Total num frames: 1904640. Throughput: 0: 895.0. Samples: 474584. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:51:25,027][00444] Avg episode reward: [(0, '11.972')] |
|
[2024-11-10 09:51:30,019][00444] Fps is (10 sec: 3687.5, 60 sec: 3618.1, 300 sec: 3554.5). Total num frames: 1921024. Throughput: 0: 897.9. Samples: 478946. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:51:30,022][00444] Avg episode reward: [(0, '12.128')] |
|
[2024-11-10 09:51:30,885][04571] Updated weights for policy 0, policy_version 470 (0.0013) |
|
[2024-11-10 09:51:35,018][00444] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3554.5). Total num frames: 1941504. Throughput: 0: 898.6. Samples: 485028. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:51:35,025][00444] Avg episode reward: [(0, '12.638')] |
|
[2024-11-10 09:51:40,020][00444] Fps is (10 sec: 3686.3, 60 sec: 3549.8, 300 sec: 3554.5). Total num frames: 1957888. Throughput: 0: 898.7. Samples: 488098. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:51:40,024][00444] Avg episode reward: [(0, '13.805')] |
|
[2024-11-10 09:51:42,643][04571] Updated weights for policy 0, policy_version 480 (0.0013) |
|
[2024-11-10 09:51:45,018][00444] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3554.5). Total num frames: 1974272. Throughput: 0: 899.8. Samples: 492482. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:51:45,021][00444] Avg episode reward: [(0, '14.370')] |
|
[2024-11-10 09:51:50,018][00444] Fps is (10 sec: 3686.9, 60 sec: 3618.1, 300 sec: 3554.5). Total num frames: 1994752. Throughput: 0: 902.2. Samples: 498668. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-11-10 09:51:50,024][00444] Avg episode reward: [(0, '15.261')] |
|
[2024-11-10 09:51:50,026][04558] Saving new best policy, reward=15.261! |
|
[2024-11-10 09:51:52,881][04571] Updated weights for policy 0, policy_version 490 (0.0013) |
|
[2024-11-10 09:51:55,018][00444] Fps is (10 sec: 3686.3, 60 sec: 3549.9, 300 sec: 3554.5). Total num frames: 2011136. Throughput: 0: 898.2. Samples: 501546. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-11-10 09:51:55,021][00444] Avg episode reward: [(0, '16.082')] |
|
[2024-11-10 09:51:55,037][04558] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000491_2011136.pth... |
|
[2024-11-10 09:51:55,040][00444] Components not started: RolloutWorker_w2, RolloutWorker_w4, RolloutWorker_w5, RolloutWorker_w6, RolloutWorker_w7, wait_time=600.0 seconds |
|
[2024-11-10 09:51:55,142][04558] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000284_1163264.pth |
|
[2024-11-10 09:51:55,154][04558] Saving new best policy, reward=16.082! |
|
[2024-11-10 09:52:00,018][00444] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3554.5). Total num frames: 2027520. Throughput: 0: 896.3. Samples: 505780. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-11-10 09:52:00,021][00444] Avg episode reward: [(0, '16.118')] |
|
[2024-11-10 09:52:00,024][04558] Saving new best policy, reward=16.118! |
|
[2024-11-10 09:52:04,689][04571] Updated weights for policy 0, policy_version 500 (0.0012) |
|
[2024-11-10 09:52:05,018][00444] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3554.5). Total num frames: 2048000. Throughput: 0: 894.3. Samples: 511806. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:52:05,023][00444] Avg episode reward: [(0, '16.332')] |
|
[2024-11-10 09:52:05,031][04558] Saving new best policy, reward=16.332! |
|
[2024-11-10 09:52:10,018][00444] Fps is (10 sec: 3276.8, 60 sec: 3481.6, 300 sec: 3540.7). Total num frames: 2060288. Throughput: 0: 889.0. Samples: 514590. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:52:10,021][00444] Avg episode reward: [(0, '15.476')] |
|
[2024-11-10 09:52:15,018][00444] Fps is (10 sec: 3276.9, 60 sec: 3549.9, 300 sec: 3540.6). Total num frames: 2080768. Throughput: 0: 893.0. Samples: 519130. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:52:15,021][00444] Avg episode reward: [(0, '15.606')] |
|
[2024-11-10 09:52:16,738][04571] Updated weights for policy 0, policy_version 510 (0.0014) |
|
[2024-11-10 09:52:20,018][00444] Fps is (10 sec: 4096.0, 60 sec: 3618.4, 300 sec: 3554.5). Total num frames: 2101248. Throughput: 0: 894.5. Samples: 525282. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:52:20,021][00444] Avg episode reward: [(0, '15.495')] |
|
[2024-11-10 09:52:25,020][00444] Fps is (10 sec: 3685.6, 60 sec: 3549.7, 300 sec: 3554.5). Total num frames: 2117632. Throughput: 0: 889.3. Samples: 528116. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-11-10 09:52:25,024][00444] Avg episode reward: [(0, '16.338')] |
|
[2024-11-10 09:52:25,040][04558] Saving new best policy, reward=16.338! |
|
[2024-11-10 09:52:28,451][04571] Updated weights for policy 0, policy_version 520 (0.0015) |
|
[2024-11-10 09:52:30,018][00444] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3554.5). Total num frames: 2134016. Throughput: 0: 891.9. Samples: 532616. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:52:30,025][00444] Avg episode reward: [(0, '17.896')] |
|
[2024-11-10 09:52:30,027][04558] Saving new best policy, reward=17.896! |
|
[2024-11-10 09:52:35,019][00444] Fps is (10 sec: 3686.9, 60 sec: 3549.8, 300 sec: 3540.6). Total num frames: 2154496. Throughput: 0: 887.3. Samples: 538596. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:52:35,031][00444] Avg episode reward: [(0, '18.445')] |
|
[2024-11-10 09:52:35,049][04558] Saving new best policy, reward=18.445! |
|
[2024-11-10 09:52:39,849][04571] Updated weights for policy 0, policy_version 530 (0.0011) |
|
[2024-11-10 09:52:40,018][00444] Fps is (10 sec: 3686.4, 60 sec: 3550.0, 300 sec: 3554.5). Total num frames: 2170880. Throughput: 0: 883.1. Samples: 541284. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:52:40,024][00444] Avg episode reward: [(0, '18.589')] |
|
[2024-11-10 09:52:40,027][04558] Saving new best policy, reward=18.589! |
|
[2024-11-10 09:52:45,018][00444] Fps is (10 sec: 3277.1, 60 sec: 3549.9, 300 sec: 3540.6). Total num frames: 2187264. Throughput: 0: 889.7. Samples: 545816. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-11-10 09:52:45,021][00444] Avg episode reward: [(0, '19.094')] |
|
[2024-11-10 09:52:45,035][04558] Saving new best policy, reward=19.094! |
|
[2024-11-10 09:52:50,018][00444] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3540.6). Total num frames: 2207744. Throughput: 0: 891.7. Samples: 551934. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:52:50,024][00444] Avg episode reward: [(0, '18.222')] |
|
[2024-11-10 09:52:50,675][04571] Updated weights for policy 0, policy_version 540 (0.0015) |
|
[2024-11-10 09:52:55,022][00444] Fps is (10 sec: 3684.9, 60 sec: 3549.6, 300 sec: 3554.5). Total num frames: 2224128. Throughput: 0: 888.1. Samples: 554558. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:52:55,025][00444] Avg episode reward: [(0, '16.354')] |
|
[2024-11-10 09:53:00,021][00444] Fps is (10 sec: 3276.0, 60 sec: 3549.7, 300 sec: 3540.6). Total num frames: 2240512. Throughput: 0: 895.2. Samples: 559418. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:53:00,022][00444] Avg episode reward: [(0, '16.888')] |
|
[2024-11-10 09:53:02,318][04571] Updated weights for policy 0, policy_version 550 (0.0014) |
|
[2024-11-10 09:53:05,021][00444] Fps is (10 sec: 3687.1, 60 sec: 3549.7, 300 sec: 3540.6). Total num frames: 2260992. Throughput: 0: 894.4. Samples: 565530. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:53:05,026][00444] Avg episode reward: [(0, '15.742')] |
|
[2024-11-10 09:53:10,018][00444] Fps is (10 sec: 3687.3, 60 sec: 3618.1, 300 sec: 3554.5). Total num frames: 2277376. Throughput: 0: 888.4. Samples: 568094. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:53:10,021][00444] Avg episode reward: [(0, '15.605')] |
|
[2024-11-10 09:53:14,043][04571] Updated weights for policy 0, policy_version 560 (0.0018) |
|
[2024-11-10 09:53:15,025][00444] Fps is (10 sec: 3684.9, 60 sec: 3617.8, 300 sec: 3554.4). Total num frames: 2297856. Throughput: 0: 897.0. Samples: 572986. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:53:15,031][00444] Avg episode reward: [(0, '15.876')] |
|
[2024-11-10 09:53:20,018][00444] Fps is (10 sec: 4096.0, 60 sec: 3618.1, 300 sec: 3554.5). Total num frames: 2318336. Throughput: 0: 902.5. Samples: 579206. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:53:20,026][00444] Avg episode reward: [(0, '16.989')] |
|
[2024-11-10 09:53:25,018][00444] Fps is (10 sec: 3278.8, 60 sec: 3550.0, 300 sec: 3554.5). Total num frames: 2330624. Throughput: 0: 895.6. Samples: 581584. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-11-10 09:53:25,020][00444] Avg episode reward: [(0, '17.415')] |
|
[2024-11-10 09:53:25,605][04571] Updated weights for policy 0, policy_version 570 (0.0014) |
|
[2024-11-10 09:53:30,020][00444] Fps is (10 sec: 3276.2, 60 sec: 3618.0, 300 sec: 3554.5). Total num frames: 2351104. Throughput: 0: 909.0. Samples: 586724. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-11-10 09:53:30,026][00444] Avg episode reward: [(0, '18.813')] |
|
[2024-11-10 09:53:35,020][00444] Fps is (10 sec: 4095.5, 60 sec: 3618.1, 300 sec: 3554.5). Total num frames: 2371584. Throughput: 0: 909.5. Samples: 592864. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:53:35,022][00444] Avg episode reward: [(0, '19.454')] |
|
[2024-11-10 09:53:35,032][04558] Saving new best policy, reward=19.454! |
|
[2024-11-10 09:53:36,053][04571] Updated weights for policy 0, policy_version 580 (0.0012) |
|
[2024-11-10 09:53:40,018][00444] Fps is (10 sec: 3277.4, 60 sec: 3549.9, 300 sec: 3554.5). Total num frames: 2383872. Throughput: 0: 898.4. Samples: 594984. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:53:40,021][00444] Avg episode reward: [(0, '19.740')] |
|
[2024-11-10 09:53:40,024][04558] Saving new best policy, reward=19.740! |
|
[2024-11-10 09:53:45,020][00444] Fps is (10 sec: 3276.8, 60 sec: 3618.0, 300 sec: 3554.5). Total num frames: 2404352. Throughput: 0: 906.6. Samples: 600214. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-11-10 09:53:45,021][00444] Avg episode reward: [(0, '20.095')] |
|
[2024-11-10 09:53:45,030][04558] Saving new best policy, reward=20.095! |
|
[2024-11-10 09:53:47,493][04571] Updated weights for policy 0, policy_version 590 (0.0012) |
|
[2024-11-10 09:53:50,021][00444] Fps is (10 sec: 4094.7, 60 sec: 3617.9, 300 sec: 3554.5). Total num frames: 2424832. Throughput: 0: 906.8. Samples: 606338. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:53:50,024][00444] Avg episode reward: [(0, '19.529')] |
|
[2024-11-10 09:53:55,018][00444] Fps is (10 sec: 3277.2, 60 sec: 3550.1, 300 sec: 3554.5). Total num frames: 2437120. Throughput: 0: 894.3. Samples: 608338. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-11-10 09:53:55,023][00444] Avg episode reward: [(0, '18.970')] |
|
[2024-11-10 09:53:55,034][04558] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000595_2437120.pth... |
|
[2024-11-10 09:53:55,130][04558] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000387_1585152.pth |
|
[2024-11-10 09:53:59,255][04571] Updated weights for policy 0, policy_version 600 (0.0012) |
|
[2024-11-10 09:54:00,018][00444] Fps is (10 sec: 3277.8, 60 sec: 3618.3, 300 sec: 3554.5). Total num frames: 2457600. Throughput: 0: 904.2. Samples: 613668. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:54:00,023][00444] Avg episode reward: [(0, '18.109')] |
|
[2024-11-10 09:54:05,018][00444] Fps is (10 sec: 4096.1, 60 sec: 3618.3, 300 sec: 3568.4). Total num frames: 2478080. Throughput: 0: 901.7. Samples: 619782. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-11-10 09:54:05,021][00444] Avg episode reward: [(0, '19.712')] |
|
[2024-11-10 09:54:10,018][00444] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3554.5). Total num frames: 2490368. Throughput: 0: 892.2. Samples: 621734. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-11-10 09:54:10,023][00444] Avg episode reward: [(0, '20.866')] |
|
[2024-11-10 09:54:10,057][04558] Saving new best policy, reward=20.866! |
|
[2024-11-10 09:54:11,152][04571] Updated weights for policy 0, policy_version 610 (0.0012) |
|
[2024-11-10 09:54:15,018][00444] Fps is (10 sec: 3276.8, 60 sec: 3550.2, 300 sec: 3568.4). Total num frames: 2510848. Throughput: 0: 897.1. Samples: 627094. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-11-10 09:54:15,027][00444] Avg episode reward: [(0, '21.293')] |
|
[2024-11-10 09:54:15,066][04558] Saving new best policy, reward=21.293! |
|
[2024-11-10 09:54:20,021][00444] Fps is (10 sec: 4094.9, 60 sec: 3549.7, 300 sec: 3568.4). Total num frames: 2531328. Throughput: 0: 894.3. Samples: 633108. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:54:20,027][00444] Avg episode reward: [(0, '23.177')] |
|
[2024-11-10 09:54:20,028][04558] Saving new best policy, reward=23.177! |
|
[2024-11-10 09:54:22,450][04571] Updated weights for policy 0, policy_version 620 (0.0012) |
|
[2024-11-10 09:54:25,018][00444] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3582.3). Total num frames: 2547712. Throughput: 0: 889.6. Samples: 635018. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-11-10 09:54:25,024][00444] Avg episode reward: [(0, '23.752')] |
|
[2024-11-10 09:54:25,037][04558] Saving new best policy, reward=23.752! |
|
[2024-11-10 09:54:30,020][00444] Fps is (10 sec: 3686.8, 60 sec: 3618.1, 300 sec: 3582.2). Total num frames: 2568192. Throughput: 0: 900.0. Samples: 640716. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:54:30,022][00444] Avg episode reward: [(0, '24.386')] |
|
[2024-11-10 09:54:30,028][04558] Saving new best policy, reward=24.386! |
|
[2024-11-10 09:54:33,120][04571] Updated weights for policy 0, policy_version 630 (0.0011) |
|
[2024-11-10 09:54:35,018][00444] Fps is (10 sec: 3686.3, 60 sec: 3549.9, 300 sec: 3568.4). Total num frames: 2584576. Throughput: 0: 889.1. Samples: 646346. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:54:35,020][00444] Avg episode reward: [(0, '23.112')] |
|
[2024-11-10 09:54:40,019][00444] Fps is (10 sec: 3276.9, 60 sec: 3618.1, 300 sec: 3568.4). Total num frames: 2600960. Throughput: 0: 887.2. Samples: 648264. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:54:40,026][00444] Avg episode reward: [(0, '22.059')] |
|
[2024-11-10 09:54:44,909][04571] Updated weights for policy 0, policy_version 640 (0.0012) |
|
[2024-11-10 09:54:45,018][00444] Fps is (10 sec: 3686.4, 60 sec: 3618.2, 300 sec: 3582.3). Total num frames: 2621440. Throughput: 0: 896.2. Samples: 653996. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-11-10 09:54:45,021][00444] Avg episode reward: [(0, '21.490')] |
|
[2024-11-10 09:54:50,018][00444] Fps is (10 sec: 3686.8, 60 sec: 3550.1, 300 sec: 3568.4). Total num frames: 2637824. Throughput: 0: 886.3. Samples: 659664. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:54:50,024][00444] Avg episode reward: [(0, '20.058')] |
|
[2024-11-10 09:54:55,018][00444] Fps is (10 sec: 3276.7, 60 sec: 3618.1, 300 sec: 3582.3). Total num frames: 2654208. Throughput: 0: 884.1. Samples: 661520. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:54:55,025][00444] Avg episode reward: [(0, '20.488')] |
|
[2024-11-10 09:54:56,827][04571] Updated weights for policy 0, policy_version 650 (0.0018) |
|
[2024-11-10 09:55:00,018][00444] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3582.3). Total num frames: 2674688. Throughput: 0: 894.3. Samples: 667338. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:55:00,025][00444] Avg episode reward: [(0, '19.802')] |
|
[2024-11-10 09:55:05,018][00444] Fps is (10 sec: 3686.5, 60 sec: 3549.9, 300 sec: 3568.4). Total num frames: 2691072. Throughput: 0: 882.2. Samples: 672806. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:55:05,021][00444] Avg episode reward: [(0, '20.554')] |
|
[2024-11-10 09:55:08,834][04571] Updated weights for policy 0, policy_version 660 (0.0012) |
|
[2024-11-10 09:55:10,018][00444] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3582.3). Total num frames: 2707456. Throughput: 0: 881.2. Samples: 674670. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:55:10,023][00444] Avg episode reward: [(0, '20.408')] |
|
[2024-11-10 09:55:15,018][00444] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3582.3). Total num frames: 2727936. Throughput: 0: 888.6. Samples: 680700. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-11-10 09:55:15,024][00444] Avg episode reward: [(0, '21.110')] |
|
[2024-11-10 09:55:19,412][04571] Updated weights for policy 0, policy_version 670 (0.0012) |
|
[2024-11-10 09:55:20,018][00444] Fps is (10 sec: 3686.4, 60 sec: 3550.0, 300 sec: 3568.4). Total num frames: 2744320. Throughput: 0: 885.5. Samples: 686192. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:55:20,022][00444] Avg episode reward: [(0, '21.047')] |
|
[2024-11-10 09:55:25,018][00444] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3582.3). Total num frames: 2760704. Throughput: 0: 887.5. Samples: 688200. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:55:25,022][00444] Avg episode reward: [(0, '21.279')] |
|
[2024-11-10 09:55:30,018][00444] Fps is (10 sec: 3686.4, 60 sec: 3550.0, 300 sec: 3582.3). Total num frames: 2781184. Throughput: 0: 897.2. Samples: 694370. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:55:30,022][00444] Avg episode reward: [(0, '20.567')] |
|
[2024-11-10 09:55:30,500][04571] Updated weights for policy 0, policy_version 680 (0.0012) |
|
[2024-11-10 09:55:35,019][00444] Fps is (10 sec: 3686.0, 60 sec: 3549.8, 300 sec: 3568.4). Total num frames: 2797568. Throughput: 0: 887.5. Samples: 699602. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-11-10 09:55:35,022][00444] Avg episode reward: [(0, '20.719')] |
|
[2024-11-10 09:55:40,018][00444] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3582.3). Total num frames: 2813952. Throughput: 0: 892.6. Samples: 701688. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:55:40,020][00444] Avg episode reward: [(0, '21.586')] |
|
[2024-11-10 09:55:42,399][04571] Updated weights for policy 0, policy_version 690 (0.0013) |
|
[2024-11-10 09:55:45,018][00444] Fps is (10 sec: 3686.8, 60 sec: 3549.9, 300 sec: 3582.3). Total num frames: 2834432. Throughput: 0: 898.4. Samples: 707768. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:55:45,025][00444] Avg episode reward: [(0, '20.252')] |
|
[2024-11-10 09:55:50,018][00444] Fps is (10 sec: 3686.3, 60 sec: 3549.9, 300 sec: 3568.4). Total num frames: 2850816. Throughput: 0: 889.6. Samples: 712840. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-11-10 09:55:50,020][00444] Avg episode reward: [(0, '21.457')] |
|
[2024-11-10 09:55:54,280][04571] Updated weights for policy 0, policy_version 700 (0.0016) |
|
[2024-11-10 09:55:55,018][00444] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3582.3). Total num frames: 2867200. Throughput: 0: 896.5. Samples: 715014. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-11-10 09:55:55,023][00444] Avg episode reward: [(0, '21.559')] |
|
[2024-11-10 09:55:55,034][04558] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000700_2867200.pth... |
|
[2024-11-10 09:55:55,131][04558] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000491_2011136.pth |
|
[2024-11-10 09:56:00,021][00444] Fps is (10 sec: 3685.3, 60 sec: 3549.7, 300 sec: 3582.2). Total num frames: 2887680. Throughput: 0: 900.0. Samples: 721204. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-11-10 09:56:00,023][00444] Avg episode reward: [(0, '20.793')] |
|
[2024-11-10 09:56:05,021][00444] Fps is (10 sec: 3685.3, 60 sec: 3549.7, 300 sec: 3568.3). Total num frames: 2904064. Throughput: 0: 891.2. Samples: 726300. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:56:05,026][00444] Avg episode reward: [(0, '19.899')] |
|
[2024-11-10 09:56:05,808][04571] Updated weights for policy 0, policy_version 710 (0.0015) |
|
[2024-11-10 09:56:10,018][00444] Fps is (10 sec: 3687.6, 60 sec: 3618.1, 300 sec: 3582.3). Total num frames: 2924544. Throughput: 0: 898.3. Samples: 728624. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:56:10,020][00444] Avg episode reward: [(0, '20.836')] |
|
[2024-11-10 09:56:15,018][00444] Fps is (10 sec: 4097.2, 60 sec: 3618.1, 300 sec: 3596.2). Total num frames: 2945024. Throughput: 0: 900.3. Samples: 734882. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:56:15,020][00444] Avg episode reward: [(0, '20.692')] |
|
[2024-11-10 09:56:15,950][04571] Updated weights for policy 0, policy_version 720 (0.0013) |
|
[2024-11-10 09:56:20,018][00444] Fps is (10 sec: 3276.7, 60 sec: 3549.9, 300 sec: 3568.4). Total num frames: 2957312. Throughput: 0: 891.0. Samples: 739698. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-11-10 09:56:20,021][00444] Avg episode reward: [(0, '20.546')] |
|
[2024-11-10 09:56:25,018][00444] Fps is (10 sec: 3276.9, 60 sec: 3618.1, 300 sec: 3582.3). Total num frames: 2977792. Throughput: 0: 896.9. Samples: 742050. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:56:25,021][00444] Avg episode reward: [(0, '20.224')] |
|
[2024-11-10 09:56:27,921][04571] Updated weights for policy 0, policy_version 730 (0.0012) |
|
[2024-11-10 09:56:30,018][00444] Fps is (10 sec: 4096.0, 60 sec: 3618.1, 300 sec: 3582.3). Total num frames: 2998272. Throughput: 0: 898.6. Samples: 748206. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:56:30,020][00444] Avg episode reward: [(0, '21.143')] |
|
[2024-11-10 09:56:35,023][00444] Fps is (10 sec: 3275.3, 60 sec: 3549.7, 300 sec: 3568.3). Total num frames: 3010560. Throughput: 0: 894.3. Samples: 753086. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-11-10 09:56:35,025][00444] Avg episode reward: [(0, '21.049')] |
|
[2024-11-10 09:56:39,679][04571] Updated weights for policy 0, policy_version 740 (0.0012) |
|
[2024-11-10 09:56:40,018][00444] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3582.3). Total num frames: 3031040. Throughput: 0: 902.7. Samples: 755634. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-11-10 09:56:40,022][00444] Avg episode reward: [(0, '22.212')] |
|
[2024-11-10 09:56:45,020][00444] Fps is (10 sec: 4097.0, 60 sec: 3618.0, 300 sec: 3582.2). Total num frames: 3051520. Throughput: 0: 902.6. Samples: 761822. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-11-10 09:56:45,022][00444] Avg episode reward: [(0, '22.581')] |
|
[2024-11-10 09:56:50,020][00444] Fps is (10 sec: 3276.2, 60 sec: 3549.8, 300 sec: 3568.4). Total num frames: 3063808. Throughput: 0: 896.1. Samples: 766622. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:56:50,027][00444] Avg episode reward: [(0, '23.222')] |
|
[2024-11-10 09:56:51,349][04571] Updated weights for policy 0, policy_version 750 (0.0017) |
|
[2024-11-10 09:56:55,018][00444] Fps is (10 sec: 3277.5, 60 sec: 3618.1, 300 sec: 3582.3). Total num frames: 3084288. Throughput: 0: 901.5. Samples: 769190. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:56:55,021][00444] Avg episode reward: [(0, '22.868')] |
|
[2024-11-10 09:57:00,018][00444] Fps is (10 sec: 4096.8, 60 sec: 3618.3, 300 sec: 3582.3). Total num frames: 3104768. Throughput: 0: 900.4. Samples: 775398. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:57:00,023][00444] Avg episode reward: [(0, '24.077')] |
|
[2024-11-10 09:57:01,795][04571] Updated weights for policy 0, policy_version 760 (0.0012) |
|
[2024-11-10 09:57:05,018][00444] Fps is (10 sec: 3276.8, 60 sec: 3550.0, 300 sec: 3582.3). Total num frames: 3117056. Throughput: 0: 895.2. Samples: 779984. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-11-10 09:57:05,026][00444] Avg episode reward: [(0, '22.484')] |
|
[2024-11-10 09:57:10,018][00444] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3582.3). Total num frames: 3137536. Throughput: 0: 903.2. Samples: 782696. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-11-10 09:57:10,024][00444] Avg episode reward: [(0, '22.647')] |
|
[2024-11-10 09:57:13,059][04571] Updated weights for policy 0, policy_version 770 (0.0012) |
|
[2024-11-10 09:57:15,018][00444] Fps is (10 sec: 4505.6, 60 sec: 3618.1, 300 sec: 3596.1). Total num frames: 3162112. Throughput: 0: 905.4. Samples: 788948. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-11-10 09:57:15,025][00444] Avg episode reward: [(0, '22.609')] |
|
[2024-11-10 09:57:20,018][00444] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3582.3). Total num frames: 3174400. Throughput: 0: 901.1. Samples: 793630. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-11-10 09:57:20,024][00444] Avg episode reward: [(0, '23.238')] |
|
[2024-11-10 09:57:24,757][04571] Updated weights for policy 0, policy_version 780 (0.0013) |
|
[2024-11-10 09:57:25,018][00444] Fps is (10 sec: 3276.7, 60 sec: 3618.1, 300 sec: 3596.1). Total num frames: 3194880. Throughput: 0: 908.2. Samples: 796502. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:57:25,022][00444] Avg episode reward: [(0, '23.807')] |
|
[2024-11-10 09:57:30,019][00444] Fps is (10 sec: 4095.7, 60 sec: 3618.1, 300 sec: 3596.1). Total num frames: 3215360. Throughput: 0: 909.0. Samples: 802724. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:57:30,028][00444] Avg episode reward: [(0, '23.031')] |
|
[2024-11-10 09:57:35,018][00444] Fps is (10 sec: 3276.8, 60 sec: 3618.4, 300 sec: 3582.3). Total num frames: 3227648. Throughput: 0: 901.5. Samples: 807186. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-11-10 09:57:35,023][00444] Avg episode reward: [(0, '24.226')] |
|
[2024-11-10 09:57:36,379][04571] Updated weights for policy 0, policy_version 790 (0.0012) |
|
[2024-11-10 09:57:40,018][00444] Fps is (10 sec: 3277.1, 60 sec: 3618.1, 300 sec: 3596.1). Total num frames: 3248128. Throughput: 0: 911.8. Samples: 810222. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:57:40,023][00444] Avg episode reward: [(0, '23.810')] |
|
[2024-11-10 09:57:45,018][00444] Fps is (10 sec: 4096.1, 60 sec: 3618.3, 300 sec: 3596.1). Total num frames: 3268608. Throughput: 0: 912.4. Samples: 816456. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-11-10 09:57:45,022][00444] Avg episode reward: [(0, '21.858')] |
|
[2024-11-10 09:57:47,042][04571] Updated weights for policy 0, policy_version 800 (0.0012) |
|
[2024-11-10 09:57:50,018][00444] Fps is (10 sec: 3276.8, 60 sec: 3618.3, 300 sec: 3582.3). Total num frames: 3280896. Throughput: 0: 906.6. Samples: 820782. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-11-10 09:57:50,023][00444] Avg episode reward: [(0, '22.662')] |
|
[2024-11-10 09:57:55,018][00444] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3596.2). Total num frames: 3301376. Throughput: 0: 913.7. Samples: 823812. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:57:55,025][00444] Avg episode reward: [(0, '21.361')] |
|
[2024-11-10 09:57:55,035][04558] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000806_3301376.pth... |
|
[2024-11-10 09:57:55,154][04558] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000595_2437120.pth |
|
[2024-11-10 09:57:58,087][04571] Updated weights for policy 0, policy_version 810 (0.0012) |
|
[2024-11-10 09:58:00,020][00444] Fps is (10 sec: 4095.2, 60 sec: 3618.0, 300 sec: 3596.2). Total num frames: 3321856. Throughput: 0: 909.8. Samples: 829890. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-11-10 09:58:00,025][00444] Avg episode reward: [(0, '21.613')] |
|
[2024-11-10 09:58:05,018][00444] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3596.1). Total num frames: 3338240. Throughput: 0: 902.0. Samples: 834222. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-11-10 09:58:05,025][00444] Avg episode reward: [(0, '20.318')] |
|
[2024-11-10 09:58:09,918][04571] Updated weights for policy 0, policy_version 820 (0.0011) |
|
[2024-11-10 09:58:10,018][00444] Fps is (10 sec: 3687.2, 60 sec: 3686.4, 300 sec: 3596.2). Total num frames: 3358720. Throughput: 0: 906.3. Samples: 837284. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:58:10,022][00444] Avg episode reward: [(0, '21.736')] |
|
[2024-11-10 09:58:15,018][00444] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3582.3). Total num frames: 3375104. Throughput: 0: 905.4. Samples: 843466. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:58:15,023][00444] Avg episode reward: [(0, '20.556')] |
|
[2024-11-10 09:58:20,018][00444] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3596.2). Total num frames: 3391488. Throughput: 0: 903.6. Samples: 847850. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:58:20,024][00444] Avg episode reward: [(0, '19.989')] |
|
[2024-11-10 09:58:21,583][04571] Updated weights for policy 0, policy_version 830 (0.0013) |
|
[2024-11-10 09:58:25,018][00444] Fps is (10 sec: 3686.4, 60 sec: 3618.1, 300 sec: 3596.2). Total num frames: 3411968. Throughput: 0: 905.4. Samples: 850964. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:58:25,023][00444] Avg episode reward: [(0, '21.079')] |
|
[2024-11-10 09:58:30,018][00444] Fps is (10 sec: 4096.0, 60 sec: 3618.2, 300 sec: 3596.2). Total num frames: 3432448. Throughput: 0: 905.3. Samples: 857194. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-11-10 09:58:30,023][00444] Avg episode reward: [(0, '21.446')] |
|
[2024-11-10 09:58:33,163][04571] Updated weights for policy 0, policy_version 840 (0.0012) |
|
[2024-11-10 09:58:35,018][00444] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3596.1). Total num frames: 3444736. Throughput: 0: 902.8. Samples: 861406. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-11-10 09:58:35,022][00444] Avg episode reward: [(0, '19.578')] |
|
[2024-11-10 09:58:40,018][00444] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3596.2). Total num frames: 3465216. Throughput: 0: 904.5. Samples: 864516. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-11-10 09:58:40,026][00444] Avg episode reward: [(0, '19.118')] |
|
[2024-11-10 09:58:43,343][04571] Updated weights for policy 0, policy_version 850 (0.0012) |
|
[2024-11-10 09:58:45,018][00444] Fps is (10 sec: 4096.0, 60 sec: 3618.1, 300 sec: 3596.2). Total num frames: 3485696. Throughput: 0: 906.2. Samples: 870668. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:58:45,027][00444] Avg episode reward: [(0, '19.816')] |
|
[2024-11-10 09:58:50,018][00444] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3610.0). Total num frames: 3502080. Throughput: 0: 908.7. Samples: 875114. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-11-10 09:58:50,022][00444] Avg episode reward: [(0, '19.133')] |
|
[2024-11-10 09:58:55,015][04571] Updated weights for policy 0, policy_version 860 (0.0012) |
|
[2024-11-10 09:58:55,025][00444] Fps is (10 sec: 3684.0, 60 sec: 3686.0, 300 sec: 3610.0). Total num frames: 3522560. Throughput: 0: 906.6. Samples: 878088. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-11-10 09:58:55,034][00444] Avg episode reward: [(0, '21.134')] |
|
[2024-11-10 09:59:00,019][00444] Fps is (10 sec: 3686.0, 60 sec: 3618.2, 300 sec: 3596.1). Total num frames: 3538944. Throughput: 0: 900.9. Samples: 884008. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-11-10 09:59:00,022][00444] Avg episode reward: [(0, '22.147')] |
|
[2024-11-10 09:59:05,018][00444] Fps is (10 sec: 3279.0, 60 sec: 3618.1, 300 sec: 3610.0). Total num frames: 3555328. Throughput: 0: 907.8. Samples: 888700. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:59:05,024][00444] Avg episode reward: [(0, '22.459')] |
|
[2024-11-10 09:59:06,773][04571] Updated weights for policy 0, policy_version 870 (0.0010) |
|
[2024-11-10 09:59:10,018][00444] Fps is (10 sec: 3686.8, 60 sec: 3618.1, 300 sec: 3610.0). Total num frames: 3575808. Throughput: 0: 906.0. Samples: 891736. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:59:10,021][00444] Avg episode reward: [(0, '23.154')] |
|
[2024-11-10 09:59:15,019][00444] Fps is (10 sec: 3686.3, 60 sec: 3618.1, 300 sec: 3596.2). Total num frames: 3592192. Throughput: 0: 893.7. Samples: 897410. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:59:15,025][00444] Avg episode reward: [(0, '23.059')] |
|
[2024-11-10 09:59:18,372][04571] Updated weights for policy 0, policy_version 880 (0.0012) |
|
[2024-11-10 09:59:20,018][00444] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3596.1). Total num frames: 3608576. Throughput: 0: 910.3. Samples: 902370. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-11-10 09:59:20,022][00444] Avg episode reward: [(0, '23.405')] |
|
[2024-11-10 09:59:25,018][00444] Fps is (10 sec: 3686.5, 60 sec: 3618.1, 300 sec: 3596.2). Total num frames: 3629056. Throughput: 0: 909.7. Samples: 905452. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:59:25,026][00444] Avg episode reward: [(0, '23.189')] |
|
[2024-11-10 09:59:29,314][04571] Updated weights for policy 0, policy_version 890 (0.0013) |
|
[2024-11-10 09:59:30,019][00444] Fps is (10 sec: 3685.9, 60 sec: 3549.8, 300 sec: 3596.1). Total num frames: 3645440. Throughput: 0: 896.0. Samples: 910990. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-11-10 09:59:30,022][00444] Avg episode reward: [(0, '23.307')] |
|
[2024-11-10 09:59:35,021][00444] Fps is (10 sec: 3685.2, 60 sec: 3686.2, 300 sec: 3610.0). Total num frames: 3665920. Throughput: 0: 911.8. Samples: 916150. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 09:59:35,023][00444] Avg episode reward: [(0, '24.070')] |
|
[2024-11-10 09:59:39,756][04571] Updated weights for policy 0, policy_version 900 (0.0012) |
|
[2024-11-10 09:59:40,021][00444] Fps is (10 sec: 4095.6, 60 sec: 3686.3, 300 sec: 3610.0). Total num frames: 3686400. Throughput: 0: 914.8. Samples: 919252. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-11-10 09:59:40,023][00444] Avg episode reward: [(0, '25.930')] |
|
[2024-11-10 09:59:40,032][04558] Saving new best policy, reward=25.930! |
|
[2024-11-10 09:59:45,018][00444] Fps is (10 sec: 3277.8, 60 sec: 3549.9, 300 sec: 3596.1). Total num frames: 3698688. Throughput: 0: 899.8. Samples: 924498. Policy #0 lag: (min: 0.0, avg: 0.3, max: 1.0) |
|
[2024-11-10 09:59:45,021][00444] Avg episode reward: [(0, '26.050')] |
|
[2024-11-10 09:59:45,035][04558] Saving new best policy, reward=26.050! |
|
[2024-11-10 09:59:50,018][00444] Fps is (10 sec: 3277.6, 60 sec: 3618.1, 300 sec: 3610.0). Total num frames: 3719168. Throughput: 0: 914.5. Samples: 929854. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-11-10 09:59:50,023][00444] Avg episode reward: [(0, '26.853')] |
|
[2024-11-10 09:59:50,027][04558] Saving new best policy, reward=26.853! |
|
[2024-11-10 09:59:51,526][04571] Updated weights for policy 0, policy_version 910 (0.0012) |
|
[2024-11-10 09:59:55,018][00444] Fps is (10 sec: 4096.0, 60 sec: 3618.5, 300 sec: 3610.0). Total num frames: 3739648. Throughput: 0: 914.4. Samples: 932882. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-11-10 09:59:55,021][00444] Avg episode reward: [(0, '26.611')] |
|
[2024-11-10 09:59:55,030][04558] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000913_3739648.pth... |
|
[2024-11-10 09:59:55,152][04558] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000700_2867200.pth |
|
[2024-11-10 10:00:00,018][00444] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3596.1). Total num frames: 3751936. Throughput: 0: 896.6. Samples: 937758. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 10:00:00,024][00444] Avg episode reward: [(0, '27.177')] |
|
[2024-11-10 10:00:00,030][04558] Saving new best policy, reward=27.177! |
|
[2024-11-10 10:00:03,423][04571] Updated weights for policy 0, policy_version 920 (0.0012) |
|
[2024-11-10 10:00:05,018][00444] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3610.0). Total num frames: 3772416. Throughput: 0: 909.1. Samples: 943280. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 10:00:05,026][00444] Avg episode reward: [(0, '26.174')] |
|
[2024-11-10 10:00:10,018][00444] Fps is (10 sec: 4096.0, 60 sec: 3618.1, 300 sec: 3610.0). Total num frames: 3792896. Throughput: 0: 907.9. Samples: 946308. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 10:00:10,020][00444] Avg episode reward: [(0, '26.451')] |
|
[2024-11-10 10:00:15,018][00444] Fps is (10 sec: 3276.8, 60 sec: 3549.9, 300 sec: 3596.1). Total num frames: 3805184. Throughput: 0: 892.8. Samples: 951164. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 10:00:15,020][00444] Avg episode reward: [(0, '25.977')] |
|
[2024-11-10 10:00:15,056][04571] Updated weights for policy 0, policy_version 930 (0.0012) |
|
[2024-11-10 10:00:20,018][00444] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3623.9). Total num frames: 3829760. Throughput: 0: 908.8. Samples: 957044. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-11-10 10:00:20,021][00444] Avg episode reward: [(0, '24.198')] |
|
[2024-11-10 10:00:25,018][00444] Fps is (10 sec: 4096.0, 60 sec: 3618.1, 300 sec: 3610.0). Total num frames: 3846144. Throughput: 0: 909.0. Samples: 960156. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 10:00:25,020][00444] Avg episode reward: [(0, '24.409')] |
|
[2024-11-10 10:00:25,248][04571] Updated weights for policy 0, policy_version 940 (0.0012) |
|
[2024-11-10 10:00:30,018][00444] Fps is (10 sec: 3276.8, 60 sec: 3618.2, 300 sec: 3610.0). Total num frames: 3862528. Throughput: 0: 896.1. Samples: 964822. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 10:00:30,021][00444] Avg episode reward: [(0, '25.948')] |
|
[2024-11-10 10:00:35,018][00444] Fps is (10 sec: 3686.4, 60 sec: 3618.3, 300 sec: 3623.9). Total num frames: 3883008. Throughput: 0: 911.1. Samples: 970852. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 10:00:35,020][00444] Avg episode reward: [(0, '25.868')] |
|
[2024-11-10 10:00:36,434][04571] Updated weights for policy 0, policy_version 950 (0.0013) |
|
[2024-11-10 10:00:40,025][00444] Fps is (10 sec: 3683.8, 60 sec: 3549.6, 300 sec: 3609.9). Total num frames: 3899392. Throughput: 0: 911.6. Samples: 973912. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 10:00:40,028][00444] Avg episode reward: [(0, '25.872')] |
|
[2024-11-10 10:00:45,018][00444] Fps is (10 sec: 3276.8, 60 sec: 3618.1, 300 sec: 3610.0). Total num frames: 3915776. Throughput: 0: 900.0. Samples: 978258. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 10:00:45,021][00444] Avg episode reward: [(0, '25.783')] |
|
[2024-11-10 10:00:48,224][04571] Updated weights for policy 0, policy_version 960 (0.0012) |
|
[2024-11-10 10:00:50,018][00444] Fps is (10 sec: 3689.1, 60 sec: 3618.1, 300 sec: 3623.9). Total num frames: 3936256. Throughput: 0: 913.9. Samples: 984404. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-11-10 10:00:50,020][00444] Avg episode reward: [(0, '26.009')] |
|
[2024-11-10 10:00:55,018][00444] Fps is (10 sec: 3686.4, 60 sec: 3549.9, 300 sec: 3610.1). Total num frames: 3952640. Throughput: 0: 914.9. Samples: 987478. Policy #0 lag: (min: 0.0, avg: 0.1, max: 1.0) |
|
[2024-11-10 10:00:55,025][00444] Avg episode reward: [(0, '26.484')] |
|
[2024-11-10 10:00:59,835][04571] Updated weights for policy 0, policy_version 970 (0.0012) |
|
[2024-11-10 10:01:00,018][00444] Fps is (10 sec: 3686.4, 60 sec: 3686.4, 300 sec: 3624.0). Total num frames: 3973120. Throughput: 0: 905.1. Samples: 991894. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 10:01:00,021][00444] Avg episode reward: [(0, '24.509')] |
|
[2024-11-10 10:01:05,018][00444] Fps is (10 sec: 4096.0, 60 sec: 3686.4, 300 sec: 3623.9). Total num frames: 3993600. Throughput: 0: 912.0. Samples: 998086. Policy #0 lag: (min: 0.0, avg: 0.2, max: 1.0) |
|
[2024-11-10 10:01:05,020][00444] Avg episode reward: [(0, '24.674')] |
|
[2024-11-10 10:01:08,089][04558] Stopping Batcher_0... |
|
[2024-11-10 10:01:08,090][04558] Loop batcher_evt_loop terminating... |
|
[2024-11-10 10:01:08,090][00444] Component Batcher_0 stopped! |
|
[2024-11-10 10:01:08,095][00444] Component RolloutWorker_w2 process died already! Don't wait for it. |
|
[2024-11-10 10:01:08,100][00444] Component RolloutWorker_w4 process died already! Don't wait for it. |
|
[2024-11-10 10:01:08,102][00444] Component RolloutWorker_w5 process died already! Don't wait for it. |
|
[2024-11-10 10:01:08,108][04558] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2024-11-10 10:01:08,106][00444] Component RolloutWorker_w6 process died already! Don't wait for it. |
|
[2024-11-10 10:01:08,110][00444] Component RolloutWorker_w7 process died already! Don't wait for it. |
|
[2024-11-10 10:01:08,163][04571] Weights refcount: 2 0 |
|
[2024-11-10 10:01:08,168][00444] Component InferenceWorker_p0-w0 stopped! |
|
[2024-11-10 10:01:08,173][04571] Stopping InferenceWorker_p0-w0... |
|
[2024-11-10 10:01:08,174][04571] Loop inference_proc0-0_evt_loop terminating... |
|
[2024-11-10 10:01:08,267][04558] Removing /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000806_3301376.pth |
|
[2024-11-10 10:01:08,280][04558] Saving /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2024-11-10 10:01:08,463][00444] Component RolloutWorker_w0 stopped! |
|
[2024-11-10 10:01:08,463][04572] Stopping RolloutWorker_w0... |
|
[2024-11-10 10:01:08,483][04572] Loop rollout_proc0_evt_loop terminating... |
|
[2024-11-10 10:01:08,517][00444] Component LearnerWorker_p0 stopped! |
|
[2024-11-10 10:01:08,519][04558] Stopping LearnerWorker_p0... |
|
[2024-11-10 10:01:08,523][04558] Loop learner_proc0_evt_loop terminating... |
|
[2024-11-10 10:01:08,862][04573] Stopping RolloutWorker_w1... |
|
[2024-11-10 10:01:08,862][04573] Loop rollout_proc1_evt_loop terminating... |
|
[2024-11-10 10:01:08,862][00444] Component RolloutWorker_w1 stopped! |
|
[2024-11-10 10:01:08,922][04574] Stopping RolloutWorker_w3... |
|
[2024-11-10 10:01:08,923][04574] Loop rollout_proc3_evt_loop terminating... |
|
[2024-11-10 10:01:08,919][00444] Component RolloutWorker_w3 stopped! |
|
[2024-11-10 10:01:08,923][00444] Waiting for process learner_proc0 to stop... |
|
[2024-11-10 10:01:10,214][00444] Waiting for process inference_proc0-0 to join... |
|
[2024-11-10 10:01:10,563][00444] Waiting for process rollout_proc0 to join... |
|
[2024-11-10 10:01:11,237][00444] Waiting for process rollout_proc1 to join... |
|
[2024-11-10 10:01:11,298][00444] Waiting for process rollout_proc2 to join... |
|
[2024-11-10 10:01:11,299][00444] Waiting for process rollout_proc3 to join... |
|
[2024-11-10 10:01:11,302][00444] Waiting for process rollout_proc4 to join... |
|
[2024-11-10 10:01:11,304][00444] Waiting for process rollout_proc5 to join... |
|
[2024-11-10 10:01:11,306][00444] Waiting for process rollout_proc6 to join... |
|
[2024-11-10 10:01:11,309][00444] Waiting for process rollout_proc7 to join... |
|
[2024-11-10 10:01:11,310][00444] Batcher 0 profile tree view: |
|
batching: 20.5321, releasing_batches: 0.0239 |
|
[2024-11-10 10:01:11,312][00444] InferenceWorker_p0-w0 profile tree view: |
|
wait_policy: 0.0001 |
|
wait_policy_total: 477.9147 |
|
update_model: 9.6126 |
|
weight_update: 0.0012 |
|
one_step: 0.0054 |
|
handle_policy_step: 596.5470 |
|
deserialize: 15.9543, stack: 3.9302, obs_to_device_normalize: 137.3446, forward: 300.7170, send_messages: 21.7501 |
|
prepare_outputs: 85.1548 |
|
to_cpu: 52.3889 |
|
[2024-11-10 10:01:11,313][00444] Learner 0 profile tree view: |
|
misc: 0.0060, prepare_batch: 14.3056 |
|
train: 67.2994 |
|
epoch_init: 0.0082, minibatch_init: 0.0161, losses_postprocess: 0.4907, kl_divergence: 0.4725, after_optimizer: 32.4475 |
|
calculate_losses: 21.1069 |
|
losses_init: 0.0036, forward_head: 1.4772, bptt_initial: 14.2606, tail: 0.8467, advantages_returns: 0.2096, losses: 2.3386 |
|
bptt: 1.7102 |
|
bptt_forward_core: 1.6330 |
|
update: 12.2537 |
|
clip: 1.3981 |
|
[2024-11-10 10:01:11,315][00444] RolloutWorker_w0 profile tree view: |
|
wait_for_trajectories: 0.6133, enqueue_policy_requests: 330.0985, env_step: 627.5088, overhead: 28.7667, complete_rollouts: 3.6992 |
|
save_policy_outputs: 43.9476 |
|
split_output_tensors: 14.9385 |
|
[2024-11-10 10:01:11,317][00444] Loop Runner_EvtLoop terminating... |
|
[2024-11-10 10:01:11,319][00444] Runner profile tree view: |
|
main_loop: 1152.7260 |
|
[2024-11-10 10:01:11,321][00444] Collected {0: 4005888}, FPS: 3475.1 |
|
[2024-11-10 10:02:59,911][00444] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json |
|
[2024-11-10 10:02:59,913][00444] Overriding arg 'num_workers' with value 1 passed from command line |
|
[2024-11-10 10:02:59,915][00444] Adding new argument 'no_render'=True that is not in the saved config file! |
|
[2024-11-10 10:02:59,917][00444] Adding new argument 'save_video'=True that is not in the saved config file! |
|
[2024-11-10 10:02:59,919][00444] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file! |
|
[2024-11-10 10:02:59,921][00444] Adding new argument 'video_name'=None that is not in the saved config file! |
|
[2024-11-10 10:02:59,922][00444] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file! |
|
[2024-11-10 10:02:59,925][00444] Adding new argument 'max_num_episodes'=10 that is not in the saved config file! |
|
[2024-11-10 10:02:59,926][00444] Adding new argument 'push_to_hub'=False that is not in the saved config file! |
|
[2024-11-10 10:02:59,928][00444] Adding new argument 'hf_repository'=None that is not in the saved config file! |
|
[2024-11-10 10:02:59,929][00444] Adding new argument 'policy_index'=0 that is not in the saved config file! |
|
[2024-11-10 10:02:59,930][00444] Adding new argument 'eval_deterministic'=False that is not in the saved config file! |
|
[2024-11-10 10:02:59,931][00444] Adding new argument 'train_script'=None that is not in the saved config file! |
|
[2024-11-10 10:02:59,932][00444] Adding new argument 'enjoy_script'=None that is not in the saved config file! |
|
[2024-11-10 10:02:59,934][00444] Using frameskip 1 and render_action_repeat=4 for evaluation |
|
[2024-11-10 10:02:59,952][00444] Doom resolution: 160x120, resize resolution: (128, 72) |
|
[2024-11-10 10:02:59,954][00444] RunningMeanStd input shape: (3, 72, 128) |
|
[2024-11-10 10:02:59,956][00444] RunningMeanStd input shape: (1,) |
|
[2024-11-10 10:02:59,972][00444] ConvEncoder: input_channels=3 |
|
[2024-11-10 10:03:00,088][00444] Conv encoder output size: 512 |
|
[2024-11-10 10:03:00,090][00444] Policy head output size: 512 |
|
[2024-11-10 10:03:01,764][00444] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth... |
|
[2024-11-10 10:03:02,611][00444] Num frames 100... |
|
[2024-11-10 10:03:02,729][00444] Num frames 200...
[2024-11-10 10:03:02,861][00444] Num frames 300...
[2024-11-10 10:03:02,985][00444] Num frames 400...
[2024-11-10 10:03:03,099][00444] Num frames 500...
[2024-11-10 10:03:03,223][00444] Num frames 600...
[2024-11-10 10:03:03,340][00444] Num frames 700...
[2024-11-10 10:03:03,463][00444] Num frames 800...
[2024-11-10 10:03:03,584][00444] Num frames 900...
[2024-11-10 10:03:03,704][00444] Num frames 1000...
[2024-11-10 10:03:03,835][00444] Num frames 1100...
[2024-11-10 10:03:03,966][00444] Num frames 1200...
[2024-11-10 10:03:04,090][00444] Num frames 1300...
[2024-11-10 10:03:04,212][00444] Num frames 1400...
[2024-11-10 10:03:04,334][00444] Num frames 1500...
[2024-11-10 10:03:04,456][00444] Num frames 1600...
[2024-11-10 10:03:04,583][00444] Num frames 1700...
[2024-11-10 10:03:04,704][00444] Num frames 1800...
[2024-11-10 10:03:04,842][00444] Num frames 1900...
[2024-11-10 10:03:05,002][00444] Num frames 2000...
[2024-11-10 10:03:05,124][00444] Num frames 2100...
[2024-11-10 10:03:05,178][00444] Avg episode rewards: #0: 57.999, true rewards: #0: 21.000
[2024-11-10 10:03:05,180][00444] Avg episode reward: 57.999, avg true_objective: 21.000
[2024-11-10 10:03:05,304][00444] Num frames 2200...
[2024-11-10 10:03:05,473][00444] Num frames 2300...
[2024-11-10 10:03:05,642][00444] Num frames 2400...
[2024-11-10 10:03:05,804][00444] Num frames 2500...
[2024-11-10 10:03:05,971][00444] Num frames 2600...
[2024-11-10 10:03:06,133][00444] Num frames 2700...
[2024-11-10 10:03:06,300][00444] Num frames 2800...
[2024-11-10 10:03:06,462][00444] Num frames 2900...
[2024-11-10 10:03:06,631][00444] Num frames 3000...
[2024-11-10 10:03:06,807][00444] Num frames 3100...
[2024-11-10 10:03:06,982][00444] Num frames 3200...
[2024-11-10 10:03:07,202][00444] Avg episode rewards: #0: 45.959, true rewards: #0: 16.460
[2024-11-10 10:03:07,204][00444] Avg episode reward: 45.959, avg true_objective: 16.460
[2024-11-10 10:03:07,222][00444] Num frames 3300...
[2024-11-10 10:03:07,393][00444] Num frames 3400...
[2024-11-10 10:03:07,568][00444] Num frames 3500...
[2024-11-10 10:03:07,743][00444] Num frames 3600...
[2024-11-10 10:03:07,905][00444] Num frames 3700...
[2024-11-10 10:03:08,039][00444] Num frames 3800...
[2024-11-10 10:03:08,164][00444] Num frames 3900...
[2024-11-10 10:03:08,286][00444] Num frames 4000...
[2024-11-10 10:03:08,405][00444] Num frames 4100...
[2024-11-10 10:03:08,524][00444] Num frames 4200...
[2024-11-10 10:03:08,643][00444] Num frames 4300...
[2024-11-10 10:03:08,734][00444] Avg episode rewards: #0: 38.763, true rewards: #0: 14.430
[2024-11-10 10:03:08,735][00444] Avg episode reward: 38.763, avg true_objective: 14.430
[2024-11-10 10:03:08,826][00444] Num frames 4400...
[2024-11-10 10:03:08,946][00444] Num frames 4500...
[2024-11-10 10:03:09,073][00444] Num frames 4600...
[2024-11-10 10:03:09,198][00444] Num frames 4700...
[2024-11-10 10:03:09,316][00444] Num frames 4800...
[2024-11-10 10:03:09,436][00444] Num frames 4900...
[2024-11-10 10:03:09,560][00444] Num frames 5000...
[2024-11-10 10:03:09,679][00444] Num frames 5100...
[2024-11-10 10:03:09,823][00444] Avg episode rewards: #0: 33.652, true rewards: #0: 12.902
[2024-11-10 10:03:09,825][00444] Avg episode reward: 33.652, avg true_objective: 12.902
[2024-11-10 10:03:09,871][00444] Num frames 5200...
[2024-11-10 10:03:09,994][00444] Num frames 5300...
[2024-11-10 10:03:10,121][00444] Num frames 5400...
[2024-11-10 10:03:10,241][00444] Num frames 5500...
[2024-11-10 10:03:10,362][00444] Num frames 5600...
[2024-11-10 10:03:10,487][00444] Num frames 5700...
[2024-11-10 10:03:10,608][00444] Num frames 5800...
[2024-11-10 10:03:10,728][00444] Num frames 5900...
[2024-11-10 10:03:10,855][00444] Num frames 6000...
[2024-11-10 10:03:10,976][00444] Num frames 6100...
[2024-11-10 10:03:11,102][00444] Num frames 6200...
[2024-11-10 10:03:11,225][00444] Num frames 6300...
[2024-11-10 10:03:11,345][00444] Num frames 6400...
[2024-11-10 10:03:11,464][00444] Num frames 6500...
[2024-11-10 10:03:11,586][00444] Num frames 6600...
[2024-11-10 10:03:11,712][00444] Num frames 6700...
[2024-11-10 10:03:11,841][00444] Num frames 6800...
[2024-11-10 10:03:11,969][00444] Num frames 6900...
[2024-11-10 10:03:12,089][00444] Num frames 7000...
[2024-11-10 10:03:12,224][00444] Num frames 7100...
[2024-11-10 10:03:12,347][00444] Num frames 7200...
[2024-11-10 10:03:12,452][00444] Avg episode rewards: #0: 37.082, true rewards: #0: 14.482
[2024-11-10 10:03:12,453][00444] Avg episode reward: 37.082, avg true_objective: 14.482
|
[2024-11-10 10:03:12,526][00444] Num frames 7300...
[2024-11-10 10:03:12,646][00444] Num frames 7400...
[2024-11-10 10:03:12,771][00444] Num frames 7500...
[2024-11-10 10:03:12,889][00444] Num frames 7600...
[2024-11-10 10:03:13,007][00444] Num frames 7700...
[2024-11-10 10:03:13,134][00444] Num frames 7800...
[2024-11-10 10:03:13,257][00444] Num frames 7900...
[2024-11-10 10:03:13,375][00444] Num frames 8000...
[2024-11-10 10:03:13,481][00444] Avg episode rewards: #0: 33.568, true rewards: #0: 13.402
[2024-11-10 10:03:13,482][00444] Avg episode reward: 33.568, avg true_objective: 13.402
[2024-11-10 10:03:13,554][00444] Num frames 8100...
[2024-11-10 10:03:13,676][00444] Num frames 8200...
[2024-11-10 10:03:13,801][00444] Num frames 8300...
[2024-11-10 10:03:13,921][00444] Num frames 8400...
[2024-11-10 10:03:14,038][00444] Num frames 8500...
[2024-11-10 10:03:14,127][00444] Avg episode rewards: #0: 29.897, true rewards: #0: 12.183
[2024-11-10 10:03:14,129][00444] Avg episode reward: 29.897, avg true_objective: 12.183
[2024-11-10 10:03:14,229][00444] Num frames 8600...
[2024-11-10 10:03:14,350][00444] Num frames 8700...
[2024-11-10 10:03:14,478][00444] Num frames 8800...
[2024-11-10 10:03:14,599][00444] Num frames 8900...
[2024-11-10 10:03:14,722][00444] Num frames 9000...
[2024-11-10 10:03:14,851][00444] Num frames 9100...
[2024-11-10 10:03:14,971][00444] Num frames 9200...
[2024-11-10 10:03:15,097][00444] Num frames 9300...
[2024-11-10 10:03:15,231][00444] Num frames 9400...
[2024-11-10 10:03:15,355][00444] Num frames 9500...
[2024-11-10 10:03:15,477][00444] Num frames 9600...
[2024-11-10 10:03:15,599][00444] Num frames 9700...
[2024-11-10 10:03:15,722][00444] Num frames 9800...
[2024-11-10 10:03:15,856][00444] Num frames 9900...
[2024-11-10 10:03:15,978][00444] Num frames 10000...
[2024-11-10 10:03:16,100][00444] Num frames 10100...
[2024-11-10 10:03:16,235][00444] Num frames 10200...
[2024-11-10 10:03:16,353][00444] Num frames 10300...
[2024-11-10 10:03:16,538][00444] Num frames 10400...
[2024-11-10 10:03:16,717][00444] Num frames 10500...
[2024-11-10 10:03:16,903][00444] Num frames 10600...
[2024-11-10 10:03:17,012][00444] Avg episode rewards: #0: 32.910, true rewards: #0: 13.285
[2024-11-10 10:03:17,022][00444] Avg episode reward: 32.910, avg true_objective: 13.285
[2024-11-10 10:03:17,202][00444] Num frames 10700...
[2024-11-10 10:03:17,410][00444] Num frames 10800...
[2024-11-10 10:03:17,613][00444] Num frames 10900...
[2024-11-10 10:03:17,802][00444] Num frames 11000...
[2024-11-10 10:03:18,086][00444] Num frames 11100...
[2024-11-10 10:03:18,560][00444] Num frames 11200...
[2024-11-10 10:03:18,948][00444] Num frames 11300...
[2024-11-10 10:03:19,109][00444] Num frames 11400...
[2024-11-10 10:03:19,267][00444] Num frames 11500...
[2024-11-10 10:03:19,452][00444] Num frames 11600...
[2024-11-10 10:03:19,627][00444] Num frames 11700...
[2024-11-10 10:03:19,797][00444] Num frames 11800...
[2024-11-10 10:03:19,975][00444] Num frames 11900...
[2024-11-10 10:03:20,155][00444] Num frames 12000...
[2024-11-10 10:03:20,329][00444] Num frames 12100...
[2024-11-10 10:03:20,510][00444] Num frames 12200...
[2024-11-10 10:03:20,689][00444] Num frames 12300...
[2024-11-10 10:03:20,823][00444] Num frames 12400...
[2024-11-10 10:03:20,938][00444] Num frames 12500...
[2024-11-10 10:03:21,095][00444] Avg episode rewards: #0: 35.200, true rewards: #0: 13.978
[2024-11-10 10:03:21,097][00444] Avg episode reward: 35.200, avg true_objective: 13.978
[2024-11-10 10:03:21,124][00444] Num frames 12600...
[2024-11-10 10:03:21,246][00444] Num frames 12700...
[2024-11-10 10:03:21,367][00444] Num frames 12800...
[2024-11-10 10:03:21,494][00444] Num frames 12900...
[2024-11-10 10:03:21,618][00444] Num frames 13000...
[2024-11-10 10:03:21,738][00444] Num frames 13100...
[2024-11-10 10:03:21,864][00444] Num frames 13200...
[2024-11-10 10:03:21,989][00444] Num frames 13300...
[2024-11-10 10:03:22,114][00444] Num frames 13400...
[2024-11-10 10:03:22,239][00444] Num frames 13500...
[2024-11-10 10:03:22,359][00444] Num frames 13600...
[2024-11-10 10:03:22,524][00444] Avg episode rewards: #0: 34.191, true rewards: #0: 13.691
[2024-11-10 10:03:22,526][00444] Avg episode reward: 34.191, avg true_objective: 13.691
[2024-11-10 10:04:20,479][00444] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
|
[2024-11-10 10:05:10,578][00444] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
[2024-11-10 10:05:10,580][00444] Overriding arg 'num_workers' with value 1 passed from command line
[2024-11-10 10:05:10,582][00444] Adding new argument 'no_render'=True that is not in the saved config file!
[2024-11-10 10:05:10,584][00444] Adding new argument 'save_video'=True that is not in the saved config file!
[2024-11-10 10:05:10,586][00444] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
[2024-11-10 10:05:10,588][00444] Adding new argument 'video_name'=None that is not in the saved config file!
[2024-11-10 10:05:10,589][00444] Adding new argument 'max_num_frames'=1000000000.0 that is not in the saved config file!
[2024-11-10 10:05:10,590][00444] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
[2024-11-10 10:05:10,593][00444] Adding new argument 'push_to_hub'=False that is not in the saved config file!
[2024-11-10 10:05:10,594][00444] Adding new argument 'hf_repository'=None that is not in the saved config file!
[2024-11-10 10:05:10,597][00444] Adding new argument 'policy_index'=0 that is not in the saved config file!
[2024-11-10 10:05:10,598][00444] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
[2024-11-10 10:05:10,599][00444] Adding new argument 'train_script'=None that is not in the saved config file!
[2024-11-10 10:05:10,600][00444] Adding new argument 'enjoy_script'=None that is not in the saved config file!
[2024-11-10 10:05:10,603][00444] Using frameskip 1 and render_action_repeat=4 for evaluation
[2024-11-10 10:05:10,626][00444] RunningMeanStd input shape: (3, 72, 128)
[2024-11-10 10:05:10,628][00444] RunningMeanStd input shape: (1,)
[2024-11-10 10:05:10,650][00444] ConvEncoder: input_channels=3
[2024-11-10 10:05:10,707][00444] Conv encoder output size: 512
[2024-11-10 10:05:10,709][00444] Policy head output size: 512
[2024-11-10 10:05:10,735][00444] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
|
[2024-11-10 10:05:11,450][00444] Num frames 100...
[2024-11-10 10:05:11,624][00444] Num frames 200...
[2024-11-10 10:05:11,768][00444] Num frames 300...
[2024-11-10 10:05:11,884][00444] Num frames 400...
[2024-11-10 10:05:12,003][00444] Num frames 500...
[2024-11-10 10:05:12,124][00444] Num frames 600...
[2024-11-10 10:05:12,240][00444] Num frames 700...
[2024-11-10 10:05:12,358][00444] Num frames 800...
[2024-11-10 10:05:12,476][00444] Num frames 900...
[2024-11-10 10:05:12,565][00444] Avg episode rewards: #0: 20.280, true rewards: #0: 9.280
[2024-11-10 10:05:12,566][00444] Avg episode reward: 20.280, avg true_objective: 9.280
[2024-11-10 10:05:12,655][00444] Num frames 1000...
[2024-11-10 10:05:12,784][00444] Num frames 1100...
[2024-11-10 10:05:12,903][00444] Num frames 1200...
[2024-11-10 10:05:13,020][00444] Num frames 1300...
[2024-11-10 10:05:13,136][00444] Num frames 1400...
[2024-11-10 10:05:13,254][00444] Num frames 1500...
[2024-11-10 10:05:13,371][00444] Num frames 1600...
[2024-11-10 10:05:13,490][00444] Num frames 1700...
[2024-11-10 10:05:13,608][00444] Num frames 1800...
[2024-11-10 10:05:13,725][00444] Num frames 1900...
[2024-11-10 10:05:13,862][00444] Num frames 2000...
[2024-11-10 10:05:14,021][00444] Avg episode rewards: #0: 24.455, true rewards: #0: 10.455
[2024-11-10 10:05:14,023][00444] Avg episode reward: 24.455, avg true_objective: 10.455
[2024-11-10 10:05:14,037][00444] Num frames 2100...
[2024-11-10 10:05:14,155][00444] Num frames 2200...
[2024-11-10 10:05:14,270][00444] Num frames 2300...
[2024-11-10 10:05:14,389][00444] Num frames 2400...
[2024-11-10 10:05:14,508][00444] Num frames 2500...
[2024-11-10 10:05:14,631][00444] Num frames 2600...
[2024-11-10 10:05:14,758][00444] Num frames 2700...
[2024-11-10 10:05:14,918][00444] Avg episode rewards: #0: 21.263, true rewards: #0: 9.263
[2024-11-10 10:05:14,919][00444] Avg episode reward: 21.263, avg true_objective: 9.263
[2024-11-10 10:05:14,948][00444] Num frames 2800...
[2024-11-10 10:05:15,067][00444] Num frames 2900...
[2024-11-10 10:05:15,186][00444] Num frames 3000...
[2024-11-10 10:05:15,304][00444] Num frames 3100...
[2024-11-10 10:05:15,424][00444] Num frames 3200...
[2024-11-10 10:05:15,542][00444] Num frames 3300...
[2024-11-10 10:05:15,662][00444] Num frames 3400...
[2024-11-10 10:05:15,790][00444] Num frames 3500...
[2024-11-10 10:05:15,918][00444] Num frames 3600...
[2024-11-10 10:05:15,988][00444] Avg episode rewards: #0: 21.028, true rewards: #0: 9.027
[2024-11-10 10:05:15,989][00444] Avg episode reward: 21.028, avg true_objective: 9.027
[2024-11-10 10:05:16,100][00444] Num frames 3700...
[2024-11-10 10:05:16,221][00444] Num frames 3800...
[2024-11-10 10:05:16,337][00444] Num frames 3900...
[2024-11-10 10:05:16,460][00444] Num frames 4000...
[2024-11-10 10:05:16,579][00444] Num frames 4100...
[2024-11-10 10:05:16,701][00444] Num frames 4200...
[2024-11-10 10:05:16,829][00444] Num frames 4300...
[2024-11-10 10:05:16,969][00444] Num frames 4400...
[2024-11-10 10:05:17,088][00444] Num frames 4500...
[2024-11-10 10:05:17,249][00444] Avg episode rewards: #0: 21.778, true rewards: #0: 9.178
[2024-11-10 10:05:17,251][00444] Avg episode reward: 21.778, avg true_objective: 9.178
[2024-11-10 10:05:17,267][00444] Num frames 4600...
[2024-11-10 10:05:17,385][00444] Num frames 4700...
[2024-11-10 10:05:17,509][00444] Num frames 4800...
[2024-11-10 10:05:17,627][00444] Num frames 4900...
[2024-11-10 10:05:17,749][00444] Num frames 5000...
[2024-11-10 10:05:17,889][00444] Num frames 5100...
[2024-11-10 10:05:17,983][00444] Avg episode rewards: #0: 20.055, true rewards: #0: 8.555
[2024-11-10 10:05:17,985][00444] Avg episode reward: 20.055, avg true_objective: 8.555
|
[2024-11-10 10:05:18,070][00444] Num frames 5200...
[2024-11-10 10:05:18,193][00444] Num frames 5300...
[2024-11-10 10:05:18,312][00444] Num frames 5400...
[2024-11-10 10:05:18,444][00444] Num frames 5500...
[2024-11-10 10:05:18,566][00444] Num frames 5600...
[2024-11-10 10:05:18,686][00444] Num frames 5700...
[2024-11-10 10:05:18,823][00444] Num frames 5800...
[2024-11-10 10:05:18,947][00444] Num frames 5900...
[2024-11-10 10:05:19,065][00444] Num frames 6000...
[2024-11-10 10:05:19,191][00444] Num frames 6100...
[2024-11-10 10:05:19,313][00444] Num frames 6200...
[2024-11-10 10:05:19,433][00444] Num frames 6300...
[2024-11-10 10:05:19,555][00444] Num frames 6400...
[2024-11-10 10:05:19,675][00444] Num frames 6500...
[2024-11-10 10:05:19,803][00444] Num frames 6600...
[2024-11-10 10:05:19,931][00444] Num frames 6700...
[2024-11-10 10:05:20,029][00444] Avg episode rewards: #0: 22.336, true rewards: #0: 9.621
[2024-11-10 10:05:20,030][00444] Avg episode reward: 22.336, avg true_objective: 9.621
[2024-11-10 10:05:20,109][00444] Num frames 6800...
[2024-11-10 10:05:20,226][00444] Num frames 6900...
[2024-11-10 10:05:20,346][00444] Num frames 7000...
[2024-11-10 10:05:20,468][00444] Num frames 7100...
[2024-11-10 10:05:20,586][00444] Num frames 7200...
[2024-11-10 10:05:20,698][00444] Avg episode rewards: #0: 20.559, true rewards: #0: 9.059
[2024-11-10 10:05:20,699][00444] Avg episode reward: 20.559, avg true_objective: 9.059
[2024-11-10 10:05:20,768][00444] Num frames 7300...
[2024-11-10 10:05:20,889][00444] Num frames 7400...
[2024-11-10 10:05:21,010][00444] Num frames 7500...
[2024-11-10 10:05:21,126][00444] Num frames 7600...
[2024-11-10 10:05:21,241][00444] Num frames 7700...
[2024-11-10 10:05:21,364][00444] Num frames 7800...
[2024-11-10 10:05:21,483][00444] Avg episode rewards: #0: 19.283, true rewards: #0: 8.728
[2024-11-10 10:05:21,486][00444] Avg episode reward: 19.283, avg true_objective: 8.728
[2024-11-10 10:05:21,538][00444] Num frames 7900...
[2024-11-10 10:05:21,658][00444] Num frames 8000...
[2024-11-10 10:05:21,833][00444] Num frames 8100...
[2024-11-10 10:05:22,006][00444] Num frames 8200...
[2024-11-10 10:05:22,167][00444] Num frames 8300...
[2024-11-10 10:05:22,326][00444] Num frames 8400...
[2024-11-10 10:05:22,486][00444] Num frames 8500...
[2024-11-10 10:05:22,655][00444] Num frames 8600...
[2024-11-10 10:05:22,825][00444] Num frames 8700...
[2024-11-10 10:05:22,985][00444] Num frames 8800...
[2024-11-10 10:05:23,162][00444] Num frames 8900...
[2024-11-10 10:05:23,326][00444] Num frames 9000...
[2024-11-10 10:05:23,498][00444] Num frames 9100...
[2024-11-10 10:05:23,677][00444] Num frames 9200...
[2024-11-10 10:05:23,854][00444] Num frames 9300...
[2024-11-10 10:05:24,020][00444] Num frames 9400...
[2024-11-10 10:05:24,202][00444] Num frames 9500...
[2024-11-10 10:05:24,332][00444] Avg episode rewards: #0: 21.448, true rewards: #0: 9.548
[2024-11-10 10:05:24,334][00444] Avg episode reward: 21.448, avg true_objective: 9.548
[2024-11-10 10:06:18,542][00444] Replay video saved to /content/train_dir/default_experiment/replay.mp4!
|
[2024-11-10 10:07:56,362][00444] Loading existing experiment configuration from /content/train_dir/default_experiment/config.json
[2024-11-10 10:07:56,364][00444] Overriding arg 'num_workers' with value 1 passed from command line
[2024-11-10 10:07:56,366][00444] Adding new argument 'no_render'=True that is not in the saved config file!
[2024-11-10 10:07:56,368][00444] Adding new argument 'save_video'=True that is not in the saved config file!
[2024-11-10 10:07:56,370][00444] Adding new argument 'video_frames'=1000000000.0 that is not in the saved config file!
[2024-11-10 10:07:56,371][00444] Adding new argument 'video_name'=None that is not in the saved config file!
[2024-11-10 10:07:56,373][00444] Adding new argument 'max_num_frames'=100000 that is not in the saved config file!
[2024-11-10 10:07:56,375][00444] Adding new argument 'max_num_episodes'=10 that is not in the saved config file!
[2024-11-10 10:07:56,376][00444] Adding new argument 'push_to_hub'=True that is not in the saved config file!
[2024-11-10 10:07:56,377][00444] Adding new argument 'hf_repository'='T0W1/rl_course_vizdoom_health_gathering_supreme' that is not in the saved config file!
[2024-11-10 10:07:56,380][00444] Adding new argument 'policy_index'=0 that is not in the saved config file!
[2024-11-10 10:07:56,381][00444] Adding new argument 'eval_deterministic'=False that is not in the saved config file!
[2024-11-10 10:07:56,383][00444] Adding new argument 'train_script'=None that is not in the saved config file!
[2024-11-10 10:07:56,384][00444] Adding new argument 'enjoy_script'=None that is not in the saved config file!
[2024-11-10 10:07:56,385][00444] Using frameskip 1 and render_action_repeat=4 for evaluation
[2024-11-10 10:07:56,394][00444] RunningMeanStd input shape: (3, 72, 128)
[2024-11-10 10:07:56,395][00444] RunningMeanStd input shape: (1,)
[2024-11-10 10:07:56,411][00444] ConvEncoder: input_channels=3
[2024-11-10 10:07:56,447][00444] Conv encoder output size: 512
[2024-11-10 10:07:56,449][00444] Policy head output size: 512
[2024-11-10 10:07:56,468][00444] Loading state from checkpoint /content/train_dir/default_experiment/checkpoint_p0/checkpoint_000000978_4005888.pth...
|
[2024-11-10 10:07:56,954][00444] Num frames 100...
[2024-11-10 10:07:57,072][00444] Num frames 200...
[2024-11-10 10:07:57,190][00444] Num frames 300...
[2024-11-10 10:07:57,317][00444] Num frames 400...
[2024-11-10 10:07:57,437][00444] Num frames 500...
[2024-11-10 10:07:57,555][00444] Num frames 600...
[2024-11-10 10:07:57,673][00444] Num frames 700...
[2024-11-10 10:07:57,795][00444] Num frames 800...
[2024-11-10 10:07:57,913][00444] Num frames 900...
[2024-11-10 10:07:58,032][00444] Num frames 1000...
[2024-11-10 10:07:58,151][00444] Num frames 1100...
[2024-11-10 10:07:58,271][00444] Num frames 1200...
[2024-11-10 10:07:58,391][00444] Num frames 1300...
[2024-11-10 10:07:58,508][00444] Num frames 1400...
[2024-11-10 10:07:58,650][00444] Num frames 1500...
[2024-11-10 10:07:58,821][00444] Num frames 1600...
[2024-11-10 10:07:59,007][00444] Num frames 1700...
[2024-11-10 10:07:59,171][00444] Num frames 1800...
[2024-11-10 10:07:59,299][00444] Avg episode rewards: #0: 46.450, true rewards: #0: 18.450
[2024-11-10 10:07:59,301][00444] Avg episode reward: 46.450, avg true_objective: 18.450
[2024-11-10 10:07:59,389][00444] Num frames 1900...
[2024-11-10 10:07:59,552][00444] Num frames 2000...
[2024-11-10 10:07:59,710][00444] Num frames 2100...
[2024-11-10 10:07:59,876][00444] Num frames 2200...
[2024-11-10 10:08:00,064][00444] Num frames 2300...
[2024-11-10 10:08:00,235][00444] Num frames 2400...
[2024-11-10 10:08:00,408][00444] Num frames 2500...
[2024-11-10 10:08:00,570][00444] Num frames 2600...
[2024-11-10 10:08:00,736][00444] Num frames 2700...
[2024-11-10 10:08:00,929][00444] Avg episode rewards: #0: 32.865, true rewards: #0: 13.865
[2024-11-10 10:08:00,932][00444] Avg episode reward: 32.865, avg true_objective: 13.865
[2024-11-10 10:08:00,977][00444] Num frames 2800...
[2024-11-10 10:08:01,154][00444] Num frames 2900...
[2024-11-10 10:08:01,279][00444] Num frames 3000...
[2024-11-10 10:08:01,403][00444] Num frames 3100...
[2024-11-10 10:08:01,523][00444] Num frames 3200...
[2024-11-10 10:08:01,649][00444] Num frames 3300...
[2024-11-10 10:08:01,774][00444] Num frames 3400...
[2024-11-10 10:08:01,882][00444] Avg episode rewards: #0: 27.133, true rewards: #0: 11.467
[2024-11-10 10:08:01,884][00444] Avg episode reward: 27.133, avg true_objective: 11.467
[2024-11-10 10:08:01,959][00444] Num frames 3500...
[2024-11-10 10:08:02,085][00444] Num frames 3600...
[2024-11-10 10:08:02,208][00444] Num frames 3700...
[2024-11-10 10:08:02,330][00444] Num frames 3800...
[2024-11-10 10:08:02,413][00444] Avg episode rewards: #0: 21.810, true rewards: #0: 9.560
[2024-11-10 10:08:02,414][00444] Avg episode reward: 21.810, avg true_objective: 9.560
[2024-11-10 10:08:02,504][00444] Num frames 3900...
[2024-11-10 10:08:02,630][00444] Num frames 4000...
[2024-11-10 10:08:02,752][00444] Num frames 4100...
[2024-11-10 10:08:02,876][00444] Num frames 4200...
[2024-11-10 10:08:02,995][00444] Num frames 4300...
[2024-11-10 10:08:03,120][00444] Num frames 4400...
[2024-11-10 10:08:03,243][00444] Num frames 4500...
[2024-11-10 10:08:03,365][00444] Num frames 4600...
[2024-11-10 10:08:03,491][00444] Num frames 4700...
[2024-11-10 10:08:03,610][00444] Num frames 4800...
[2024-11-10 10:08:03,731][00444] Num frames 4900...
[2024-11-10 10:08:03,854][00444] Num frames 5000...
[2024-11-10 10:08:03,974][00444] Num frames 5100...
[2024-11-10 10:08:04,091][00444] Num frames 5200...
[2024-11-10 10:08:04,217][00444] Num frames 5300...
[2024-11-10 10:08:04,344][00444] Num frames 5400...
[2024-11-10 10:08:04,464][00444] Num frames 5500...
[2024-11-10 10:08:04,584][00444] Num frames 5600...
[2024-11-10 10:08:04,706][00444] Num frames 5700...
[2024-11-10 10:08:04,833][00444] Num frames 5800...
[2024-11-10 10:08:04,952][00444] Num frames 5900...
[2024-11-10 10:08:05,038][00444] Avg episode rewards: #0: 29.048, true rewards: #0: 11.848
[2024-11-10 10:08:05,039][00444] Avg episode reward: 29.048, avg true_objective: 11.848
|
[2024-11-10 10:08:05,131][00444] Num frames 6000...
[2024-11-10 10:08:05,260][00444] Num frames 6100...
[2024-11-10 10:08:05,380][00444] Num frames 6200...
[2024-11-10 10:08:05,496][00444] Num frames 6300...
[2024-11-10 10:08:05,621][00444] Num frames 6400...
[2024-11-10 10:08:05,738][00444] Num frames 6500...
[2024-11-10 10:08:05,867][00444] Num frames 6600...
[2024-11-10 10:08:05,988][00444] Num frames 6700...
[2024-11-10 10:08:06,143][00444] Num frames 6800...
[2024-11-10 10:08:06,271][00444] Num frames 6900...
[2024-11-10 10:08:06,394][00444] Num frames 7000...
[2024-11-10 10:08:06,518][00444] Num frames 7100...
[2024-11-10 10:08:06,644][00444] Num frames 7200...
[2024-11-10 10:08:06,774][00444] Num frames 7300...
[2024-11-10 10:08:06,893][00444] Num frames 7400...
[2024-11-10 10:08:07,012][00444] Num frames 7500...
[2024-11-10 10:08:07,173][00444] Avg episode rewards: #0: 31.646, true rewards: #0: 12.647
[2024-11-10 10:08:07,175][00444] Avg episode reward: 31.646, avg true_objective: 12.647
[2024-11-10 10:08:07,202][00444] Num frames 7600...
[2024-11-10 10:08:07,323][00444] Num frames 7700...
[2024-11-10 10:08:07,451][00444] Num frames 7800...
[2024-11-10 10:08:07,567][00444] Num frames 7900...
[2024-11-10 10:08:07,687][00444] Num frames 8000...
[2024-11-10 10:08:07,829][00444] Avg episode rewards: #0: 28.811, true rewards: #0: 11.526
[2024-11-10 10:08:07,830][00444] Avg episode reward: 28.811, avg true_objective: 11.526
[2024-11-10 10:08:07,872][00444] Num frames 8100...
[2024-11-10 10:08:07,993][00444] Num frames 8200...
[2024-11-10 10:08:08,111][00444] Num frames 8300...
[2024-11-10 10:08:08,242][00444] Num frames 8400...
[2024-11-10 10:08:08,369][00444] Num frames 8500...
[2024-11-10 10:08:08,489][00444] Num frames 8600...
[2024-11-10 10:08:08,610][00444] Num frames 8700...
[2024-11-10 10:08:08,730][00444] Num frames 8800...
[2024-11-10 10:08:08,858][00444] Num frames 8900...
[2024-11-10 10:08:08,956][00444] Avg episode rewards: #0: 27.167, true rewards: #0: 11.167
[2024-11-10 10:08:08,958][00444] Avg episode reward: 27.167, avg true_objective: 11.167
[2024-11-10 10:08:09,043][00444] Num frames 9000...
[2024-11-10 10:08:09,168][00444] Num frames 9100...
[2024-11-10 10:08:09,299][00444] Num frames 9200...
[2024-11-10 10:08:09,427][00444] Num frames 9300...
[2024-11-10 10:08:09,547][00444] Num frames 9400...
[2024-11-10 10:08:09,669][00444] Num frames 9500...
[2024-11-10 10:08:09,795][00444] Num frames 9600...
[2024-11-10 10:08:09,913][00444] Num frames 9700...
[2024-11-10 10:08:10,045][00444] Num frames 9800...
[2024-11-10 10:08:10,167][00444] Num frames 9900...
[2024-11-10 10:08:10,344][00444] Avg episode rewards: #0: 26.655, true rewards: #0: 11.100
[2024-11-10 10:08:10,346][00444] Avg episode reward: 26.655, avg true_objective: 11.100
[2024-11-10 10:08:10,362][00444] Num frames 10000...
[2024-11-10 10:08:10,481][00444] Num frames 10100...
[2024-11-10 10:08:10,598][00444] Num frames 10200...
[2024-11-10 10:08:10,715][00444] Num frames 10300...
[2024-11-10 10:08:10,846][00444] Num frames 10400...
[2024-11-10 10:08:10,961][00444] Num frames 10500...
[2024-11-10 10:08:11,090][00444] Num frames 10600...
[2024-11-10 10:08:11,244][00444] Num frames 10700...
[2024-11-10 10:08:11,439][00444] Num frames 10800...
[2024-11-10 10:08:11,606][00444] Num frames 10900...
[2024-11-10 10:08:11,698][00444] Avg episode rewards: #0: 26.318, true rewards: #0: 10.918
[2024-11-10 10:08:11,703][00444] Avg episode reward: 26.318, avg true_objective: 10.918
[2024-11-10 10:09:13,841][00444] Replay video saved to /content/train_dir/default_experiment/replay.mp4!