XTTS-v2.0-Joe-Pera / trainer_0_log.txt
Commit 950d2dd: Upload model, config, vocab and training log
> Training Environment:
| > Backend: Torch
| > Mixed precision: False
| > Precision: float32
| > Current device: 0
| > Num. of GPUs: 1
| > Num. of CPUs: 2
| > Num. of Torch Threads: 1
| > Torch seed: 1
| > Torch CUDNN: True
| > Torch CUDNN deterministic: False
| > Torch CUDNN benchmark: False
| > Torch TF32 MatMul: False
> Start Tensorboard: tensorboard --logdir=/tmp/xtts_ft/run/training/GPT_XTTS_FT-December-04-2023_03+00PM-0000000
> Model has 518442047 parameters
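The parameter count above is normally the sum of element counts over every model tensor (in PyTorch, `sum(p.numel() for p in model.parameters())`). A minimal sketch of that arithmetic, using made-up shapes rather than the real XTTS-v2 ones:

```python
from math import prod

def count_params(shapes):
    """Total parameter count for a list of tensor shapes.

    Mirrors sum(p.numel() for p in model.parameters()) in PyTorch,
    which is how counts like the 518,442,047 above are obtained.
    """
    return sum(prod(s) for s in shapes)

# Illustrative shapes only -- a single 1024x1024 weight plus its bias.
example_shapes = [(1024, 1024), (1024,)]
total = count_params(example_shapes)  # 1024*1024 + 1024 = 1049600
```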
 > EPOCH: 0/8
--> /tmp/xtts_ft/run/training/GPT_XTTS_FT-December-04-2023_03+00PM-0000000
 > TRAINING (2023-12-04 15:00:11) 
 --> TIME: 2023-12-04 15:00:19 -- STEP: 0/40 -- GLOBAL_STEP: 0
| > loss_text_ce: 0.021964602172374725 (0.021964602172374725)
| > loss_mel_ce: 4.907757759094238 (4.907757759094238)
| > loss: 4.929722309112549 (4.929722309112549)
| > grad_norm: 0 (0)
| > current_lr: 5e-06
| > step_time: 1.4868 (1.4868354797363281)
| > loader_time: 6.5441 (6.544092655181885)
 > EVALUATION 
--> EVAL PERFORMANCE
| > avg_loader_time: 0.07195439338684081 (+0)
| > avg_loss_text_ce: 0.021994752064347266 (+0)
| > avg_loss_mel_ce: 3.3762893676757812 (+0)
| > avg_loss: 3.398284101486206 (+0)
> BEST MODEL : /tmp/xtts_ft/run/training/GPT_XTTS_FT-December-04-2023_03+00PM-0000000/best_model_40.pth
 > EPOCH: 1/8
--> /tmp/xtts_ft/run/training/GPT_XTTS_FT-December-04-2023_03+00PM-0000000
 > TRAINING (2023-12-04 15:02:56) 
 --> TIME: 2023-12-04 15:03:03 -- STEP: 10/40 -- GLOBAL_STEP: 50
| > loss_text_ce: 0.023661285638809204 (0.023023789189755915)
| > loss_mel_ce: 3.8315794467926025 (3.305363488197327)
| > loss: 3.855240821838379 (3.328387236595154)
| > grad_norm: 0 (0.0)
| > current_lr: 5e-06
| > step_time: 0.2479 (0.20201985836029052)
| > loader_time: 0.0221 (0.017214274406433104)
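In the step lines above, each metric is printed as `instantaneous_value (running average over the epoch so far)` — e.g. `loss_mel_ce: 3.8316 (3.3054)` at step 10. A hedged sketch of the usual incremental-mean bookkeeping, not the trainer's actual code:

```python
def running_average(values):
    """Incremental mean, as commonly used to report 'value (average)'
    in training logs without storing every step's metric."""
    avg = 0.0
    for i, v in enumerate(values, start=1):
        avg += (v - avg) / i  # fold the new value into the mean
    return avg

step_losses = [3.0, 3.5, 2.5, 3.0]
avg_loss = running_average(step_losses)  # 3.0
```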
 > EVALUATION 
--> EVAL PERFORMANCE
| > avg_loader_time: 0.1108086109161377 (+0.03885421752929688)
| > avg_loss_text_ce: 0.021816403046250342 (-0.00017834901809692452)
| > avg_loss_mel_ce: 3.2666261196136475 (-0.10966324806213379)
| > avg_loss: 3.288442516326904 (-0.10984158515930176)
> BEST MODEL : /tmp/xtts_ft/run/training/GPT_XTTS_FT-December-04-2023_03+00PM-0000000/best_model_80.pth
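The parenthesized `(+/-)` values in each EVAL PERFORMANCE block are the change relative to the previous evaluation, which is why epoch 0 shows `(+0)` throughout and a new BEST MODEL is saved only while the deltas stay negative. Reproducing one delta from this log (`avg_loss_mel_ce`, epoch 1 vs. epoch 0):

```python
def eval_delta(current, previous):
    """Delta printed in parentheses after each eval metric:
    this epoch's average minus the previous epoch's."""
    return current - previous

# avg_loss_mel_ce from epochs 0 and 1 of this log
prev_epoch = 3.3762893676757812
curr_epoch = 3.2666261196136475
delta = eval_delta(curr_epoch, prev_epoch)  # ~ -0.1097, matching the logged value
```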
 > EPOCH: 2/8
--> /tmp/xtts_ft/run/training/GPT_XTTS_FT-December-04-2023_03+00PM-0000000
 > TRAINING (2023-12-04 15:06:18) 
 --> TIME: 2023-12-04 15:06:33 -- STEP: 20/40 -- GLOBAL_STEP: 100
| > loss_text_ce: 0.017979152500629425 (0.02109259101562202)
| > loss_mel_ce: 3.1569440364837646 (2.9896609008312227)
| > loss: 3.1749231815338135 (3.0107534766197204)
| > grad_norm: 0 (0.0)
| > current_lr: 5e-06
| > step_time: 0.1982 (0.25715711116790774)
| > loader_time: 0.0208 (0.015216124057769776)
 > EVALUATION 
--> EVAL PERFORMANCE
| > avg_loader_time: 0.06513414382934571 (-0.04567446708679199)
| > avg_loss_text_ce: 0.021773791685700417 (-4.2611360549924676e-05)
| > avg_loss_mel_ce: 3.2272082805633544 (-0.03941783905029306)
| > avg_loss: 3.248982048034668 (-0.03946046829223615)
> BEST MODEL : /tmp/xtts_ft/run/training/GPT_XTTS_FT-December-04-2023_03+00PM-0000000/best_model_120.pth
 > EPOCH: 3/8
--> /tmp/xtts_ft/run/training/GPT_XTTS_FT-December-04-2023_03+00PM-0000000
 > TRAINING (2023-12-04 15:10:08) 
 --> TIME: 2023-12-04 15:10:26 -- STEP: 30/40 -- GLOBAL_STEP: 150
| > loss_text_ce: 0.021550316363573074 (0.020829477223257224)
| > loss_mel_ce: 3.5817322731018066 (2.8207703987757364)
| > loss: 3.6032826900482178 (2.8415998578071595)
| > grad_norm: 0 (0.0)
| > current_lr: 5e-06
| > step_time: 0.2316 (0.26780527432759604)
| > loader_time: 0.0088 (0.015432175000508625)
 > EVALUATION 
--> EVAL PERFORMANCE
| > avg_loader_time: 0.12182736396789551 (+0.0566932201385498)
| > avg_loss_text_ce: 0.021623440831899644 (-0.00015035085380077362)
| > avg_loss_mel_ce: 3.205338716506958 (-0.021869564056396396)
| > avg_loss: 3.226962184906006 (-0.02201986312866211)
> BEST MODEL : /tmp/xtts_ft/run/training/GPT_XTTS_FT-December-04-2023_03+00PM-0000000/best_model_160.pth
 > EPOCH: 4/8
--> /tmp/xtts_ft/run/training/GPT_XTTS_FT-December-04-2023_03+00PM-0000000
 > TRAINING (2023-12-04 15:13:43) 
 > EVALUATION 
--> EVAL PERFORMANCE
| > avg_loader_time: 0.06858286857604981 (-0.0532444953918457)
| > avg_loss_text_ce: 0.021538139879703523 (-8.530095219612052e-05)
| > avg_loss_mel_ce: 3.19382529258728 (-0.011513423919677912)
| > avg_loss: 3.2153634548187258 (-0.011598730087280185)
> BEST MODEL : /tmp/xtts_ft/run/training/GPT_XTTS_FT-December-04-2023_03+00PM-0000000/best_model_200.pth
 > EPOCH: 5/8
--> /tmp/xtts_ft/run/training/GPT_XTTS_FT-December-04-2023_03+00PM-0000000
 > TRAINING (2023-12-04 15:17:08) 
 --> TIME: 2023-12-04 15:17:10 -- STEP: 0/40 -- GLOBAL_STEP: 200
| > loss_text_ce: 0.023776765912771225 (0.023776765912771225)
| > loss_mel_ce: 2.0250589847564697 (2.0250589847564697)
| > loss: 2.0488357543945312 (2.0488357543945312)
| > grad_norm: 0 (0)
| > current_lr: 5e-06
| > step_time: 0.9392 (0.9391729831695557)
| > loader_time: 1.0643 (1.0642502307891846)
 > EVALUATION 
--> EVAL PERFORMANCE
| > avg_loader_time: 0.06770339012145996 (-0.0008794784545898549)
| > avg_loss_text_ce: 0.02152172140777111 (-1.641847193241397e-05)
| > avg_loss_mel_ce: 3.21250319480896 (+0.018677902221679865)
| > avg_loss: 3.234024906158447 (+0.018661451339721413)
 > EPOCH: 6/8
--> /tmp/xtts_ft/run/training/GPT_XTTS_FT-December-04-2023_03+00PM-0000000
 > TRAINING (2023-12-04 15:17:31) 
 --> TIME: 2023-12-04 15:17:40 -- STEP: 10/40 -- GLOBAL_STEP: 250
| > loss_text_ce: 0.018674146384000778 (0.021112211793661118)
| > loss_mel_ce: 2.7348833084106445 (2.436060166358948)
| > loss: 2.7535574436187744 (2.457172393798828)
| > grad_norm: 0 (0.0)
| > current_lr: 5e-06
| > step_time: 0.1765 (0.2836411952972412)
| > loader_time: 0.0086 (0.01687464714050293)
 > EVALUATION 
--> EVAL PERFORMANCE
| > avg_loader_time: 0.06701068878173828 (-0.0006927013397216714)
| > avg_loss_text_ce: 0.021476111933588983 (-4.560947418212613e-05)
| > avg_loss_mel_ce: 3.206305408477783 (-0.006197786331176847)
| > avg_loss: 3.2277815341949463 (-0.006243371963500888)
 > EPOCH: 7/8
--> /tmp/xtts_ft/run/training/GPT_XTTS_FT-December-04-2023_03+00PM-0000000
 > TRAINING (2023-12-04 15:17:57) 
 --> TIME: 2023-12-04 15:18:11 -- STEP: 20/40 -- GLOBAL_STEP: 300
| > loss_text_ce: 0.023174753412604332 (0.019879171112552285)
| > loss_mel_ce: 3.1051435470581055 (2.4108093440532685)
| > loss: 3.1283183097839355 (2.430688518285751)
| > grad_norm: 0 (0.0)
| > current_lr: 5e-06
| > step_time: 0.27 (0.29051125049591064)
| > loader_time: 0.0138 (0.01517837047576904)
 > EVALUATION 
--> EVAL PERFORMANCE
| > avg_loader_time: 0.11030998229980468 (+0.0432992935180664)
| > avg_loss_text_ce: 0.02148539908230305 (+9.287148714065552e-06)
| > avg_loss_mel_ce: 3.223378849029541 (+0.01707344055175808)
| > avg_loss: 3.244864273071289 (+0.017082738876342596)