---
base_model: google/pegasus-x-base
tags:
- generated_from_trainer
model-index:
- name: pegasus_x-meeting-summarizer
  results: []
---

# pegasus_x-meeting-summarizer

This model is a fine-tuned version of [google/pegasus-x-base](https://huggingface.co/google/pegasus-x-base) on an unspecified dataset. It achieves the following results on the evaluation set:

- Loss: 1.3661

## Model description

More information needed

## Intended uses & limitations

More information needed
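Although the intended use is not documented, the base model is a long-input encoder-decoder summarizer, so inference follows the standard Transformers seq2seq pattern. A minimal sketch, assuming the checkpoint is published under the repo id `aruca/pegasus_x-meeting-summarizer` (an unverified assumption based on this card's name) and that inputs are meeting transcripts:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Repo id is an assumption inferred from the model-card name; adjust as needed.
model_id = "aruca/pegasus_x-meeting-summarizer"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# PEGASUS-X handles long inputs, so full transcripts can be passed directly.
transcript = "Alice: Let's review the Q3 roadmap. Bob: We should ship the API first..."
inputs = tokenizer(transcript, return_tensors="pt", truncation=True, max_length=4096)
summary_ids = model.generate(**inputs, max_new_tokens=128, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```

The generation parameters (`max_new_tokens`, `num_beams`) are illustrative defaults, not values taken from this training run.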

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 35
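Two relationships in the list above are worth making explicit: the effective batch size is `train_batch_size × gradient_accumulation_steps = 1 × 16 = 16`, and the `linear` scheduler ramps the learning rate up over the 500 warmup steps before decaying it linearly to zero. A minimal sketch of that schedule (the `total_steps` value here is hypothetical; judging by the results table, this run logged only about 420 optimizer steps, so it may have ended while still in warmup):

```python
def linear_lr(step, base_lr=5e-5, warmup_steps=500, total_steps=2000):
    """Transformers-style 'linear' schedule: warm up to base_lr, then decay to 0."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

# Effective batch size: per-device batch x gradient accumulation steps.
effective_batch = 1 * 16  # matches total_train_batch_size: 16

print(effective_batch)       # 16
print(linear_lr(250))        # halfway through warmup: 2.5e-05
print(linear_lr(500))        # peak learning rate: 5e-05
```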

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.4832        | 0.8   | 10   | 4.4248          |
| 5.3639        | 1.6   | 20   | 4.0621          |
| 5.1864        | 2.4   | 30   | 3.6767          |
| 4.6617        | 3.2   | 40   | 3.5090          |
| 4.2848        | 4.0   | 50   | 3.5086          |
| 4.1322        | 4.8   | 60   | 3.4132          |
| 3.8127        | 5.6   | 70   | 3.2068          |
| 3.7671        | 6.4   | 80   | 3.0280          |
| 3.4976        | 7.2   | 90   | 2.8873          |
| 3.3087        | 8.0   | 100  | 2.7660          |
| 3.2034        | 8.8   | 110  | 2.6335          |
| 2.9382        | 9.6   | 120  | 2.5135          |
| 2.8012        | 10.4  | 130  | 2.4030          |
| 2.7161        | 11.2  | 140  | 2.3023          |
| 2.5522        | 12.0  | 150  | 2.2041          |
| 2.3935        | 12.8  | 160  | 2.0972          |
| 2.4131        | 13.6  | 170  | 2.0091          |
| 2.1511        | 14.4  | 180  | 1.9461          |
| 2.0641        | 15.2  | 190  | 1.8887          |
| 2.0721        | 16.0  | 200  | 1.8338          |
| 1.939         | 16.8  | 210  | 1.7876          |
| 1.9375        | 17.6  | 220  | 1.7321          |
| 1.7973        | 18.4  | 230  | 1.6807          |
| 1.6928        | 19.2  | 240  | 1.6474          |
| 1.681         | 20.0  | 250  | 1.6095          |
| 1.5794        | 20.8  | 260  | 1.5739          |
| 1.6063        | 21.6  | 270  | 1.5468          |
| 1.4721        | 22.4  | 280  | 1.5176          |
| 1.4457        | 23.2  | 290  | 1.4911          |
| 1.378         | 24.0  | 300  | 1.4885          |
| 1.381         | 24.8  | 310  | 1.4602          |
| 1.3508        | 25.6  | 320  | 1.4370          |
| 1.1869        | 26.4  | 330  | 1.4257          |
| 1.1638        | 27.2  | 340  | 1.4187          |
| 1.1851        | 28.0  | 350  | 1.4091          |
| 1.1463        | 28.8  | 360  | 1.4070          |
| 1.1034        | 29.6  | 370  | 1.3968          |
| 1.0144        | 30.4  | 380  | 1.3851          |
| 1.0436        | 31.2  | 390  | 1.3780          |
| 0.9692        | 32.0  | 400  | 1.3748          |
| 0.9588        | 32.8  | 410  | 1.3831          |
| 0.9216        | 33.6  | 420  | 1.3661          |

### Framework versions

- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2