---
license: apache-2.0
base_model: google/t5-v1_1-large
tags:
- generated_from_trainer
model-index:
- name: ghc-google-t5-v1_1-large-inter_model-frequency-model_annots_str
  results: []
---

# ghc-google-t5-v1_1-large-inter_model-frequency-model_annots_str

This model is a fine-tuned version of [google/t5-v1_1-large](https://huggingface.co/google/t5-v1_1-large) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3108

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training; a sketch of these settings as `Seq2SeqTrainingArguments` is given at the end of this card:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 200

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.3989        | 1.0   | 689  | 5.7253          |
| 0.3833        | 2.0   | 1378 | 0.3076          |
| 0.3352        | 3.0   | 2067 | 0.2992          |
| 0.3233        | 4.0   | 2756 | 0.2986          |
| 0.3203        | 5.0   | 3445 | 0.2983          |
| 0.3409        | 6.0   | 4134 | 0.3035          |
| 0.3078        | 7.0   | 4823 | 0.2952          |
| 0.3021        | 8.0   | 5512 | 0.2973          |
| 0.3136        | 9.0   | 6201 | 0.2968          |
| 0.3297        | 10.0  | 6890 | 0.2979          |

### Framework versions

- Transformers 4.34.0
- Pytorch 2.1.0+cu121
- Datasets 2.6.1
- Tokenizers 0.14.1
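
### Training arguments sketch

A minimal sketch of how the hyperparameters above map onto `Seq2SeqTrainingArguments`, as a reference point for reproduction. It is reconstructed from the list, not from the original training script: `output_dir`, the per-epoch evaluation cadence, and single-device training (so per-device batch sizes equal the listed values) are assumptions.

```python
from transformers import Seq2SeqTrainingArguments

# Reconstructed from the hyperparameter list above. Values not stated in
# the card (output_dir, evaluation_strategy) are assumptions, and a single
# device is assumed, so per_device batch sizes equal the listed values.
training_args = Seq2SeqTrainingArguments(
    output_dir="ghc-google-t5-v1_1-large-inter_model-frequency-model_annots_str",
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=200,
    evaluation_strategy="epoch",  # the results table reports one validation loss per epoch
)
```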
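
## How to use

A minimal inference sketch. The `your-namespace/` prefix in the repo id below is a placeholder, since the card does not name the owning account; replace it with the namespace actually hosting this checkpoint.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Placeholder repo id: the card does not state the owning namespace.
repo_id = "your-namespace/ghc-google-t5-v1_1-large-inter_model-frequency-model_annots_str"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSeq2SeqLM.from_pretrained(repo_id)

# T5 v1.1 is an encoder-decoder model, so outputs come from generate().
inputs = tokenizer("your input text here", return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```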