---
license: apache-2.0
base_model: climatebert/distilroberta-base-climate-f
tags:
  - generated_from_trainer
model-index:
  - name: ADAPMIT-multilabel-climatebert
    results: []
datasets:
  - GIZ/policy_classification
co2_eq_emissions:
  emissions: 23.3572576873636
  source: codecarbon
  training_type: fine-tuning
  on_cloud: true
  cpu_model: Intel(R) Xeon(R) CPU @ 2.00GHz
  ram_total_size: 12.6747894287109
  hours_used: 0.529
  hardware_used: 1 x Tesla T4
---

# ADAPMIT-multilabel-climatebert

This model is a fine-tuned version of [climatebert/distilroberta-base-climate-f](https://huggingface.co/climatebert/distilroberta-base-climate-f) on the [GIZ/policy_classification](https://huggingface.co/datasets/GIZ/policy_classification) dataset. It achieves the following results on the evaluation set:

- Loss: 0.3535
- Precision-micro: 0.8999
- Precision-samples: 0.8559
- Precision-weighted: 0.9001
- Recall-micro: 0.9173
- Recall-samples: 0.8592
- Recall-weighted: 0.9173
- F1-micro: 0.9085
- F1-samples: 0.8521
- F1-weighted: 0.9085
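
The micro, samples, and weighted averages above follow the usual scikit-learn averaging conventions for multilabel classification. A minimal sketch of how such scores can be computed; the `y_true`/`y_pred` arrays are hypothetical placeholders, not data from this card:

```python
# Hedged sketch: multilabel precision/recall/F1 under different averaging modes.
# `y_true` and `y_pred` are placeholder 0/1 arrays of shape (num_examples, num_labels).
import numpy as np
from sklearn.metrics import precision_recall_fscore_support

y_true = np.array([[1, 0], [1, 1], [0, 1]])  # placeholder ground truth
y_pred = np.array([[1, 0], [1, 0], [0, 1]])  # placeholder thresholded predictions

for average in ("micro", "samples", "weighted"):
    p, r, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average=average, zero_division=0
    )
    print(f"{average}: precision={p:.4f} recall={r:.4f} f1={f1:.4f}")
```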

## Model description

This model predicts multiple labels simultaneously for a given input text. Specifically, it predicts two labels - AdaptationLabel and MitigationLabel - indicating whether a passage is relevant to climate adaptation, climate mitigation, or both.
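
A minimal inference sketch, assuming the repository ID `GIZ/ADAPMIT-multilabel-climatebert` and the usual multilabel setup of a sigmoid over the logits with a 0.5 threshold; the threshold and the example sentence are assumptions, not part of this card:

```python
# Hedged example: multilabel inference with sigmoid + 0.5 threshold.
# Label names are read from the model config rather than hard-coded.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "GIZ/ADAPMIT-multilabel-climatebert"  # assumed repository ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "The country will expand flood defences and scale up renewable energy."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

probs = torch.sigmoid(logits).squeeze(0)
predicted = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(predicted)  # e.g. ["AdaptationLabel", "MitigationLabel"]
```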

## Intended uses & limitations

More information needed

## Training and evaluation data

- Training Dataset: 10031 samples

  | Class  | Positive Count of Class |
  |--------|-------------------------|
  | Action | 5416 |
  | Plans  | 2140 |
  | Policy | 1396 |
  | Target | 2911 |

- Validation Dataset: 932 samples

  | Class  | Positive Count of Class |
  |--------|-------------------------|
  | Action | 513 |
  | Plans  | 198 |
  | Policy | 122 |
  | Target | 256 |

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 6.03e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 300
- num_epochs: 5
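
As a rough guide, these settings map onto a `transformers` `TrainingArguments` configuration like the sketch below; the output directory and the per-epoch evaluation strategy are assumptions inferred from the results table, not stated in the card:

```python
# Hedged sketch: the reported hyperparameters expressed as TrainingArguments.
# The output directory is a placeholder; the optimizer defaults already use
# betas=(0.9, 0.999) and epsilon=1e-08, so the optimizer is not set explicitly.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="adapmit-multilabel-climatebert",  # placeholder path
    learning_rate=6.03e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_steps=300,
    num_train_epochs=5,
    evaluation_strategy="epoch",  # assumption: evaluated once per epoch
)
```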

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision-micro | Precision-samples | Precision-weighted | Recall-micro | Recall-samples | Recall-weighted | F1-micro | F1-samples | F1-weighted |
|---------------|-------|------|-----------------|-----------------|-------------------|--------------------|--------------|----------------|-----------------|----------|------------|-------------|
| 0.3512 | 1.0 | 784  | 0.3253 | 0.8530 | 0.8273 | 0.8572 | 0.8883 | 0.8311 | 0.8883 | 0.8703 | 0.8238 | 0.8703 |
| 0.2152 | 2.0 | 1568 | 0.2604 | 0.8999 | 0.8580 | 0.9002 | 0.9094 | 0.8521 | 0.9094 | 0.9046 | 0.8510 | 0.9046 |
| 0.1348 | 3.0 | 2352 | 0.2908 | 0.9038 | 0.8626 | 0.9059 | 0.9173 | 0.8588 | 0.9173 | 0.9105 | 0.8566 | 0.9107 |
| 0.0767 | 4.0 | 3136 | 0.3367 | 0.8999 | 0.8563 | 0.9000 | 0.9173 | 0.8588 | 0.9173 | 0.9085 | 0.8524 | 0.9085 |
| 0.0475 | 5.0 | 3920 | 0.3535 | 0.8999 | 0.8559 | 0.9001 | 0.9173 | 0.8592 | 0.9173 | 0.9085 | 0.8521 | 0.9085 |

Per-label scores on the validation set:

| label  | precision | recall | f1-score | support |
|--------|-----------|--------|----------|---------|
| Action | 0.828 | 0.807 | 0.817 | 513.0 |
| Plans  | 0.560 | 0.707 | 0.625 | 198.0 |
| Policy | 0.727 | 0.786 | 0.756 | 122.0 |
| Target | 0.741 | 0.886 | 0.808 | 256.0 |

### Framework versions

- Transformers 4.38.1
- PyTorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2