Kevincp560 committed
Commit ed7252e · 1 Parent(s): d4f3569

update model card README.md

Files changed (1): README.md (+81 -0)

README.md ADDED
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- pub_med_summarization_dataset
metrics:
- rouge
model-index:
- name: distilbart-xsum-12-1-finetuned-pubmed
  results:
  - task:
      name: Sequence-to-sequence Language Modeling
      type: text2text-generation
    dataset:
      name: pub_med_summarization_dataset
      type: pub_med_summarization_dataset
      args: document
    metrics:
    - name: Rouge1
      type: rouge
      value: 27.0012
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# distilbart-xsum-12-1-finetuned-pubmed

This model is a fine-tuned version of [sshleifer/distilbart-xsum-12-1](https://huggingface.co/sshleifer/distilbart-xsum-12-1) on the pub_med_summarization_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 2.8236
- Rouge1: 27.0012
- Rouge2: 12.728
- Rougel: 19.8685
- Rougelsum: 25.0485
- Gen Len: 59.969

## Model description

More information needed

## Intended uses & limitations

More information needed

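As a starting point while this section is filled in, the checkpoint can presumably be loaded through the standard `transformers` summarization pipeline. The sketch below is illustrative only: the Hub repo id, the placeholder article, and the `max_length`/`min_length` values are assumptions, not details recorded in this card.

```python
from transformers import pipeline

# Assumed Hub repo id, inferred from the committer name and this card's title.
summarizer = pipeline(
    "summarization",
    model="Kevincp560/distilbart-xsum-12-1-finetuned-pubmed",
)

article = "Replace this with the body of a PubMed-style article."

# Generation lengths are illustrative; tune them for your documents.
summary = summarizer(article, max_length=128, min_length=32, truncation=True)
print(summary[0]["summary_text"])
```

Since the underlying model is BART-based, its encoder accepts at most 1024 tokens, so `truncation=True` keeps long articles within that limit.
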
## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP

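For readers who want to reproduce a similar run, these values map naturally onto `Seq2SeqTrainingArguments`. The sketch below is a hedged reconstruction, not the original configuration: the output directory, evaluation strategy, and `predict_with_generate` flag are assumptions, and the Adam betas and epsilon listed above are simply the optimizer defaults.

```python
from transformers import Seq2SeqTrainingArguments

# Hedged reconstruction of the hyperparameters listed above; values marked
# "assumed" are illustrative choices, not settings taken from the original run.
training_args = Seq2SeqTrainingArguments(
    output_dir="distilbart-xsum-12-1-finetuned-pubmed",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    fp16=True,                    # "Native AMP" mixed-precision training
    evaluation_strategy="epoch",  # assumed; the table below reports one eval per epoch
    predict_with_generate=True,   # assumed; required for ROUGE on generated summaries
)
```
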
### Training results

| Training Loss | Epoch | Step  | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 3.3604        | 1.0   | 4000  | 3.1575          | 25.0078 | 11.5381 | 18.4246 | 23.1605   | 54.8935 |
| 3.0697        | 2.0   | 8000  | 2.9478          | 26.4947 | 12.5411 | 19.4328 | 24.6123   | 57.948  |
| 2.8638        | 3.0   | 12000 | 2.8672          | 26.8856 | 12.7568 | 19.8949 | 24.8745   | 59.6245 |
| 2.7243        | 4.0   | 16000 | 2.8347          | 26.7347 | 12.5152 | 19.6516 | 24.7756   | 60.439  |
| 2.6072        | 5.0   | 20000 | 2.8236          | 27.0012 | 12.728  | 19.8685 | 25.0485   | 59.969  |

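The ROUGE columns above are F-measure scores scaled to 0-100. A minimal sketch of computing such scores with the `datasets` ROUGE metric (the API available in the Datasets 1.18 release listed below) is shown here; the example texts are made up for illustration.

```python
from datasets import load_metric

rouge = load_metric("rouge")

# Made-up prediction/reference pair, purely for illustration.
predictions = ["the treatment reduced symptoms in most patients"]
references = ["the treatment reduced symptoms in the majority of patients"]

scores = rouge.compute(predictions=predictions, references=references, use_stemmer=True)

# Report the mid F-measures on the 0-100 scale used in the table above.
print({name: round(score.mid.fmeasure * 100, 4) for name, score in scores.items()})
```
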
### Framework versions

- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6