samaksh-khatri-crest-data committed: Update model card to add testing results

README.md
# distilbert-base-uncased-finetuned-sst-2-english_07112024T125645

This model is a fine-tuned version of [distilbert/distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert/distilbert-base-uncased-finetuned-sst-2-english) on the MR Analysis Phase-3 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5776
- F1: 0.8426
### Training results

| Training Loss | Epoch | Step | Validation Loss | F1     | Learning Rate |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------------:|
| 0.0376        | 19.0  | 2679 | 0.8233          | 0.8629 | 1e-07         |
| 0.0376        | 20.0  | 2820 | 0.8235          | 0.8632 | 0.0           |
### Testing Results

| class                    | precision | recall | f1-score |
|:------------------------:|:---------:|:------:|:--------:|
| change_request           | 0.918     | 0.651  | 0.762    |
| discussion_participation | 0.839     | 0.882  | 0.860    |
| discussion_trigger       | 0.879     | 0.902  | 0.890    |
| acknowledgement          | 0.847     | 0.920  | 0.882    |
| critical                 | 0.686     | 0.940  | 0.793    |
| reference                | 0.802     | 0.947  | 0.869    |
| **accuracy**             |           |        | 0.828    |
| **macro avg**            | 0.828     | 0.874  | 0.843    |
| **weighted avg**         | 0.845     | 0.828  | 0.825    |
### Framework versions

- Transformers 4.44.2