MoritzLaurer (HF staff) committed

Commit cc70dfa
1 Parent(s): 03d5466

Update README.md

Files changed (1):
  1. README.md +5 -4
README.md CHANGED

@@ -29,8 +29,9 @@ The model was trained on a mixture of 27 tasks and 310 classes that have been re
  'wikitoxic_toxicaggregated', 'wikitoxic_obscene', 'wikitoxic_threat', 'wikitoxic_insult', 'wikitoxic_identityhate',
  'hateoffensive', 'hatexplain', 'biasframes_offensive', 'biasframes_sex', 'biasframes_intent',
  'agnews', 'yahootopics',
- 'trueteacher', 'spam', 'wellformedquery'
- 2. Five NLI datasets with ~885k texts: "mnli", "anli", "fever", "wanli", "ling"
+ 'trueteacher', 'spam', 'wellformedquery'.
+ See details on each dataset here: https://docs.google.com/spreadsheets/d/1Z18tMh02IiWgh6o8pfoMiI_LH4IXpr78wd_nmNd5FaE/edit?usp=sharing
+ 3. Five NLI datasets with ~885k texts: "mnli", "anli", "fever", "wanli", "ling"

  Note that compared to other NLI models, this model predicts two classes (`entailment` vs. `not_entailment`)
  as opposed to three classes (entailment/neutral/contradiction)

@@ -59,8 +60,8 @@ Please consult the original DeBERTa paper and the papers for the different datas
  The base model (DeBERTa-v3) is published under the MIT license.
  The datasets the model was fine-tuned on are published under a diverse set of licenses.
  The following spreadsheet provides an overview of the non-NLI datasets used for fine-tuning.
- The spreadsheets contains information on licenses, the underlying papers etc.
- https://docs.google.com/spreadsheets/d/1Z18tMh02IiWgh6o8pfoMiI_LH4IXpr78wd_nmNd5FaE/edit?usp=sharing
+ The spreadsheets contains information on licenses, the underlying papers etc.: https://docs.google.com/spreadsheets/d/1Z18tMh02IiWgh6o8pfoMiI_LH4IXpr78wd_nmNd5FaE/edit?usp=sharing
+
  In addition, the model was also trained on the following NLI datasets: MNLI, ANLI, WANLI, LING-NLI, FEVER-NLI.

  ## Citation
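
The README note above that the model predicts two classes (`entailment` vs. `not_entailment`) is what lets it plug directly into the Hugging Face `zero-shot-classification` pipeline. Below is a minimal sketch of that usage; the model id is an assumption for illustration only (the commit does not show the repository name), so substitute the actual id of this checkpoint.

```python
# Minimal usage sketch for a two-class NLI checkpoint of this kind via the
# zero-shot-classification pipeline.
from transformers import pipeline

model_id = "MoritzLaurer/deberta-v3-base-zeroshot-v1"  # assumed/hypothetical id, replace with the real one

classifier = pipeline("zero-shot-classification", model=model_id)

text = "The new update makes the app crash every time I open it."
candidate_labels = ["bug report", "feature request", "praise"]

# The pipeline turns each candidate label into a hypothesis (by default
# "This example is {label}.") and the model scores entailment vs.
# not_entailment for every (text, hypothesis) pair, which is why two output
# classes are sufficient for zero-shot classification.
result = classifier(text, candidate_labels)
print(result["labels"])
print(result["scores"])
```

For multi-label use cases, passing `multi_label=True` scores each candidate label independently instead of normalizing the entailment scores across labels.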