---
annotations_creators:
- crowdsourced
language_creators:
- found
languages:
- en-US
licenses:
- cc-by-3.0
multilinguality:
- monolingual
pretty_name: 'Sentence-level formality annotations for news, blogs, email and QA forums.


  Published in "An Empirical Analysis of Formality in Online Communication" (Pavlick
  and Tetreault, 2016)'
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- text-scoring
---

This dataset contains the sentence-level formality annotations used in the 2016 TACL paper "An Empirical Analysis of Formality in Online Communication" (Pavlick and Tetreault, 2016). It includes sentences from four genres (news, blogs, email, and QA forums), all annotated by humans on Amazon Mechanical Turk. The news and blog data were collected by Shibamouli Lahiri, and we are redistributing them here for the convenience of other researchers. We collected the email and answers data ourselves, using an annotation setup similar to Shibamouli's. If you use this data in your work, please cite BOTH of the papers below:

```
@article{PavlickAndTetreault-2016:TACL,
  author    = {Ellie Pavlick and Joel Tetreault},
  title     = {An Empirical Analysis of Formality in Online Communication},
  journal   = {Transactions of the Association for Computational Linguistics},
  year      = {2016},
  publisher = {Association for Computational Linguistics}
}

@article{Lahiri-2015:arXiv,
  title   = {{SQUINKY! A} Corpus of Sentence-level Formality, Informativeness, and Implicature},
  author  = {Lahiri, Shibamouli},
  journal = {arXiv preprint arXiv:1506.02306},
  year    = {2015}
}
```

## Contents

The annotated data files and the number of lines in each are as follows:

* 4977 `answers` -- Annotated sentences from a random sample of posts from the Yahoo! Answers forums: https://answers.yahoo.com/
* 1821 `blog` -- Annotated sentences from the top 100 blogs listed on http://technorati.com/ on October 31, 2009.
* 1701 `email` -- Annotated sentences from a random sample of emails from the Jeb Bush email archive: http://americanbridgepac.org/jeb-bushs-gubernatorial-email-archive/
* 2775 `news` -- Annotated sentences from the "breaking", "recent", and "local" news sections of the following 20 news sites: CNN, CBS News, ABC News, Reuters, BBC News Online, New York Times, Los Angeles Times, The Guardian (U.K.), Voice of America, Boston Globe, Chicago Tribune, San Francisco Chronicle, Times Online (U.K.), news.com.au, Xinhua, The Times of India, Seattle Post Intelligencer, Daily Mail, and Bloomberg L.P.

## Format

Each record contains the following fields:

1. `avg_score`: the mean formality rating, ranging from -3 to 3, where lower scores indicate less formal sentences
2. `sentence`: the annotated sentence text
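
As a small illustration of how the two fields can be used together, the sketch below bins the continuous `avg_score` into coarse formality labels; the thresholds, the pandas-based loading, and the `news.csv` file name are assumptions made for this example only:

```python
# Sketch: derive a coarse label from avg_score (-3 = very informal, 3 = very formal).
# The thresholds and file name are illustrative, not part of the dataset release.
import pandas as pd

df = pd.read_csv("news.csv")  # hypothetical file name; see Contents above

def coarse_label(score: float) -> str:
    if score <= -1.0:
        return "informal"
    if score >= 1.0:
        return "formal"
    return "neutral"

df["label"] = df["avg_score"].apply(coarse_label)
print(df[["avg_score", "label", "sentence"]].head())
```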