---
license: mit
task_categories:
- text-classification
language:
- nl
pretty_name: DBRD
configs:
- config_name: default
  data_files:
  - split: train
    path:
    - train/neg/*
    - train/pos/*
  - split: test
    path:
    - test/neg/*
    - test/pos/*
dataset_info:
  features:
  - name: text
    dtype: string
  - name: label
    dtype: int64  # 1 for positive, -1 for negative
  splits:
  - name: train
    num_examples: 20027
  - name: test
    num_examples: 2223
  download_size: 79.1MB
  dataset_size: 773.4MB
---
# Dataset Card for "DBRD: Dutch Book Reviews Dataset"
|
32 |
+
|
33 |
+
|
34 |
+
Translation of the [Dutch Book Review Dataset (DBRD)](https://github.com/benjaminvdb/DBRD), an extensive collection of over 110k book reviews with associated binary sentiment polarity labels. The dataset is designed for sentiment classification in Dutch and is influenced by the [Large Movie Review Dataset](http://ai.stanford.edu/~amaas/data/sentiment/).
|
35 |
+
|
36 |
+
The dataset and the scripts used for scraping the reviews from [Hebban](Hebban), a Dutch platform for book enthusiasts, can be found in the [DBRD GitHub repository](https://github.com/benjaminvdb/DBRD).
|
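
Below is a minimal loading sketch using the Hugging Face `datasets` library. The repository id `username/DBRD` is a placeholder assumption; substitute this dataset's actual Hub path.

```python
from datasets import load_dataset

# "username/DBRD" is a placeholder (assumption): use the actual Hub repo id.
dataset = load_dataset("username/DBRD")

train, test = dataset["train"], dataset["test"]  # 20,027 / 2,223 examples
print(train[0]["text"][:200])  # first 200 characters of a review
print(train[0]["label"])       # 1 = positive, -1 = negative
```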

# Labels

Distribution of positive/negative/neutral labels, in rounded percentages:

```
training: 50/50/0
test:     50/50/0
```
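
A short sketch (same placeholder repo id as above) for recomputing these percentages from the `label` column:

```python
from collections import Counter

from datasets import load_dataset

ds = load_dataset("username/DBRD")  # placeholder repo id (assumption)
for split in ("train", "test"):
    counts = Counter(ds[split]["label"])
    total = sum(counts.values())
    print(split, {lbl: round(100 * n / total) for lbl, n in counts.items()})
```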

# Attribution

Please use the following citation when making use of this dataset in your work:

```bibtex
@article{DBLP:journals/corr/abs-1910-00896,
  author        = {Benjamin van der Burgh and
                   Suzan Verberne},
  title         = {The merits of Universal Language Model Fine-tuning for Small Datasets
                   - a case with Dutch book reviews},
  journal       = {CoRR},
  volume        = {abs/1910.00896},
  year          = {2019},
  url           = {http://arxiv.org/abs/1910.00896},
  archivePrefix = {arXiv},
  eprint        = {1910.00896},
  timestamp     = {Fri, 04 Oct 2019 12:28:06 +0200},
  biburl        = {https://dblp.org/rec/journals/corr/abs-1910-00896.bib},
  bibsource     = {dblp computer science bibliography, https://dblp.org}
}
```

# Acknowledgements (as per the GitHub repository)

From the original author:

> This dataset was created for testing out the ULMFiT (by Jeremy Howard and Sebastian Ruder) deep learning algorithm for text classification. It is implemented in the FastAI Python library that has taught me a lot. I'd also like to thank Timo Block for making his 10kGNAD dataset publicly available and giving me a starting point for this dataset. The dataset structure is based on the Large Movie Review Dataset by Andrew L. Maas et al. Thanks to Andreas van Cranenburg for pointing out a problem with the dataset.
>
> And of course I'd like to thank all the reviewers on Hebban for having taken the time to write all these reviews. You've made both book enthusiasts and NLP researchers very happy :)