|
--- |
|
language: |
|
- de |
|
--- |
|
|
|
# HisGermaNER: NER Datasets for Historical German |
|
|
|
<img src="https://huggingface.co/datasets/stefan-it/HisGermaNER/resolve/main/assets/logo.jpeg" width="500" height="500" /> |
|
|
|
In this repository we release another NER dataset from historical German newspapers. |
|
|
|
## Newspaper Corpus
|
|
|
In the first release of our dataset, we select 11 newspaper issues from 1720 to 1840 from the Austrian National Library (ONB), resulting in 100 pages:
|
|
|
| Year | ONB ID             | Newspaper                        | URL                                                                      | Pages |
| ---- | ------------------ | -------------------------------- | ------------------------------------------------------------------------ | ----- |
| 1720 | `ONB_wrz_17200511` | Wiener Zeitung                   | [Viewer](https://anno.onb.ac.at/cgi-content/anno?aid=wrz&datum=17200511) | 10    |
| 1730 | `ONB_wrz_17300603` | Wiener Zeitung                   | [Viewer](https://anno.onb.ac.at/cgi-content/anno?aid=wrz&datum=17300603) | 14    |
| 1740 | `ONB_wrz_17401109` | Wiener Zeitung                   | [Viewer](https://anno.onb.ac.at/cgi-content/anno?aid=wrz&datum=17401109) | 12    |
| 1770 | `ONB_rpr_17700517` | Reichspostreuter                 | [Viewer](https://anno.onb.ac.at/cgi-content/anno?aid=rpr&datum=17700517) | 4     |
| 1780 | `ONB_wrz_17800701` | Wiener Zeitung                   | [Viewer](https://anno.onb.ac.at/cgi-content/anno?aid=wrz&datum=17800701) | 24    |
| 1790 | `ONB_pre_17901030` | Preßburger Zeitung               | [Viewer](https://anno.onb.ac.at/cgi-content/anno?aid=pre&datum=17901030) | 12    |
| 1800 | `ONB_ibs_18000322` | Intelligenzblatt von Salzburg    | [Viewer](https://anno.onb.ac.at/cgi-content/anno?aid=ibs&datum=18000322) | 8     |
| 1810 | `ONB_mgs_18100508` | Morgenblatt für gebildete Stände | [Viewer](https://anno.onb.ac.at/cgi-content/anno?aid=mgs&datum=18100508) | 4     |
| 1820 | `ONB_wan_18200824` | Der Wanderer                     | [Viewer](https://anno.onb.ac.at/cgi-content/anno?aid=wan&datum=18200824) | 4     |
| 1830 | `ONB_ild_18300713` | Das Inland                       | [Viewer](https://anno.onb.ac.at/cgi-content/anno?aid=ild&datum=18300713) | 4     |
| 1840 | `ONB_hum_18400625` | Der Humorist                     | [Viewer](https://anno.onb.ac.at/cgi-content/anno?aid=hum&datum=18400625) | 4     |
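Note: the ANNO viewer URLs follow directly from the ONB IDs. A small illustrative helper (not part of the dataset tooling) that reconstructs the viewer link:

```python
def viewer_url(onb_id: str) -> str:
    """Build the ANNO viewer URL from an ONB ID such as 'ONB_wrz_17800701'."""
    _, aid, datum = onb_id.split("_")
    return f"https://anno.onb.ac.at/cgi-content/anno?aid={aid}&datum={datum}"

print(viewer_url("ONB_hum_18400625"))
# https://anno.onb.ac.at/cgi-content/anno?aid=hum&datum=18400625
```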
|
|
|
## Data Workflow |
|
|
|
In the first step, we obtain original scans from ONB for our selected newspapers. In the second step, we perform OCR using [Transkribus](https://readcoop.eu/de/transkribus/). |
|
|
|
We use the [Transkribus print M1](https://readcoop.eu/model/transkribus-print-multi-language-dutch-german-english-finnish-french-swedish-etc/) model for performing OCR. |
|
Note: we experimented with an existing NewsEye model, but the print M1 model is newer and led to better performance in our preliminary experiments. |
|
|
|
Only layout hints/fixes were made in Transkribus; no OCR corrections or normalizations were performed at this stage.
|
|
|
<img src="https://huggingface.co/datasets/stefan-it/HisGermaNER/resolve/main/assets/transkribus_wrz_17401109.png" width="500" height="500" /> |
|
|
|
We export all newspaper pages into plain text format and normalize hyphenation and the `=` character.
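A minimal sketch of such a normalization step (the `normalize_page` helper is illustrative and simplified, not the exact code used for this dataset):

```python
import re

def normalize_page(text: str) -> str:
    """Join words that were hyphenated at line breaks.

    In the original Fraktur typesetting, line-break hyphenation is
    often rendered as `=` (double oblique hyphen) instead of `-`.
    """
    # "Zei=\ntung" or "Zei-\ntung" -> "Zeitung" (simplified rule)
    return re.sub(r"[=-]\n\s*", "", text)

print(normalize_page("Wiener Zei=\ntung"))  # Wiener Zeitung
```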
|
After normalization we tokenize the plain text newspaper pages using the `PreTokenizer` of the [hmBERT](https://huggingface.co/hmbert) model. |
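The pre-tokenization step can be reproduced roughly as follows, assuming the hmBERT checkpoint published as `dbmdz/bert-base-historic-multilingual-cased`:

```python
from transformers import AutoTokenizer

# hmBERT ships a fast tokenizer whose Rust backend exposes the
# pre-tokenizer directly (the checkpoint name is an assumption here).
tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-historic-multilingual-cased")

text = "Die Wiener Zeitung berichtet."
# pre_tokenize_str returns (token, (start, end)) pairs, without
# any subword splitting.
pre_tokenizer = tokenizer.backend_tokenizer.pre_tokenizer
tokens = [token for token, _ in pre_tokenizer.pre_tokenize_str(text)]
print(tokens)  # ['Die', 'Wiener', 'Zeitung', 'berichtet', '.']
```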
|
|
|
After pre-tokenization we import the corpus into Argilla to start the annotation of named entities. |
|
Note: we perform annotation at page/document level, so no sentence segmentation is needed or performed.
|
In the annotation process we also manually annotate sentence boundaries using a special `EOS` tag. |
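For illustration, the import step could look as follows with Argilla's v1-style Python client (the dataset name and metadata keys are assumptions, not the exact code we used):

```python
import argilla as rg  # assumes a running, rg.init()-configured Argilla instance

# One record per pre-tokenized newspaper page.
tokens = ["den", "Pöbel", "noch", "mehr", "in", "Harnisch", "."]

record = rg.TokenClassificationRecord(
    text=" ".join(tokens),
    tokens=tokens,
    metadata={"onb_id": "ONB_wrz_17800701", "page_nr": 12},
)

rg.log(records=[record], name="hisgermaner-pages")  # hypothetical dataset name
```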
|
|
|
<img src="https://huggingface.co/datasets/stefan-it/HisGermaNER/resolve/main/assets/argilla_wrz_17401109.png" width="600" height="600" /> |
|
|
|
After the annotation process, the dataset is exported into a CoNLL-like format.

The `EOS` tag is removed and the end-of-sentence information is stored in a special `MISC` column (as `EndOfSentence`, see the example below).
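A minimal sketch of this conversion (the triple-based in-memory representation is an assumption):

```python
def to_conll_rows(annotated_tokens):
    """Turn annotated tokens into CoNLL-like rows.

    `annotated_tokens` holds (token, ne_tag, has_eos) triples; the
    `EOS` annotation is dropped from the tag column and stored in
    the MISC column as `EndOfSentence` instead.
    """
    rows = []
    for token, ne_tag, has_eos in annotated_tokens:
        misc = "EndOfSentence" if has_eos else "_"
        rows.append(f"{token}\t{ne_tag}\t{misc}")
    return rows

print("\n".join(to_conll_rows([("Harnisch", "O", False), (".", "O", True)])))
```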
|
|
|
## Annotation Guidelines |
|
|
|
We use the same NE types (`PER`, `LOC` and `ORG`) and annotation guidelines as the awesome [Europeana NER Corpora](https://github.com/cneud/ner-corpora).
|
|
|
Furthermore, we introduce some specific annotation rules:

* `PER`: We include e.g. `Kaiser`, `Lord`, `Cardinal` or `Graf` in the NE span, but not `Herr`, `Fräulein`, `General` or other ranks/grades.
* `LOC`: We exclude `Königreich` from the NE span.
|
|
|
## Dataset Format |
|
|
|
Our dataset format is inspired by the [HIPE-2022 Shared Task](https://github.com/hipe-eval/HIPE-2022-data?tab=readme-ov-file#hipe-format-and-tagging-scheme). |
|
Here's an example of an annotated document: |
|
|
|
```txt
TOKEN NE-COARSE-LIT MISC

-DOCSTART- O _

# onb:id = ONB_wrz_17800701
# onb:image_link = https://anno.onb.ac.at/cgi-content/anno?aid=wrz&datum=17800701&seite=12
# onb:page_nr = 12
# onb:publication_year_str = 17800701
den O _
Pöbel O _
noch O _
mehr O _
in O _
Harnisch O _
. O EndOfSentence
Sie O _
legten O _
sogleich O _
```
|
|
|
Note: we include a `-DOCSTART-` marker to allow e.g. document-level features for NER, as proposed in the [FLERT](https://arxiv.org/abs/2011.06993) paper.
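A minimal sketch of a parser for this format (columns are assumed to be whitespace-separated, as in the example above):

```python
def read_hisgermaner(path):
    """Parse the CoNLL-like export into per-page documents.

    Each document carries its `# onb:...` metadata plus a list of
    (token, ne_tag, misc) triples.
    """
    documents = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("TOKEN"):
                continue  # skip blank lines and the header row
            if line.startswith("#"):
                # metadata comment, e.g. "# onb:page_nr = 12"
                key, _, value = line.lstrip("# ").partition(" = ")
                documents[-1]["metadata"][key] = value
                continue
            token, ne_tag, misc = line.split()
            if token == "-DOCSTART-":
                documents.append({"metadata": {}, "tokens": []})
                continue
            documents[-1]["tokens"].append((token, ne_tag, misc))
    return documents
```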
|
|
|
## Dataset Splits & Stats |
|
|
|
For training powerful NER models on the dataset, we manually split it at document level into training, development and test splits.
|
|
|
The training split consists of 73 documents, the development split of 13 documents and the test split of 14 documents (100 documents in total, one per newspaper page).
|
|
|
We perform dehyphenation as the one and only preprocessing step. The final dataset splits can be found in the `splits` folder of this dataset repository.
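The splits can then be loaded e.g. with Flair's `ColumnCorpus` reader; the file names below are assumptions about the `splits` folder layout, and the header row may need to be stripped beforehand:

```python
from flair.datasets import ColumnCorpus

# Column layout of the HIPE-inspired format.
columns = {0: "text", 1: "ner", 2: "misc"}

corpus = ColumnCorpus(
    "splits",                # folder containing the dataset splits
    columns,
    train_file="train.tsv",  # assumed file names
    dev_file="dev.tsv",
    test_file="test.tsv",
    comment_symbol="#",      # skip the "# onb:..." metadata lines
    document_separator_token="-DOCSTART-",
)
print(corpus)
```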
|
|
|
Some dataset statistics - instances per class: |
|
|
|
| Class | Training | Development | Test |
| ----- | -------- | ----------- | ---- |
| `PER` | 942      | 308         | 238  |
| `LOC` | 749      | 217         | 216  |
| `ORG` | 16       | 3           | 11   |
|
|
|
Number of sentences (incl. document marker) per split: |
|
|
|
|           | Training | Development | Test |
| --------- | -------- | ----------- | ---- |
| Sentences | 1,539    | 406         | 400  |
|
|
|
## Release Cycles
|
|
|
We plan to release new updated versions of this dataset on a regular basis (e.g. monthly). |
|
For now, we want to collect some feedback about the dataset first, so we use `v0` as the current version.
|
|
|
## Questions & Feedback
|
|
|
Please open a new discussion [here](https://huggingface.co/datasets/stefan-it/HisGermaNER/discussions) for questions or feedback! |
|
|
|
## License
|
|
|
The dataset is (currently) licensed under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/).
|
|