---
license: mit
text_categories:
- lexical normalization
language:
- en
pretty_name: MaintNorm
size_categories:
- 10K<n<100K
multilingualism:
- monolingual
---

# MaintNorm Dataset Card

## Overview
The MaintNorm dataset is a collection of 12,000 English short texts extracted from maintenance work orders at three major mining organisations in Australia. Each text is annotated for both lexical normalisation and token-level entity tagging, making the dataset a valuable resource for natural language processing research and applications in industrial contexts.

For further information about the annotation process and dataset characteristics, refer to the [MaintNorm paper](https://aclanthology.org/2024.wnut-1.7/) or visit the [GitHub repository](https://github.com/nlp-tlp/maintnorm).

## Dataset Structure
This dataset includes data from three distinct company-specific sources (`company_a`, `company_b`, `company_c`), along with a `combined` dataset that integrates data across these sources. This structure supports both granular and comprehensive analyses.

## Masking Scheme

To address privacy and data specificity, the following token-level entity tags are used:
- `<id>`: Asset identifiers, for example _ENG001_, _rd1286_.
- `<sensitive>`: Sensitive information specific to organisations, including proprietary systems, third-party contractors, and names of personnel.
- `<num>`: Numerical entities, such as _8_, _7001223_.
- `<date>`: Representations of dates, either in numerical form such as _10/10/2023_ or phrase form such as _8th Dec_.

## Dataset Instances


The dataset adopts the standard normalisation format used in the WNUT shared tasks, with each text resembling the CoNLL-2003 layout: tokens are separated by newlines, each token is accompanied by its normalised or masked counterpart separated by a tab, and individual texts are separated by blank lines.

### Examples

```txt
Exhaust	exhaust
Fan	fan
#6	number <num>
Tripping	tripping
c/b	circuit breaker

HF338	<id>
INVESTAGATE	investigate
24V	<num> V
FAULT	fault
```
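The tab-separated format above is straightforward to load without any special tooling. As a minimal sketch (the function name `parse_maintnorm` is our own, not part of the dataset's official tooling), each text becomes a list of `(token, normalised)` pairs:

```python
def parse_maintnorm(text):
    """Parse WNUT-style normalisation data: one `token<TAB>normalised`
    pair per line, with blank lines separating individual texts."""
    texts, current = [], []
    for line in text.splitlines():
        if not line.strip():
            # Blank line ends the current text, if any tokens were collected.
            if current:
                texts.append(current)
                current = []
            continue
        # Split on the first tab only: the normalised side may contain
        # spaces (one-to-many mappings such as "c/b" -> "circuit breaker").
        token, norm = line.split("\t", 1)
        current.append((token, norm))
    if current:
        texts.append(current)
    return texts

sample = "Exhaust\texhaust\nFan\tfan\n\nHF338\t<id>\nFAULT\tfault\n"
parsed = parse_maintnorm(sample)
# parsed[0] -> [("Exhaust", "exhaust"), ("Fan", "fan")]
```

Splitting on the first tab only matters because the normalised side can expand a single token into several words, as in the `#6` and `c/b` examples above.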

## Citation

Please cite the following paper if you use this dataset in your research:

```
@inproceedings{bikaun-etal-2024-maintnorm,
    title = "{M}aint{N}orm: A corpus and benchmark model for lexical normalisation and masking of industrial maintenance short text",
    author = "Bikaun, Tyler  and
      Hodkiewicz, Melinda  and
      Liu, Wei",
    editor = {van der Goot, Rob  and
      Bak, JinYeong  and
      M{\"u}ller-Eberstein, Max  and
      Xu, Wei  and
      Ritter, Alan  and
      Baldwin, Tim},
    booktitle = "Proceedings of the Ninth Workshop on Noisy and User-generated Text (W-NUT 2024)",
    month = mar,
    year = "2024",
    address = "San {\.G}iljan, Malta",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.wnut-1.7",
    pages = "68--78",
}
```