---
license: cc-by-sa-4.0
language:
- en
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- fill-mask
- token-classification
- text-classification
task_ids:
- named-entity-recognition
- slot-filling
pretty_name: Multi-Layer Materials Science Corpus
configs:
- config_name: MuLMS_Corpus
data_files:
- split: train
path: MuLMS_Corpus/train-*
- split: validation
path: MuLMS_Corpus/validation-*
- split: test
path: MuLMS_Corpus/test-*
default: true
- config_name: NER_Dependencies
data_files:
- split: train
path: NER_Dependencies/train-*
- split: validation
path: NER_Dependencies/validation-*
- split: test
path: NER_Dependencies/test-*
dataset_info:
- config_name: MuLMS_Corpus
features:
- name: doc_id
dtype: string
- name: sentence
dtype: string
- name: tokens
sequence: string
- name: beginOffset
dtype: int32
- name: endOffset
dtype: int32
- name: AZ_labels
dtype: string
- name: Measurement_label
dtype: string
- name: NER_labels
sequence:
- name: text
dtype: string
- name: id
dtype: int32
- name: value
dtype: string
- name: begin
dtype: string
- name: end
dtype: string
- name: tokenIndices
sequence: int32
- name: NER_labels_BILOU
sequence: string
- name: relations
sequence:
- name: ne_id_gov
dtype: int32
- name: ne_id_dep
dtype: int32
- name: label
dtype: string
- name: docFileName
dtype: string
- name: data_split
dtype: string
- name: category
dtype: string
splits:
- name: train
num_bytes: 7319898
num_examples: 7538
- name: validation
num_bytes: 1499121
num_examples: 1532
- name: test
num_bytes: 1236358
num_examples: 1114
download_size: 2792635
dataset_size: 10055377
- config_name: NER_Dependencies
features:
- name: ID
dtype: int32
- name: sentence
dtype: string
- name: token_id
dtype: int32
- name: token_text
dtype: string
- name: NE_Dependencies
dtype: string
- name: data_split
dtype: string
splits:
- name: train
num_bytes: 50517495
num_examples: 216806
- name: validation
num_bytes: 9320669
num_examples: 42010
- name: test
num_bytes: 8450774
num_examples: 33921
download_size: 3139932
dataset_size: 68288938
---
# Dataset Card for MuLMS
<p>
<img src="teaser.png">
<em>Example annotation in the Multi-Layer Materials Science Corpus (image source: <a href="https://arxiv.org/abs/2310.15569">MuLMS: A Multi-Layer Annotated Text Corpus for Information Extraction in the Materials Science Domain</a>)</em>
</p>
### Dataset Description
The Multi-Layer Materials Science corpus (MuLMS) consists of 50 documents (licensed CC BY) from the materials science domain, spanning the following seven subareas:
"Electrolysis", "Graphene", "Polymer Electrolyte Fuel Cell (PEMFC)", "Solid Oxide Fuel Cell (SOFC)", "Polymers", "Semiconductors", and "Steel".
It was exhaustively annotated by domain experts. Annotations are provided at the sentence and token level for the following NLP tasks:
- **Measurement Frames**: Measurement annotations are treated in a frame-like fashion, using the span type MEASUREMENT to mark the triggers (e.g.,
"was measured", "is plotted") that introduce the Measurement frame to the discourse. Deciding whether a sentence contains a measurement trigger is treated as a
sentence-level task; determining the span that triggers the measurement frame is treated as named entity recognition.
- **Named Entities**: There are 12 token-level named entities (+ Measurement trigger) available in MuLMS. Named entities can span across multiple tokens.
- **Relations**: MuLMS provides relations between pairs of entities. There are two types of relations: measurement-related relations, which always start
at Measurement trigger spans, and further relations, which do not start at a specific Measurement annotation.
- **Argumentative Zones**: Each sentence in MuLMS is assigned a rhetorical function in the discourse (e.g., _Background_ or _Experiment_Preparation_). There are 12 argumentative
zones in MuLMS, which leads to a sentence-level classification task.
You can find all experiment code files and further information in the [MuLMS-AZ Repo](https://github.com/boschresearch/mulms-az-codi2023) and [MuLMS Repo](https://github.com/boschresearch/mulms-wiesp2023).
For dataset statistics, please refer to both papers listed below, which also explain all parts of MuLMS in detail.
- **Curated by:** [Bosch Center for AI](https://www.bosch-ai.com/) and [Bosch Research](https://www.bosch.com/research/)
- **Funded by:** [Robert Bosch GmbH](https://www.bosch.de/)
- **Language(s) (NLP):** English
- **License:** [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/legalcode.txt)
## Dataset Details
MuLMS provides all annotated files in UIMA CAS XMI format, which can be opened with annotation tools that read this format, such as [INCEpTION](https://inception-project.github.io/).
### Dataset Sources
<!-- Provide the basic links for the dataset. -->
- **Repository:** https://github.com/boschresearch/mulms-az-codi2023, https://github.com/boschresearch/mulms-wiesp2023
- **Paper:** https://aclanthology.org/2023.codi-1.1/, https://arxiv.org/abs/2310.15569
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
This dataset aims at information extraction from materials science documents. It enables the training of (neural) classifiers that can be used for downstream tasks such as NER and relation extraction.
Please refer to both repos linked above for training BERT-like models on all NLP tasks provided in MuLMS.
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
MuLMS offers two configs: _MuLMS_Corpus_, which loads the entire MuLMS dataset, and _NER_Dependencies_, which loads only named entities in CoNLL format in order
to train models in the _NER as dependency parsing_ setting.
MuLMS is divided into three splits: _train_, _validation_, and _test_. Furthermore, _train_ is divided into five sub-splits, namely _tune1_, ..., _tune5_.
This allows for training a model on four sub-splits, early stopping on the fifth, model picking on validation, and evaluating only once on test.
HuggingFace `datasets` does not support such sub-splits, hence they are all loaded as _train_ and must be filtered afterward in a custom dataset loader using the `data_split` column.
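The sub-split handling described above can be sketched in a few lines of Python. This is a minimal illustration of the filtering logic only: the toy rows stand in for what `datasets.load_dataset` would return for the _train_ split, and `split_for_early_stopping` is an illustrative helper name, not part of the released code.

```python
# Toy rows standing in for the loaded "train" split; the real rows carry
# many more fields, but only `data_split` matters for the filtering.
rows = [
    {"sentence": "s1", "data_split": "tune1"},
    {"sentence": "s2", "data_split": "tune2"},
    {"sentence": "s3", "data_split": "tune5"},
]

def split_for_early_stopping(rows, held_out="tune5"):
    """Train on all tune sub-splits except `held_out`, which is used
    for early stopping (illustrative helper, not from the MuLMS repos)."""
    train = [r for r in rows if r["data_split"] != held_out]
    dev = [r for r in rows if r["data_split"] == held_out]
    return train, dev

train_rows, dev_rows = split_for_early_stopping(rows)
```

Rotating `held_out` over _tune1_ ... _tune5_ yields a five-fold setup over the training portion while keeping _validation_ untouched for model picking.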
### Dataset Config _MuLMS_Corpus_
- `doc_id`: ID of the source document that can be used to lookup the metadata of the paper in [MuLMS_Corpus_Metadata.csv](MuLMS_Corpus_Metadata.csv).
- `sentence`: Each instance in the dataset corresponds to one sentence extracted from scientific papers. These sentences are listed in this field.
- `tokens`: Pre-tokenized sentences. Each instance is a list of tokens.
- `beginOffset`: Character offset of the beginning of each sentence within the full text of the document.
- `endOffset`: Character offset of the end of each sentence within the full text of the document.
- `AZ_labels`: The argumentative zone (= rhetorical function) of each sentence in the discourse of a materials science publication.
- `Measurement_label`: Labels whether each sentence contains a measurement description, i.e., a measurement-frame-evoking trigger word.
- `NER_labels`: Contains parallel lists with named entities (NEs) per instance. Entries at the same index across these lists describe the same named entity:
- `text`: List of tokens that are contained in the current sentence instance.
- `id`: Unique ID for each named entity
- `value`: The named entity class
- `begin`: Character offsets of the begin tokens of each NE
- `end`: Character offsets of the end tokens of each NE
- `tokenIndices`: Indices of the NE's tokens within the `tokens` list
- `NER_labels_BILOU`: BILOU tag sequence per token in the sentence (B = begin, I = inside, L = last, O = outside, U = unit, i.e., a single-token entity).
- `relations`: Lists of relations between pair-wise entities. As with the named entities, each relation corresponds to the same index in all three lists (_ne_id_gov_, _ne_id_dep_, _label_)
- `ne_id_gov`: List of NE entity IDs that act as head of the relation
- `ne_id_dep`: List of NE entity IDs that are the tail of the relation
- `label`: Relation label between both entities
- `docFileName`: Name of the source document in the corpus
- `data_split`: Indicates the split to which a document belongs (tune1/2/3/4/5, dev, test)
- `category`: One of the 7 materials science sub-domains in MuLMS (Electrolysis, Graphene, PEMFC, SOFC, Polymers, Semiconductors, Steel)
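The `NER_labels_BILOU` tag sequences can be decoded back into entity spans with a short helper. This is a sketch under the assumption that tags use a hyphen separator (e.g., `B-MAT`); the exact separator in the released files may differ, and `bilou_to_spans` is an illustrative name.

```python
def bilou_to_spans(tags):
    """Decode a BILOU tag sequence into (start, end_inclusive, label) spans.
    Assumes tags like 'B-MAT', 'I-MAT', 'L-MAT', 'U-PROPERTY', and 'O'."""
    spans, start = [], None
    for i, tag in enumerate(tags):
        if tag == "O":
            start = None
            continue
        prefix, label = tag.split("-", 1)
        if prefix == "U":            # single-token entity
            spans.append((i, i, label))
        elif prefix == "B":          # entity begins here
            start = i
        elif prefix == "L" and start is not None:  # entity ends here
            spans.append((start, i, label))
            start = None
    return spans

spans = bilou_to_spans(["O", "B-MAT", "L-MAT", "U-PROPERTY", "O"])
# → [(1, 2, "MAT"), (3, 3, "PROPERTY")]
```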
### Dataset Config _NER_Dependencies_
Each instance in this config refers to one token and carries a copy of the entire sentence, i.e., for _n_ tokens in a sentence, the text of the sentence is repeated _n_ times.
- `ID`: Sentence ID. As opposed to the other config, the sentences here are not sorted by document and are provided in full for every token they belong to.
- `sentence`: Sentence string
- `token_id`: Unique ID for each token within its sentence. The ID is reset for each new sentence.
- `token_text`: Token string
- `NE_Dependencies`: The named entity tag of the form _k:LABEL_, where _k_ refers to the ID of the begin token and _LABEL_ to the named entity class. The entity ends at the token holding this label.
- `data_split`: Indicates the split to which a document belongs (tune1/2/3/4/5, dev, test)
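Decoding these per-token _k:LABEL_ tags back into entity spans can be sketched as follows. The function name is illustrative, and the sketch assumes 1-based token IDs and that tokens outside any entity carry a tag without a colon; check the released files for the exact conventions.

```python
def dependency_tags_to_spans(tags):
    """Decode per-token tags of the form 'k:LABEL' into (begin, end, label)
    spans: k is the ID of the entity's begin token, and the entity ends at
    the token carrying the tag. Assumes 1-based token IDs and that
    non-entity tokens hold a tag without a colon."""
    spans = []
    for end, tag in enumerate(tags, start=1):
        if ":" not in tag:           # token is outside any entity
            continue
        begin, label = tag.split(":", 1)
        spans.append((int(begin), end, label))
    return spans

# A single-token MAT at token 2 and a two-token UNIT spanning tokens 3-4:
spans = dependency_tags_to_spans(["0", "2:MAT", "0", "3:UNIT"])
# → [(2, 2, "MAT"), (3, 4, "UNIT")]
```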
### Labels
For the different layers, the following labels are available:
- **Measurement Frames**:
- `Measurement`
- `Qual_Measurement`
- **Named Entities**:
- `MAT`
- `NUM`
- `VALUE`
- `UNIT`
- `PROPERTY`
- `FORM`
- `MEASUREMENT` (measurement frame-evoking trigger)
- `CITE`
- `SAMPLE`
- `TECHNIQUE`
- `DEV`
- `RANGE`
- `INSTRUMENT`
- **Relations**:
- `hasForm`
- `measuresProperty`
- `usedAs`
- `propertyValue`
- `conditionProperty`
- `conditionSample`
- `conditionPropertyValue`
- `usesTechnique`
- `measuresPropertyValue`
- `usedTogether`
- `conditionEnv`
- `usedIn`
- `conditionInstrument`
- `takenFrom`
- `dopedBy`
- **Argumentative Zones**:
- `Motivation`
- `Background`
- `PriorWork`
- `Experiment`
- `Preparation`
- `Characterization`
- `Explanation`
- `Results`
- `Conclusion`
- `Heading`
- `Caption`
- `Metadata`
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
Keeping track of all relevant recent publications and experimental results for a research area is a challenging task. MuLMS addresses this problem by
providing a large set of annotated documents that allow for training models that can be used for automated information extraction and answering search queries
in materials science documents.
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
You can find all the details for every document in this corpus in [MuLMS_Corpus_Metadata.csv](MuLMS_Corpus_Metadata.csv).
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
You can find all the authors for every document in this corpus in [MuLMS_Corpus_Metadata.csv](MuLMS_Corpus_Metadata.csv).
#### Annotation process
The annotation process included guideline design in dedicated discussion sessions. Afterward, the text files were annotated
using [INCEpTION](https://inception-project.github.io/).
#### Who are the annotators?
The annotators worked collaboratively to annotate the dataset in the best possible way. Everyone involved in the project has a background in either materials science or computer
science. This synergy incorporates both views: the materials scientists' deep knowledge of the topics themselves, and the computer scientists' perspective on
processing text data automatically in a structured fashion.
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
This dataset does not contain any personal, sensitive or private data. MuLMS builds upon publicly available scientific publications and all authors are credited accordingly.
## Citation
If you use our software or dataset in your scientific work, please cite both papers:
**BibTeX:**
```
@misc{schrader2023mulms,
  title = {MuLMS: A Multi-Layer Annotated Text Corpus for Information Extraction in the Materials Science Domain},
  author = {Timo Pierre Schrader and Matteo Finco and Stefan Gr{\"u}newald and Felix Hildebrand and Annemarie Friedrich},
  year = {2023},
  eprint = {2310.15569},
  archivePrefix = {arXiv},
  primaryClass = {cs.CL}
}
@inproceedings{schrader-etal-2023-mulms,
  title = "{M}u{LMS}-{AZ}: An Argumentative Zoning Dataset for the Materials Science Domain",
  author = {Schrader, Timo and
    B{\"u}rkle, Teresa and
    Henning, Sophie and
    Tan, Sherry and
    Finco, Matteo and
    Gr{\"u}newald, Stefan and
    Indrikova, Maira and
    Hildebrand, Felix and
    Friedrich, Annemarie},
  booktitle = "Proceedings of the 4th Workshop on Computational Approaches to Discourse (CODI 2023)",
  month = jul,
  year = "2023",
  address = "Toronto, Canada",
  publisher = "Association for Computational Linguistics",
  url = "https://aclanthology.org/2023.codi-1.1",
  doi = "10.18653/v1/2023.codi-1.1",
  pages = "1--15",
}
```
## Changes
Changes to the source code from the original repo are listed in the [CHANGELOG](CHANGELOG) file.
## Copyright
```
Experiment resources related to the MuLMS corpus.
Copyright (c) 2023 Robert Bosch GmbH
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published
by the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
```
## License
This software is open-sourced under the AGPL-3.0 license. See the
[LICENSE_CODE](LICENSE_CODE) file for details.
The MuLMS corpus is released under the [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/legalcode.txt) license. See the [LICENSE_CORPUS](LICENSE_CORPUS) file for details.
## Dataset Card Authors
- Timo Pierre Schrader (Bosch Center for AI, University of Augsburg)
- Matteo Finco (Bosch Research)
- Stefan Grünewald (Bosch Center for AI, University of Stuttgart)
- Felix Hildebrand (Bosch Research)
- Annemarie Friedrich (University of Augsburg)
## Dataset Card Contact
For all questions, please contact [Timo Schrader](mailto:[email protected]).