ITER: Iterative Transformer-based Entity Recognition and Relation Extraction

This model checkpoint is part of the collection of models published alongside our paper ITER, accepted at the Findings of EMNLP 2024.
To ease reproducibility and enable open research, our source code has been published on GitHub.

This model achieves an F1 score of 92.060 on the CoNLL03 dataset.

Using ITER in your code

First, install ITER in your preferred environment:

pip install git+https://github.com/fleonce/iter

To use our model, refer to the following code:

from iter import ITER

model = ITER.from_pretrained("fleonce/iter-conll03-deberta-large")
tokenizer = model.tokenizer

encodings = tokenizer(
    "An art exhibit at the Hakawati Theatre in Arab east Jerusalem was a series of portraits of Palestinians killed in the rebellion .",
    return_tensors="pt"
)

generation_output = model.generate(
    encodings["input_ids"],
    attention_mask=encodings["attention_mask"],
)

# entities
print(generation_output.entities)

# relations between entities
print(generation_output.links)
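
The snippet below extends this to batched inference on a GPU. It is a minimal sketch, not an official example: it assumes ITER.from_pretrained returns a standard torch.nn.Module and that model.tokenizer pads like a Hugging Face tokenizer.

import torch

from iter import ITER

# assumption: ITER is a torch.nn.Module, so .to(device) and .eval() apply
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = ITER.from_pretrained("fleonce/iter-conll03-deberta-large").to(device)
model.eval()
tokenizer = model.tokenizer

# assumption: the tokenizer supports padding like a Hugging Face tokenizer
texts = [
    "An art exhibit at the Hakawati Theatre in Arab east Jerusalem was a series of portraits of Palestinians killed in the rebellion .",
    "The exhibit opened in Jerusalem last week .",
]
encodings = tokenizer(texts, return_tensors="pt", padding=True).to(device)

with torch.no_grad():  # inference only, no gradients needed
    generation_output = model.generate(
        encodings["input_ids"],
        attention_mask=encodings["attention_mask"],
    )

# entities and relations for the whole batch
print(generation_output.entities)
print(generation_output.links)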

Checkpoints

We publish checkpoints for the models performing best on the following datasets: ACE05, ADE, CoNLL03, CoNLL04, GENIA, and SciERC.

Reproducibility

For each dataset, we selected the best-performing checkpoint out of the 5 training runs we performed. This model was trained with the following hyperparameters:

  • Seed: 2
  • Config: conll03/small_lr
  • PyTorch 2.3.0 with CUDA 11.8 and precision torch.bfloat16
  • GPU: 1 NVIDIA H100 SXM 80 GB

In our reproducibility tests, varying the GPU, the CUDA version, or the training precision led to slightly different final results.
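
Run-to-run differences can be reduced (though not eliminated) by fixing all relevant RNG seeds before training. The helper below is a hypothetical sketch in plain PyTorch, not code from the ITER repository:

import random

import numpy as np
import torch

def set_seed(seed: int) -> None:
    # hypothetical helper: seed every RNG that typically affects a training run
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)

set_seed(2)  # the seed used for this checkpoint

Even with a fixed seed, some CUDA kernels are nondeterministic, which is consistent with the small differences across GPUs and precisions noted above.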

To train this model, use the following command:

python3 train.py --dataset conll03/small_lr --transformer microsoft/deberta-v3-large --use_bfloat16 --seed 2

Citation

If you use ITER in your work, please cite our paper:

@inproceedings{hennen-etal-2024-iter,
    title = "{ITER}: Iterative Transformer-based Entity Recognition and Relation Extraction",
    author = "Hennen, Moritz  and
      Babl, Florian  and
      Geierhos, Michaela",
    editor = "Al-Onaizan, Yaser  and
      Bansal, Mohit  and
      Chen, Yun-Nung",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2024",
    month = nov,
    year = "2024",
    address = "Miami, Florida, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.findings-emnlp.655",
    doi = "10.18653/v1/2024.findings-emnlp.655",
    pages = "11209--11223",
    abstract = "When extracting structured information from text, recognizing entities and extracting relationships are essential. Recent advances in both tasks generate a structured representation of the information in an autoregressive manner, a time-consuming and computationally expensive approach. This naturally raises the question of whether autoregressive methods are necessary in order to achieve comparable results. In this work, we propose ITER, an efficient encoder-based relation extraction model, that performs the task in three parallelizable steps, greatly accelerating a recent language modeling approach: ITER achieves an inference throughput of over 600 samples per second for a large model on a single consumer-grade GPU. Furthermore, we achieve state-of-the-art results on the relation extraction datasets ADE and ACE05, and demonstrate competitive performance for both named entity recognition with GENIA and CoNLL03, and for relation extraction with SciERC and CoNLL04.",
}