---
datasets:
  - midas/krapivin
  - midas/inspec
language:
  - en
widget:
  - text: >-
      <|KEYPHRASES|> In this paper, we investigate cross-domain limitations of
      keyphrase generation using the models for abstractive text summarization.
      We present an evaluation of BART fine-tuned for keyphrase generation
      across three types of texts, namely scientific texts from computer science
      and biomedical domains and news texts. We explore the role of transfer
      learning between different domains to improve the model performance on
      small text corpora.
  - text: >-
      <|TITLE|> In this paper, we investigate cross-domain limitations of
      keyphrase generation using the models for abstractive text summarization.
      We present an evaluation of BART fine-tuned for keyphrase generation
      across three types of texts, namely scientific texts from computer science
      and biomedical domains and news texts. We explore the role of transfer
      learning between different domains to improve the model performance on
      small text corpora.
  - text: >-
      <|KEYPHRASES|> Relevance has traditionally been linked with feature subset
      selection, but formalization of this link has not been attempted. In this
      paper, we propose two axioms for feature subset selection, a sufficiency
      axiom and a necessity axiom, based on which this link is formalized: The
      expected feature subset is the one which maximizes relevance. Finding the
      expected feature subset turns out to be NP-hard. We then devise a
      heuristic algorithm to find the expected subset which has a polynomial
      time complexity. The experimental results show that the algorithm finds
      good enough subset of features which, when presented to C4.5, results in
      better prediction accuracy.
  - text: >-
      <|TITLE|> Relevance has traditionally been linked with feature subset
      selection, but formalization of this link has not been attempted. In this
      paper, we propose two axioms for feature subset selection, a sufficiency
      axiom and a necessity axiom, based on which this link is formalized: The
      expected feature subset is the one which maximizes relevance. Finding the
      expected feature subset turns out to be NP-hard. We then devise a
      heuristic algorithm to find the expected subset which has a polynomial
      time complexity. The experimental results show that the algorithm finds
      good enough subset of features which, when presented to C4.5, results in
      better prediction accuracy.
library_name: transformers
---

# BART fine-tuned for keyphrase generation

This is the bart-base (Lewis et al., 2019) model fine-tuned for generating titles and keyphrases for scientific texts on the following corpora:

- Krapivin
- Inspec

Inspired by Cachola et al. (2020), we applied control codes to fine-tune BART in a multi-task manner. First, we create a training set containing comma-separated lists of keyphrases and titles as text generation targets. For this purpose, we form text-title and text-keyphrases pairs based on the original text corpus. Second, we prepend each source text in the training set with the control code `<|TITLE|>` or `<|KEYPHRASES|>`, respectively. After that, the training set is shuffled. Finally, the preprocessed training set is used to fine-tune the pre-trained BART model.
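
Below is a minimal sketch of this preprocessing step. The record structure and field names (`text`, `title`, `keyphrases`) are illustrative assumptions and do not reflect the exact format of the original corpora.

```python
import random

# Toy corpus: each record holds a source text, its title, and its gold keyphrases.
# Field names and contents are illustrative placeholders.
corpus = [
    {
        "text": "In this paper, we investigate cross-domain limitations of keyphrase generation ...",
        "title": "An example title",
        "keyphrases": ["keyphrase generation", "abstractive summarization", "transfer learning"],
    },
]

training_set = []
for record in corpus:
    # text-title pair: the source text is prefixed with the <|TITLE|> control code
    training_set.append({"source": "<|TITLE|> " + record["text"],
                         "target": record["title"]})
    # text-keyphrases pair: the target is a comma-separated list of keyphrases
    training_set.append({"source": "<|KEYPHRASES|> " + record["text"],
                         "target": ", ".join(record["keyphrases"])})

# Shuffle so that both tasks are mixed during fine-tuning.
random.shuffle(training_set)
```

The snippet below loads the fine-tuned model from the Hub and generates keyphrases and a title for a single abstract.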

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# load the fine-tuned model and its tokenizer from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("beogradjanka/bart_multitask_finetuned_for_title_and_keyphrase_generation")
model = AutoModelForSeq2SeqLM.from_pretrained("beogradjanka/bart_multitask_finetuned_for_title_and_keyphrase_generation")

text = ("In this paper, we investigate cross-domain limitations of keyphrase generation "
        "using the models for abstractive text summarization. We present an evaluation of "
        "BART fine-tuned for keyphrase generation across three types of texts, namely "
        "scientific texts from computer science and biomedical domains and news texts. "
        "We explore the role of transfer learning between different domains to improve "
        "the model performance on small text corpora.")

# generating keyphrases: prepend the <|KEYPHRASES|> control code to the source text
inputs = tokenizer("<|KEYPHRASES|> " + text, return_tensors="pt")
outputs = model.generate(**inputs)
keyphrases = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
print(keyphrases)

# generating a title: prepend the <|TITLE|> control code to the source text
inputs = tokenizer("<|TITLE|> " + text, return_tensors="pt")
outputs = model.generate(**inputs)
title = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
print(title)
```
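
Generation can be tuned through the standard arguments of `generate()`. The values below (beam search with four beams and an explicit cap on new tokens) are illustrative settings to experiment with, not the configuration used to produce the results reported in the paper.

```python
# Optional: beam search and an explicit length cap (illustrative values).
outputs = model.generate(**inputs, num_beams=4, max_new_tokens=60)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```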

## Training Hyperparameters

The following hyperparameters were used during training (a sketch of how they might map onto a standard `transformers` training setup follows the list):

- learning_rate: 4e-5
- train_batch_size: 8
- optimizer: AdamW with betas=(0.9, 0.999) and epsilon=1e-08
- num_epochs: 3
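
The sketch below shows one way these hyperparameters could be wired into a standard `transformers` fine-tuning setup. The use of `Seq2SeqTrainer`, the tokenization lengths, and the dataset handling are assumptions for illustration; only the hyperparameter values come from this model card, and `training_set` refers to the multi-task pairs built in the preprocessing sketch above.

```python
from datasets import Dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSeq2SeqLM,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")

# Wrap the shuffled multi-task pairs from the preprocessing sketch and tokenize them.
dataset = Dataset.from_list(training_set)

def tokenize(batch):
    # max_length values are illustrative assumptions
    model_inputs = tokenizer(batch["source"], truncation=True, max_length=512)
    labels = tokenizer(text_target=batch["target"], truncation=True, max_length=64)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

args = Seq2SeqTrainingArguments(
    output_dir="bart_multitask_keyphrase_title",
    learning_rate=4e-5,             # from the model card
    per_device_train_batch_size=8,  # train_batch_size: 8
    num_train_epochs=3,             # num_epochs: 3
    adam_beta1=0.9,                 # AdamW betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,              # epsilon: 1e-08
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```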

**BibTeX:**

```bibtex
@article{glazkova2022applying,
  title={Applying transformer-based text summarization for keyphrase generation},
  author={Glazkova, Anna and Morozov, Dmitry},
  journal={arXiv preprint arXiv:2209.03791},
  year={2022}
}
```