---
license: mit
language:
- en
paperswithcode_id: embedding-data/sentence-compression
pretty_name: sentence-compression
task_categories:
- sentence-similarity
- paraphrase-mining
task_ids:
- semantic-similarity-classification
---
# Dataset Card for "sentence-compression"

## Table of Contents
- Dataset Description
- Dataset Structure
- Dataset Creation
- Considerations for Using the Data
- Additional Information
## Dataset Description
- Homepage: https://github.com/google-research-datasets/sentence-compression
- Repository: More Information Needed
- Paper: More Information Needed
- Point of Contact: Katja Filippova
- Size of downloaded dataset files:
- Size of the generated dataset:
- Total amount of disk used: 14.2 MB
### Dataset Summary

A dataset of pairs of equivalent sentences. The dataset is provided "AS IS" without any warranty, express or implied. Google disclaims all liability for any damages, direct or indirect, resulting from use of the dataset.
Disclaimer: The team releasing sentence-compression did not upload the dataset to the Hub and did not write a dataset card. These steps were done by the Hugging Face team.
### Supported Tasks
- Sentence Transformers training; useful for semantic search and sentence similarity.
### Languages
- English.
## Dataset Structure

Each example contains a pair of equivalent sentences, formatted as a dictionary with the key "set" whose value is a list holding the two sentences:
```python
{"set": [sentence_1, sentence_2]}
{"set": [sentence_1, sentence_2]}
...
{"set": [sentence_1, sentence_2]}
```
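Records in this shape can be read with the standard-library `json` module, one object per line. A minimal sketch (the sentences below are illustrative, not taken from the dataset):

```python
import json

# A small JSON Lines snippet in the format shown above (made-up example sentences).
jsonl = "\n".join([
    '{"set": ["The company reported record profits this quarter.", "Company reports record profits."]}',
    '{"set": ["Heavy rain caused flooding across the region.", "Rain causes flooding."]}',
])

# Each line is one JSON object whose "set" key holds a pair of equivalent sentences.
pairs = [json.loads(line)["set"] for line in jsonl.splitlines()]
print(pairs[0])
```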
This dataset is useful for training Sentence Transformers models. Refer to the following post on how to train models using similar pairs of sentences.
### Usage Example

Install the 🤗 Datasets library with `pip install datasets` and load the dataset from the Hub with:

```python
from datasets import load_dataset

dataset = load_dataset("embedding-data/sentence-compression")
```
The dataset is loaded as a `DatasetDict` and has the format:

```python
DatasetDict({
    train: Dataset({
        features: ['set'],
        num_rows: 180000
    })
})
```
Review an example `i` with:

```python
dataset["train"][i]["set"]
```
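For Sentence Transformers training, each "set" record is typically flattened into an (anchor, positive) pair, the shape expected by pair-based losses such as MultipleNegativesRankingLoss. A sketch using hypothetical rows in place of `dataset["train"]`:

```python
# Hypothetical rows mimicking dataset["train"][i] entries (illustrative sentences).
rows = [
    {"set": ["The Senate passed the bill on Tuesday.", "Senate passes bill."]},
    {"set": ["Shares of Acme rose 5% after earnings.", "Acme shares rise."]},
]

# Flatten each {"set": [sentence_1, sentence_2]} record into an (anchor, positive) pair.
pairs = [(row["set"][0], row["set"][1]) for row in rows]

for anchor, positive in pairs:
    print(anchor, "->", positive)
```

In real training code these tuples would then be wrapped in the pair/example type of whatever training framework you use.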