---
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: dev
    path: data/dev-*
  - split: test
    path: data/test-*
  - split: gen
    path: data/gen-*
  - split: train_100
    path: data/train_100-*
dataset_info:
  features:
  - name: input
    dtype: string
  - name: output
    dtype: string
  - name: domain
    dtype: string
  splits:
  - name: train
    num_bytes: 4363969
    num_examples: 24155
  - name: dev
    num_bytes: 549121
    num_examples: 3000
  - name: test
    num_bytes: 548111
    num_examples: 3000
  - name: gen
    num_bytes: 5721102
    num_examples: 21000
  - name: train_100
    num_bytes: 5592847
    num_examples: 39500
  download_size: 5220150
  dataset_size: 16775150
---
# Dataset Card for "COGS"
This repository contains the dataset from the paper *COGS: A Compositional Generalization Challenge Based on Semantic Interpretation* (Kim & Linzen, EMNLP 2020).

It has five splits: train, dev, test, gen, and train_100, where gen is the generalization split and train_100 is the version of the training set with 100 exposure examples per primitive.
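You can confirm the available splits programmatically; a small sketch using the `datasets` library's split-listing helper:

```python
from datasets import get_dataset_split_names

# Expected to include: train, dev, test, gen, train_100
print(get_dataset_split_names("Punchwe/COGS"))
```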
You can load the splits by calling:

```python
import datasets

train_data = datasets.load_dataset("Punchwe/COGS", split="train")
train100_data = datasets.load_dataset("Punchwe/COGS", split="train_100")
gen_data = datasets.load_dataset("Punchwe/COGS", split="gen")
```
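Each example carries the three string fields declared in the metadata above (`input`, `output`, `domain`). A minimal sketch of inspecting one training example; the field descriptions in the comments follow the paper, where the output is the logical-form annotation of the input sentence:

```python
import datasets

train_data = datasets.load_dataset("Punchwe/COGS", split="train")

sample = train_data[0]
print(sample["input"])   # English source sentence
print(sample["output"])  # its logical-form (semantic) annotation
print(sample["domain"])  # category label; field name taken from the metadata
```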