---
license: cc-by-4.0
task_categories:
  - question-answering
  - conversational
language:
  - en
tags:
  - complex
  - question answering
  - convQA
  - conversationalAI
  - conversational
  - QA
  - heterogeneous sources
pretty_name: ConvMix
size_categories:
  - 10K<n<100K
splits:
  - name: train
    num_examples: 8400
  - name: validation
    num_examples: 2800
  - name: test
    num_examples: 4800
---

# Dataset Card for ConvMix

## Dataset Description

### Dataset Summary

We construct and release the first benchmark, ConvMix, for conversational question answering (ConvQA) over heterogeneous sources, comprising 3000 real-user conversations with 16000 questions, along with entity annotations, completed question utterances, and question paraphrases. The dataset naturally requires information from multiple sources for answering the individual questions in the conversations.

## Dataset Creation

The ConvMix benchmark was created by real humans, and we tried to keep the collected data as natural as possible. Master crowdworkers on Amazon Mechanical Turk (AMT) selected an entity of interest in a specific domain and then issued conversational questions about this entity, potentially drifting to other topics of interest over the course of the conversation. By letting users choose the entities themselves, we aimed to ensure that they were genuinely interested in the topics the conversations are based on. After writing a question, users were asked to find the answer in either Wikidata, Wikipedia text, a Wikipedia table, or a Wikipedia infobox, whichever they found most natural for the specific question at hand. Since Wikidata requires some basic understanding of knowledge bases, we provided video guidelines that illustrated, following an example conversation, how Wikidata can be used for detecting answers.

For each conversational question, which might be incomplete, the crowdworker provided a completed question that is intent-explicit and can be answered without the conversational context. These completed questions constitute the CompMix dataset. We also provide the answer source in which the user found the answer, as well as the question entities.
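To illustrate how these per-turn annotations can be consumed, here is a minimal sketch. The field names (`question`, `completed`, `answer_src`, `entities`) and the example turn are illustrative assumptions, not the dataset's actual schema; a helper falls back to the completed, intent-explicit form when no conversational context is available.

```python
# Minimal sketch of consuming per-turn ConvMix-style annotations.
# NOTE: the field names ("question", "completed", "answer_src", "entities")
# are illustrative assumptions, not the dataset's actual schema.

def question_for_model(turn: dict, have_context: bool) -> str:
    """Return the raw conversational question when the model sees the
    dialogue history, otherwise the intent-explicit completed form."""
    return turn["question"] if have_context else turn["completed"]

# A hypothetical follow-up turn in a conversation about a film:
turn = {
    "question": "who directed it?",              # incomplete, needs context
    "completed": "who directed the film Inception?",
    "answer_src": "wikipedia_infobox",           # one of the four sources
    "entities": ["Inception"],
}

print(question_for_model(turn, have_context=True))   # raw follow-up
print(question_for_model(turn, have_context=False))  # standalone form
```

This also shows why the completed questions can stand alone as their own benchmark: stripped of the dialogue history, each one is still fully answerable.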