---
annotations_creators:
- expert-generated
language_creators:
- expert-generated
language:
- zh
- en
multilinguality:
- multilingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- question-answering
task_ids:
- multiple-choice-qa
- language-modeling
pretty_name: TurtleBench
license: apache-2.0
tags:
- turtlebench
- evaluation
arxiv: 2410.05262
configs:
- config_name: default
  data_files:
  - split: chinese
    path: chinese/*.jsonl
  - split: english
    path: english/*.jsonl
---
## Overview
TurtleBench is a novel evaluation benchmark designed to assess the reasoning capabilities of large language models (LLMs) using yes/no puzzles (commonly known as "Turtle Soup puzzles"). This dataset is constructed based on user guesses collected from our online Turtle Soup Puzzle platform, providing a dynamic and interactive means of evaluation. Unlike traditional static evaluation benchmarks, TurtleBench focuses on testing models in interactive settings to better capture their logical reasoning performance. The dataset contains real user guesses and annotated responses, enabling a fair and challenging evaluation for modern LLMs.
## Dataset Contents

The dataset is organized into two main folders, `english` and `chinese`, corresponding to the bilingual nature of the TurtleBench benchmark. Each language folder contains:

- Final Dataset:
  - `zh_data-00000-of-00001.jsonl` (in the `chinese` folder): the complete, finalized Chinese dataset in JSONL format.
  - `en_data-00000-of-00001.jsonl` (in the `english` folder): the complete, finalized English dataset in JSONL format.
- Staging Data (in the `staging` subfolder of each language):
  - `cases.list`: a list of the Turtle Soup cases used in the dataset.
  - `stories.json`: a JSON file containing the surface stories and their corresponding "bottom" stories, which provide the hidden context required to answer the puzzles.
  - `titles.txt` (in the `chinese/staging` folder only): a list of titles for the stories.
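For quick inspection without extra tooling, the final JSONL files can be read with the Python standard library. This is a minimal sketch that assumes the files have been downloaded locally under the folder layout above; since this card does not fix the per-entry field names, the snippet inspects the keys rather than assuming them.

```python
import json
from pathlib import Path

# Paths assume the dataset has been downloaded locally with the folder layout
# described above; adjust them to wherever the files live on disk.
DATA_FILES = {
    "chinese": Path("chinese/zh_data-00000-of-00001.jsonl"),
    "english": Path("english/en_data-00000-of-00001.jsonl"),
}

for lang, path in DATA_FILES.items():
    with path.open(encoding="utf-8") as f:
        records = [json.loads(line) for line in f if line.strip()]
    print(f"{lang}: {len(records)} entries")
    # The field names are not specified on this card, so inspect them
    # instead of assuming a schema.
    print("example keys:", sorted(records[0].keys()))
```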
## Data Collection
The dataset contains 1,532 entries derived from over 26,000 user guesses made during the Turtle Soup Puzzle game. Users were tasked with making logical guesses based solely on the surface stories provided, while the correct answers were derived from the bottom stories. All user guesses were annotated as either "Correct" or "Incorrect" based on the reasoning context provided by the bottom story.
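As a rough illustration of how the "Correct"/"Incorrect" annotations can drive an evaluation, the sketch below scores a model that judges each user guess against its surface story and reports accuracy. The field names (`surface_story`, `guess`, `label`) and the `model_judge` callable are hypothetical placeholders, not the official schema or pipeline; the official evaluation code is in the GitHub repository linked under Usage.

```python
import json
from typing import Callable

def evaluate(jsonl_path: str, model_judge: Callable[[str, str], str]) -> float:
    """Compare a model's Correct/Incorrect judgments with the annotated labels.

    `model_judge(surface_story, guess)` is a hypothetical callable that wraps an
    LLM and returns either "Correct" or "Incorrect". The field names used below
    are assumptions for illustration; check the actual JSONL keys before running.
    """
    total = correct = 0
    with open(jsonl_path, encoding="utf-8") as f:
        for line in f:
            entry = json.loads(line)
            prediction = model_judge(entry["surface_story"], entry["guess"])
            correct += prediction == entry["label"]
            total += 1
    return correct / total if total else 0.0
```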
## Key Features
- Dynamic Evaluation: TurtleBench allows for real-world evaluation by continuously collecting user interactions, which makes it more difficult for models to cheat by memorizing static questions and answers.
- Bilingual: The dataset includes data in both Chinese and English, ensuring a diverse assessment of LLM reasoning capabilities. The English dataset is derived from translations of the original Chinese dataset.
- Interactive Reasoning: The dataset is specifically curated to require logical inference, avoiding heavy reliance on background knowledge, and focusing instead on the model's ability to understand and derive conclusions from the given context.
## Usage
We have released the evaluation code for TurtleBench on GitHub. To use this dataset for evaluation, please refer to the code at https://github.com/mazzzystar/TurtleBench.
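Alternatively, because the YAML header above declares `chinese` and `english` splits over the JSONL files, the data can also be loaded with the Hugging Face `datasets` library. A minimal sketch, assuming a placeholder repository id that should be replaced with this dataset's actual id on the Hub:

```python
from datasets import load_dataset

# Replace the placeholder with the actual repository id of this dataset on the Hub.
dataset = load_dataset("<hub-namespace>/TurtleBench")

print(dataset)                # DatasetDict with "chinese" and "english" splits
print(dataset["english"][0])  # first English entry
```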
Links
Demo: https://tanghenre.com
Code: https://github.com/mazzzystar/TurtleBench
arXiv: https://arxiv.org/abs/2410.05262
## Citation

```bibtex
@article{TurtleBench,
  title={TurtleBench: Evaluating Top Language Models via Real-World Yes/No Puzzles},
  author={Qingchen Yu and Shichao Song and Ke Fang and Yunfeng Shi and Zifan Zheng and Hanyu Wang and Simin Niu and Zhiyu Li},
  journal={arXiv preprint arXiv:2410.05262},
  year={2024}
}
```