docs: update README.md
README.md CHANGED
@@ -1,3 +1,8 @@
+---
+pretty_name: TurtleBench
+size_categories:
+- 1K<n<10K
+---
 ## Overview
 TurtleBench is a novel evaluation benchmark designed to assess the reasoning capabilities of large language models (LLMs) using yes/no puzzles (commonly known as "Turtle Soup puzzles"). This dataset is constructed based on user guesses collected from our online Turtle Soup Puzzle platform, providing a dynamic and interactive means of evaluation. Unlike traditional static evaluation benchmarks, TurtleBench focuses on testing models in interactive settings to better capture their logical reasoning performance. The dataset contains real user guesses and annotated responses, enabling a fair and challenging evaluation for modern LLMs.
 ## Dataset Contents
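With the dataset card metadata above in place, the data can be pulled with the `datasets` library. The snippet below is a minimal sketch only: the repository id, split name, and record fields are placeholders, since the README excerpt shown in this diff does not specify them.

```python
# Hypothetical usage sketch for TurtleBench.
# "your-org/TurtleBench" is a placeholder repository id; replace it with the
# actual Hub path of this dataset.
from datasets import load_dataset

dataset = load_dataset("your-org/TurtleBench")

# Print one record to see its fields (field names are not documented in this
# excerpt, so inspect the output rather than assuming a schema).
first_split = next(iter(dataset))
print(dataset[first_split][0])
```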