|
--- |
|
license: cc-by-4.0 |
|
dataset_info: null |
|
configs: |
|
- config_name: CulturalBench-Hard |
|
default: true |
|
data_files: |
|
- split: test |
|
path: CulturalBench-Hard.csv |
|
- config_name: CulturalBench-Easy |
|
data_files: |
|
- split: test |
|
path: CulturalBench-Easy.csv |
|
size_categories: |
|
- 1K<n<10K |
|
pretty_name: CulturalBench |
|
--- |
|
# CulturalBench - A Robust, Diverse and Challenging Benchmark on Measuring the (Lack of) Cultural Knowledge of LLMs
|
## **Resources:** [Paper](https://arxiv.org/pdf/2410.02677) | [Leaderboard](https://huggingface.co/spaces/kellycyy/CulturalBench)
|
|
|
## Description of CulturalBench
|
- CulturalBench is a set of 1,227 human-written and human-verified questions for effectively assessing LLMs' cultural knowledge, covering 45 global regions, including underrepresented ones such as Bangladesh, Zimbabwe, and Peru.
|
- We evaluate models on two setups, CulturalBench-Easy and CulturalBench-Hard, which share the same questions but ask them differently.
|
1. CulturalBench-Easy: multiple-choice questions (output: one of four options, i.e. A, B, C, D). Model accuracy is evaluated at the question level (i.e. per `question_idx`). There are 1,227 questions in total.
|
2. CulturalBench-Hard: binary judgements (output: one of two possibilities, i.e. True/False). Model accuracy is evaluated at the question level (i.e. per `question_idx`). Each of the 1,227 questions yields four binary judgements, for 1,227 × 4 = 4,908 judgements in total; see the scoring sketch after this list.
|
|
|
- See the CulturalBench paper for details: [https://arxiv.org/pdf/2410.02677](https://arxiv.org/pdf/2410.02677).
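Since Hard accuracy is reported per `question_idx` rather than per judgement, scoring means grouping each question's four binary judgements. Below is a minimal scoring sketch: it assumes a question counts as correct only when all four of its judgements are correct, and it assumes a gold-label column named `answer`, so check the actual schema and the paper's protocol before relying on it.

```python
from collections import defaultdict

def hard_question_accuracy(examples, predictions):
    """Question-level accuracy for CulturalBench-Hard.

    `examples` is an iterable of dataset rows and `predictions` a parallel
    list of model outputs (True/False). Only `question_idx` is documented
    above; the `answer` column name is an assumption about the schema.
    """
    per_question = defaultdict(list)
    for row, pred in zip(examples, predictions):
        per_question[row["question_idx"]].append(pred == row["answer"])
    # Count a question as correct only when every one of its four binary
    # judgements is correct (our reading of the question-level protocol).
    n_correct = sum(all(flags) for flags in per_question.values())
    return n_correct / len(per_question)
```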
|
|
|
|
|
### Country distribution
|
| Continent | Number of questions | Included countries/regions |
|
|-----------------------|------------------|----------------------------------------------------------------| |
|
| North America | 27 | Canada; United States | |
|
| South America | 150 | Argentina; Brazil; Chile; Mexico; Peru | |
|
| East Europe | 115 | Czech Republic; Poland; Romania; Ukraine; Russia | |
|
| South Europe | 76 | Spain; Italy | |
|
| West Europe | 96 | France; Germany; Netherlands; United Kingdom | |
|
| Africa | 134 | Egypt; Morocco; Nigeria; South Africa; Zimbabwe | |
|
| Middle East/West Asia | 127 | Iran; Israel; Lebanon; Saudi Arabia; Turkey | |
|
| South Asia | 106 | Bangladesh; India; Nepal; Pakistan | |
|
| Southeast Asia | 159 | Indonesia; Malaysia; Philippines; Singapore; Thailand; Vietnam | |
|
| East Asia | 211 | China; Hong Kong; Japan; South Korea; Taiwan | |
|
| Oceania | 26 | Australia; New Zealand | |
|
|
|
|
|
## Leaderboard of CulturalBench
|
- We evaluated 30 frontier LLMs (update: 2024-10-04 13:20:58) and host the leaderboard at [https://huggingface.co/spaces/kellycyy/CulturalBench](https://huggingface.co/spaces/kellycyy/CulturalBench).
|
- We find that LLMs are sensitive to the difference between the two setups; GPT-4o, for example, shows a 27.3% accuracy gap between them.
|
- Compared to human performance (92.6% accuracy), CulturalBench-Hard proves challenging for frontier LLMs: the best-performing model (GPT-4o) reaches only 61.5% accuracy and the worst (Llama3-8b) only 21.4%.
|
|
|
|
|
## Example of CulturalBench
|
- Example questions in the two setups:
|
![image/png](/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F65fcaae6e5dc5b0ec1b726cf%2F4LU3Ofl9lzeJGVME3yBMp.png%3C%2Fspan%3E)%3C!-- HTML_TAG_END --> |
|
|
|
|
|
## How to load the dataset
|
|
|
```python
from datasets import load_dataset

# Each config ships a single "test" split.
ds_hard = load_dataset("kellycyy/CulturalBench", "CulturalBench-Hard")
ds_easy = load_dataset("kellycyy/CulturalBench", "CulturalBench-Easy")
```
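Before writing evaluation code, it is worth inspecting the schema directly, since only `question_idx` is documented above. A minimal check (the 4,908-row count assumes one CSV row per binary judgement):

```python
test_hard = ds_hard["test"]

# Inspect the actual column names rather than assuming them.
print(test_hard.column_names)

# One row per binary judgement would give 1,227 x 4 = 4,908 rows.
print(len(test_hard))
print(test_hard[0])
```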