---
license: cc-by-4.0
dataset_info: null
configs:
- config_name: CulturalBench-Hard
  default: true
  data_files:
  - split: test
    path: CulturalBench-Hard.csv
- config_name: CulturalBench-Easy
  data_files:
  - split: test
    path: CulturalBench-Easy.csv
size_categories:
- 1K<n<10K
pretty_name: CulturalBench
---
# CulturalBench - a Robust, Diverse and Challenging Benchmark on Measuring the (Lack of) Cultural Knowledge of LLMs
## **πŸ“Œ Resources:** [Paper](https://arxiv.org/pdf/2410.02677) | [Leaderboard](https://huggingface.co/spaces/kellycyy/CulturalBench)

## πŸ“˜ Description of CulturalBench
- CulturalBench is a set of 1,227 human-written and human-verified questions for effectively assessing LLMs' cultural knowledge, covering 45 global regions, including underrepresented ones such as Bangladesh, Zimbabwe, and Peru.
- We evaluate models on two setups, CulturalBench-Easy and CulturalBench-Hard, which share the same questions but present them differently (see the scoring sketch below the list).
  1. CulturalBench-Easy: multiple-choice questions (output: one of four options, i.e. A, B, C, D). Model accuracy is evaluated at the question level (i.e. per `question_idx`). There are 1,227 questions in total.
  2. CulturalBench-Hard: binary questions (output: one of two possibilities, i.e. True/False). Model accuracy is evaluated at the question level (i.e. per `question_idx`). Each question is turned into four binary judgements, so there are 1,227 Γ— 4 = 4,908 binary judgements over the 1,227 questions.

- See details on CulturalBench paper at [https://arxiv.org/pdf/2410.02677](https://arxiv.org/pdf/2410.02677).
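
For the Hard setup, accuracy is reported at the question level, so the four binary judgements that share a `question_idx` need to be aggregated. Below is a minimal sketch of one plausible scoring scheme; the `prediction` and `answer` column names are hypothetical placeholders for your model outputs and the gold labels, and it assumes a question counts as correct only when all four of its judgements are answered correctly (see the paper for the exact protocol).

```python
import pandas as pd

def question_level_accuracy(df: pd.DataFrame) -> float:
    """Sketch of question-level accuracy for CulturalBench-Hard.

    Assumes one row per binary judgement with columns:
    - question_idx: groups the four judgements of one question
    - answer:       gold True/False label
    - prediction:   model's True/False output (hypothetical column)
    """
    # Per-row correctness of each binary judgement.
    row_correct = df["prediction"] == df["answer"]
    # A question is correct only if all four of its judgements are correct.
    per_question = row_correct.groupby(df["question_idx"]).all()
    return float(per_question.mean())
```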


### 🌎 Country distribution
  | Continent             | Num of questions | Included Country/Region                                        |
  |-----------------------|------------------|----------------------------------------------------------------|
  | North America         | 27               | Canada; United States                                          |
  | South America         | 150              | Argentina; Brazil; Chile; Mexico; Peru                         |
  | East Europe           | 115              | Czech Republic; Poland; Romania; Ukraine; Russia               |
  | South Europe          | 76               | Spain; Italy                                                   |
  | West Europe           | 96               | France; Germany; Netherlands; United Kingdom                   |
  | Africa                | 134              | Egypt; Morocco; Nigeria; South Africa; Zimbabwe                |
  | Middle East/West Asia | 127              | Iran; Israel; Lebanon; Saudi Arabia; Turkey                    |
  | South Asia            | 106              | Bangladesh; India; Nepal; Pakistan                             |
  | Southeast Asia        | 159              | Indonesia; Malaysia; Philippines; Singapore; Thailand; Vietnam |
  | East Asia             | 211              | China; Hong Kong; Japan; South Korea; Taiwan                   |
  | Oceania               | 26               | Australia; New Zealand                                         |


## πŸ₯‡ Leaderboard of CulturalBench
- We evaluated 30 frontier LLMs (updated: 2024-10-04) and host the leaderboard at [https://huggingface.co/spaces/kellycyy/CulturalBench](https://huggingface.co/spaces/kellycyy/CulturalBench).
- We find that LLMs are sensitive to the difference between the two setups (e.g., a 27.3% accuracy gap for GPT-4o).
- Compared to human performance (92.6% accuracy), CulturalBench-Hard is challenging for frontier LLMs, with the best-performing model (GPT-4o) at only 61.5% and the worst (Llama3-8b) at 21.4%.


## πŸ“– Example of CulturalBench
- Examples of questions in two setups:
  ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65fcaae6e5dc5b0ec1b726cf/4LU3Ofl9lzeJGVME3yBMp.png)

  
## πŸ’» How to load the datasets

```python
# Load each configuration of the dataset from the Hugging Face Hub.
from datasets import load_dataset

ds_hard = load_dataset("kellycyy/CulturalBench", "CulturalBench-Hard")
ds_easy = load_dataset("kellycyy/CulturalBench", "CulturalBench-Easy")
```
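
Each configuration exposes a single `test` split. As a quick sanity check after loading, you can inspect the column names, the first example, and the split sizes:

```python
# Inspect the loaded splits (column names, one example row, and sizes).
print(ds_hard["test"].column_names)
print(ds_hard["test"][0])
print(f"Hard: {len(ds_hard['test'])} rows, Easy: {len(ds_easy['test'])} rows")
```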