update readme

README.md CHANGED
---
license: cc-by-4.0
dataset_info: null
configs:
- config_name: CulturalBench-Hard
  default: true
  data_files:
  - split: test
    path: CulturalBench-Hard.csv
- config_name: CulturalBench-Easy
  data_files:
  - split: test
    path: CulturalBench-Easy.csv
size_categories:
- 1K<n<10K
pretty_name: CulturalBench
---

# CulturalBench - a Robust, Diverse and Challenging Benchmark on Measuring the (Lack of) Cultural Knowledge of LLMs

- CulturalBench is a set of 1,227 human-written and human-verified questions for effectively assessing LLMs’ cultural knowledge, covering 45 global regions, including underrepresented ones like Bangladesh, Zimbabwe, and Peru.
- We evaluate models on two setups, CulturalBench-Easy and CulturalBench-Hard, which share the same questions but ask them in different formats (a question-level scoring sketch for the Hard setup follows the list):
1. CulturalBench-Easy: multiple-choice questions (output: one of four options, i.e. A, B, C, D). Model accuracy is evaluated at the question level (i.e. per `question_idx`). There are 1,227 questions in total.
2. CulturalBench-Hard: binary questions (output: one of two possibilities, i.e. True/False). Model accuracy is evaluated at the question level (i.e. per `question_idx`). There are 1,227 x 4 = 4,908 binary judgements in total over the 1,227 questions.
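A minimal sketch of the Hard-setup scoring described above, under the assumption that a question counts as correct only when all four of its binary judgements are right. Only `question_idx` is confirmed by this card; `gold` and `pred` are hypothetical field names used for illustration.

```python
from collections import defaultdict

def hard_accuracy(rows):
    # Group the four binary judgements belonging to each question.
    # `gold`/`pred` are assumed field names; `question_idx` comes from the card.
    judgements_by_question = defaultdict(list)
    for row in rows:
        judgements_by_question[row["question_idx"]].append(row["gold"] == row["pred"])
    # A question is scored correct only if all of its judgements are correct.
    correct = sum(all(js) for js in judgements_by_question.values())
    return correct / len(judgements_by_question)
```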
- See details in the CulturalBench paper at [https://arxiv.org/pdf/2410.02677](https://arxiv.org/pdf/2410.02677).
- Examples of questions in the two setups:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65fcaae6e5dc5b0ec1b726cf/4LU3Ofl9lzeJGVME3yBMp.png)

- Country distribution (a sketch for reproducing these counts follows the table):

| Continent | Num of questions | Included Country/Region |
|-----------------------|------------------|----------------------------------------------------------------|
| North America | 27 | Canada; United States |
| South America | 150 | Argentina; Brazil; Chile; Mexico; Peru |
| East Europe | 115 | Czech Republic; Poland; Romania; Ukraine; Russia |
| South Europe | 76 | Spain; Italy |
| West Europe | 96 | France; Germany; Netherlands; United Kingdom |
| Africa | 134 | Egypt; Morocco; Nigeria; South Africa; Zimbabwe |
| Middle East/West Asia | 127 | Iran; Israel; Lebanon; Saudi Arabia; Turkey |
| South Asia | 106 | Bangladesh; India; Nepal; Pakistan |
| Southeast Asia | 159 | Indonesia; Malaysia; Philippines; Singapore; Thailand; Vietnam |
| East Asia | 211 | China; Hong Kong; Japan; South Korea; Taiwan |
| Oceania | 26 | Australia; New Zealand |
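A hedged sketch for reproducing the counts above, assuming the CSVs expose a per-question country column (not confirmed by this card; check `column_names` first):

```python
from collections import Counter
from datasets import load_dataset

# Easy config has one row per question, so counting rows per country
# should match the table above. `country` is an assumed column name.
easy = load_dataset("kellycyy/CulturalBench", "CulturalBench-Easy", split="test")
print(Counter(easy["country"]).most_common())
```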

## Leaderboard of CulturalBench

- We evaluated 30 frontier LLMs (update: 2024-10-04 13:20:58) and host the leaderboard at [https://huggingface.co/spaces/kellycyy/CulturalBench](https://huggingface.co/spaces/kellycyy/CulturalBench).
- We find that LLMs are sensitive to this difference in setup (e.g., GPT-4o shows a 27.3% accuracy gap between the two).
- Compared to human performance (92.6% accuracy), CulturalBench-Hard is more challenging for frontier LLMs: the best-performing model (GPT-4o) reaches only 61.5% accuracy and the worst (Llama3-8b) 21.4%.

## How to load the datasets

```python
from datasets import load_dataset

ds_hard = load_dataset("kellycyy/CulturalBench", "CulturalBench-Hard")
ds_easy = load_dataset("kellycyy/CulturalBench", "CulturalBench-Easy")
```
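Each config exposes a single `test` split, per the `configs` mapping in the frontmatter. A quick way to inspect what you loaded:

```python
print(ds_hard)                       # DatasetDict with a "test" split
print(ds_hard["test"].num_rows)      # number of binary judgements (Hard setup)
print(ds_hard["test"].column_names)  # includes `question_idx`
print(ds_hard["test"][0])            # first row as a dict
```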