---
language:
- sr
license: apache-2.0
task_categories:
- question-answering
dataset_info:
  features:
  - name: index
    dtype: int64
  - name: questions
    dtype: string
  - name: options
    sequence: string
  - name: answer
    dtype: string
  - name: answer_index
    dtype: int64
  splits:
  - name: test
    num_bytes: 200846
    num_examples: 1003
  download_size: 139630
  dataset_size: 200846
configs:
- config_name: default
  data_files:
  - split: test
    path: data/oz-eval-*
---

# OZ Eval
![image/png](/static-proxy?url=https%3A%2F%2Fcdn-uploads.huggingface.co%2Fproduction%2Fuploads%2F62d01cf580c9d4ceb20254bb%2FFt9w9DNRN_1bzuQC9XNSG.png%3C%2Fspan%3E)

## Dataset Description
The OZ Eval (_sr._ Opšte Znanje Evaluacija) dataset was created to evaluate the general knowledge of LLMs in the Serbian language.
The data consists of 1,000+ high-quality questions and answers that were used as part of entrance exams at the Faculty of Philosophy and the Faculty of Organizational Sciences, University of Belgrade.
These exams test students' general knowledge and were used in enrollment periods from 2003 to 2024.

This is a joint work with [@Stopwolf](https://huggingface.co/Stopwolf)!


## Evaluation process
Models are evaluated with Hugging Face's `lighteval` library, following the principle below. We supply the model with the following template:
```
Pitanje: {question}

Ponuđeni odgovori:
A. {option_a}
B. {option_b}
C. {option_c}
D. {option_d}
E. {option_e}

Krajnji odgovor:
```
We then compare the likelihoods of each letter (`A`, `B`, `C`, `D`, `E`) and calculate the final accuracy. All evaluations are run in a 0-shot manner using a **chat template**.

GPT-like models were evaluated by taking the top 20 probabilities of the first output token, which were then filtered to the letters `A` through `E`. The letter with the highest probability was taken as the final answer.

Exact code for the task can be found [here](https://github.com/Stopwolf/lighteval/blob/oz-eval/community_tasks/oz_evals.py).

You should run the evaluation with the following command (do not forget to add `--use_chat_template`):
```
accelerate launch lighteval/run_evals_accelerate.py \
    --model_args "pretrained={MODEL_NAME},trust_remote_code=True" \
    --use_chat_template \
    --tasks "community|serbian_evals:oz_task|0|0" \
    --custom_tasks "/content/lighteval/community_tasks/oz_evals.py" \
    --output_dir "./evals" \
    --override_batch_size 32
```

## Evaluation results
| Model |Size|Accuracy|  |Stderr|
|-------|---:|-------:|--|-----:|
|GPT-4-0125-preview|_???_|0.9199|±|0.002|
|GPT-4o-2024-05-13|_???_|0.9196|±|0.0017|
|[Qwen2-72B-Instruct \[4bit\]](https://huggingface.co/unsloth/Qwen2-72B-Instruct-bnb-4bit)|72B|0.8425|±|0.0115|
|GPT-3.5-turbo-0125|_???_|0.8245|±|0.0016|
|[Llama3.1-70B-Instruct \[4bit\]](https://huggingface.co/unsloth/Meta-Llama-3.1-70B-bnb-4bit)|70B|0.8185|±|0.0122|
|GPT-4o-mini-2024-07-18|_???_|0.7971|±|0.0005|
|[Mustra-7B-Instruct-v0.2](https://huggingface.co/Stopwolf/Mustra-7B-Instruct-v0.2)|7B|0.7388|±|0.0098|
|[Tito-7B-slerp](https://huggingface.co/Stopwolf/Tito-7B-slerp)|7B|0.7099|±|0.0101|
|[Yugo55A-GPT](https://huggingface.co/datatab/Yugo55A-GPT)|7B|0.6889|±|0.0103|
|[Zamfir-7B-slerp](https://huggingface.co/Stopwolf/Zamfir-7B-slerp)|7B|0.6849|±|0.0104|
|[Mistral-Nemo-Instruct-2407](https://huggingface.co/mistralai/Mistral-Nemo-Instruct-2407)|12.2B|0.6839|±|0.0104|
|[Llama-3.1-SauerkrautLM-8b-Instruct](https://huggingface.co/VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct)|8B|0.679|±|0.0147|
|[Qwen2-7B-instruct](https://huggingface.co/Qwen/Qwen2-7B-Instruct)|7B|0.673|±|0.0105|
|[SauerkrautLM-Nemo-12b-Instruct](https://huggingface.co/VAGOsolutions/SauerkrautLM-Nemo-12b-Instruct)|12B|0.663|±|0.0106|
|[Llama-3-SauerkrautLM-8b-Instruct](https://huggingface.co/VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct)|8B|0.661|±|0.0106|
|[Yugo60-GPT](https://huggingface.co/datatab/Yugo60-GPT)|7B|0.6411|±|0.0107|
|[Mistral-Small-Instruct-2409 \[4bit\]](https://huggingface.co/unsloth/Mistral-Small-Instruct-2409)|22.2B|0.6361|±|0.0152|
|[DeepSeek-V2-Lite-Chat](https://huggingface.co/deepseek-ai/DeepSeek-V2-Lite-Chat)|15.7B|0.6047|±|0.0109|
|[Llama3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct)|8B|0.5972|±|0.0155|
|[Llama3-70B-Instruct \[4bit\]](https://huggingface.co/unsloth/llama-3-70b-Instruct-bnb-4bit)|70B|0.5942|±|0.011|
|[Hermes-3-Theta-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B)|8B|0.5932|±|0.0155|
|[Hermes-2-Theta-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B)|8B|0.5852|±|0.011|
|[Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3)|7B|0.5753|±| 0.011|
|[openchat-3.6-8b-20240522](https://huggingface.co/openchat/openchat-3.6-8b-20240522)|8B|0.5513|±|0.0111|
|[Qwen1.5-7B-Chat](https://huggingface.co/Qwen/Qwen1.5-7B-Chat)|7B|0.5374|±|0.0158|
|[Llama3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)|8B|0.5274|±|0.0111|
|[Starling-7B-beta](https://huggingface.co/Nexusflow/Starling-LM-7B-beta)|7B|0.5244|±|0.0112|
|[Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B)|7B|0.5145|±|0.0112|
|[Qwen2-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2-1.5B-Instruct)|1.5B|0.4506|±|0.0111|
|[falcon-11B](https://huggingface.co/tiiuae/falcon-11B)|11B|0.4477|±|0.0111|
|[falcon-7b-instruct](https://huggingface.co/tiiuae/falcon-7b-instruct)|7B|0.4257|±| 0.011|
|[Perucac-7B-slerp](https://huggingface.co/Stopwolf/Perucac-7B-slerp)|7B|0.4247|±|0.011|
|[Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct)|1.5B|0.3759|±|0.0153|
|[Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct)|3.8B|0.3719|±|0.0108|
|[SambaLingo-Serbian-Chat](https://huggingface.co/sambanovasystems/SambaLingo-Serbian-Chat)|7B|0.2802|±|0.01|
|[Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct)|7B|0.2423|±|0.0135|
|[Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct)|0.5B|0.2313|±|0.0133|
|[Gemma-2-9B-it](https://huggingface.co/google/gemma-2-9b-it)|9B|0.2193|±|0.0092|
|[Gemma-2-2B-it](https://huggingface.co/google/gemma-2-2b-it)|2.6B|0.1715|±|0.0084|


### Citation
```
@misc{oz-eval,
  author       = "Stanivuk, Siniša and Đorđević, Milena",
  title        = "OZ Eval: Measuring General Knowledge Skill at University Level of LLMs in Serbian Language",
  year         = "2024",
  howpublished = {\url{https://huggingface.co/datasets/DjMel/oz-eval}},
}
```