---
license: apache-2.0
language:
- sr
task_categories:
- question-answering
---
|
|
|
# OZ Eval |
|
![image/png](https://cdn-uploads.huggingface.co/production/uploads/62d01cf580c9d4ceb20254bb/Ft9w9DNRN_1bzuQC9XNSG.png)
|
|
|
## Dataset Description |
|
The OZ Eval (_sr._ Opšte Znanje Evaluacija, "General Knowledge Evaluation") dataset was created to evaluate the general knowledge of LLMs in the Serbian language.

The data consists of over 1,000 high-quality questions and answers that were used as part of entry exams at the Faculty of Philosophy and the Faculty of Organizational Sciences, University of Belgrade.

The exams test the general knowledge of prospective students and were used in enrollment periods from 2003 to 2024.
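
The dataset can be loaded with the `datasets` library. The split and column names below are illustrative assumptions, not guaranteed by this card; check the dataset viewer for the exact schema:

```python
# Minimal sketch: the split name and column names are assumptions --
# see the dataset viewer for the exact schema.
from datasets import load_dataset

ds = load_dataset("DjMel/oz-eval", split="train")
print(ds[0])  # e.g. {"question": ..., "option_a": ..., ..., "answer": ...}
```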
|
|
|
|
|
## Evaluation process |
|
Models are evaluated according to the following procedure, using Hugging Face's `lighteval` library. We supply the model with the following prompt template:
|
```
Pitanje: {question}

Ponuđeni odgovori:
A. {option_a}
B. {option_b}
C. {option_c}
D. {option_d}
E. {option_e}

Krajnji odgovor:
```
|
We then compare the likelihoods of each letter (`A`, `B`, `C`, `D`, `E`) and calculate the final accuracy. All evaluations are run in a 0-shot manner using a **chat template**.
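
For illustration, here is a minimal sketch of this likelihood comparison using plain `transformers` rather than the exact `lighteval` task; the model ID is a placeholder, and comparing single-token letter likelihoods is a simplification:

```python
# Sketch only: compares next-token likelihoods of the answer letters.
# The model ID is a placeholder; the actual lighteval task may differ.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # placeholder model
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

def pick_answer(prompt: str) -> str:
    # 0-shot, with the model's chat template applied, as described above.
    input_ids = tok.apply_chat_template(
        [{"role": "user", "content": prompt}],
        add_generation_prompt=True,
        return_tensors="pt",
    )
    with torch.no_grad():
        logits = model(input_ids).logits[0, -1]  # next-token logits
    # Compare the likelihoods of the five answer letters and take the argmax.
    letter_ids = [tok.encode(l, add_special_tokens=False)[0] for l in "ABCDE"]
    return "ABCDE"[int(torch.argmax(logits[letter_ids]))]
```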
|
|
|
GPT-like models were evaluated by taking the top 20 probabilities of the first output token and filtering them for the letters `A` to `E`. The letter with the highest probability was taken as the final answer.
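
A minimal sketch of that procedure with the OpenAI Python client (the exact evaluation script is not published here, so treat this as an approximation):

```python
# Sketch only: reads the top-20 log-probabilities of the first output
# token and keeps only the answer letters A-E.
from openai import OpenAI

client = OpenAI()

def gpt_answer(prompt: str) -> str | None:
    resp = client.chat.completions.create(
        model="gpt-4o-2024-05-13",
        messages=[{"role": "user", "content": prompt}],
        max_tokens=1,
        logprobs=True,
        top_logprobs=20,  # API maximum is 20 candidates per position
    )
    top = resp.choices[0].logprobs.content[0].top_logprobs
    letters = [t for t in top if t.token.strip() in {"A", "B", "C", "D", "E"}]
    # The most probable remaining letter is taken as the final answer.
    return max(letters, key=lambda t: t.logprob).token.strip() if letters else None
```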
|
|
|
The exact code for the task will be posted [here]().
|
|
|
|
|
## Evaluation results |
|
| Model | Accuracy | | Stderr |
|-------|-------:|--|-----:|
|GPT-4-0125-preview|0.9199|±|0.002|
|GPT-4o-2024-05-13|0.9196|±|0.0017|
|GPT-3.5-turbo-0125|0.8245|±|0.0016|
|[Tito-7B-slerp](https://huggingface.co/Stopwolf/Tito-7B-slerp)|0.7099|±|0.0101|
|[Qwen2-7B-instruct](https://huggingface.co/Qwen/Qwen2-7B-Instruct)|0.673|±|0.0105|
|[Yugo60-GPT](https://huggingface.co/datatab/Yugo60-GPT)|0.6411|±|0.0107|
|[Hermes-2-Theta-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B)|0.5852|±|0.011|
|[Llama3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct)|0.5274|±|0.0111|
|[Hermes-2-Pro-Mistral-7B](https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B)|0.5145|±|0.0112|
|[Perucac-7B-slerp](https://huggingface.co/Stopwolf/Perucac-7B-slerp)|0.4247|±|0.011|
|[SambaLingo-Serbian-Chat](https://huggingface.co/sambanovasystems/SambaLingo-Serbian-Chat)|0.2802|±|0.01|
|[Gemma-2-9B-it](https://huggingface.co/google/gemma-2-9b-it)|0.2193|±|0.0092|
|
|
|
|
|
### Citation |
|
```
@misc{oz-eval,
  author = "Stanivuk, Siniša and Đorđević, Milena",
  title = "Opšte znanje LLM Eval",
  year = "2024",
  howpublished = {\url{https://huggingface.co/datasets/DjMel/oz-eval}},
}
```