---
license: apache-2.0
language:
  - sr
task_categories:
  - question-answering
---

# OZ Eval

The OZ Eval (Serbian: *Opšte Znanje Evaluacija*, "General Knowledge Evaluation") dataset was created to evaluate the general knowledge of LLMs in the Serbian language. It consists of 1,000+ high-quality questions and answers drawn from entry exams at the Faculty of Philosophy and the Faculty of Organizational Sciences, University of Belgrade. These exams test the general knowledge of prospective students and were used in enrollment periods from 2003 to 2024.

## Evaluation process

Models are evaluated as follows, using Hugging Face's lighteval library. We supply the model with this template:

```
Pitanje: {question}

Ponuđeni odgovori:
A. {option_a}
B. {option_b}
C. {option_c}
D. {option_d}
E. {option_e}

Krajnji odgovor:
```
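As a minimal sketch of how the template above can be filled for one dataset row, the snippet below uses Python string formatting; the field names (`question`, `option_a`, …) mirror the template placeholders, and the example question is illustrative, not taken from the dataset.

```python
# Hypothetical sketch: fill the evaluation prompt template for one question.
# Placeholder names follow the template; the example row is made up.
TEMPLATE = (
    "Pitanje: {question}\n\n"
    "Ponuđeni odgovori:\n"
    "A. {option_a}\n"
    "B. {option_b}\n"
    "C. {option_c}\n"
    "D. {option_d}\n"
    "E. {option_e}\n\n"
    "Krajnji odgovor:"
)

example = {
    "question": "Koji je glavni grad Srbije?",
    "option_a": "Zagreb",
    "option_b": "Beograd",
    "option_c": "Sarajevo",
    "option_d": "Podgorica",
    "option_e": "Skoplje",
}

prompt = TEMPLATE.format(**example)
print(prompt)
```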

We then compare the likelihoods the model assigns to each answer letter (A, B, C, D, E) and calculate the final accuracy. All evaluations are run in a 0-shot manner using a chat template.
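The scoring step can be sketched as follows, assuming per-letter log-likelihoods have already been obtained from the model (the actual evaluation uses lighteval; the function names here are illustrative):

```python
# Hypothetical sketch of the scoring step: given the model's log-likelihood
# for each answer letter, pick the most likely one, then compute accuracy
# against the gold answers. The numeric scores below are made up.

def pick_answer(letter_loglikelihoods: dict) -> str:
    """Return the answer letter with the highest log-likelihood."""
    return max(letter_loglikelihoods, key=letter_loglikelihoods.get)

def accuracy(predictions: list, gold: list) -> float:
    """Fraction of predictions that match the gold answers."""
    correct = sum(p == g for p, g in zip(predictions, gold))
    return correct / len(gold)

# Toy example with made-up log-likelihoods for two questions:
scores = [
    {"A": -2.1, "B": -0.4, "C": -3.0, "D": -2.8, "E": -4.1},  # model prefers B
    {"A": -0.9, "B": -1.7, "C": -2.2, "D": -3.5, "E": -2.9},  # model prefers A
]
preds = [pick_answer(s) for s in scores]
print(preds)                         # ['B', 'A']
print(accuracy(preds, ["B", "C"]))  # 0.5
```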

The exact code for the task will be posted here.

## Evaluation results

| Model                   | Accuracy | Stderr |
|-------------------------|----------|--------|
| Tito-7B-slerp           | 0.7099   | 0.0101 |
| Qwen2-7B-instruct       | 0.673    | 0.0105 |
| Yugo60-GPT              | 0.6411   | 0.0107 |
| Llama3-8B-Instruct      | 0.5274   | 0.0111 |
| Hermes-2-Pro-Mistral-7B | 0.5145   | 0.0112 |
| Perucac-7B-slerp        | 0.4247   | 0.011  |
| SambaLingo-Serbian-Chat | 0.2802   | 0.01   |
| Gemma-2-9B-it           | 0.2193   | 0.0092 |

## Citation

```bibtex
@misc{oz-eval,
  author       = "Stanivuk, Siniša and Đorđević, Milena",
  title        = "Opšte znanje LLM Eval",
  year         = "2024",
  howpublished = {\url{https://huggingface.co/datasets/DjMel/oz-eval}},
}
```