# Promoter Sequences and Corresponding Gene Expression data for Maize NAM lines
This dataset contains promoter sequences and the corresponding gene expression data, as TPM values, for **26 Maize NAM lines**. It was used for the fine-tuning step *(the downstream task of gene expression prediction)* of [`Florabert`](https://huggingface.co/Gurveer05/FloraBERT).
The data has been split into train, test and eval sets (70-20-10 split). In all, there are ~700,000 entries across the files. The steps followed to obtain this data are available in this [`GitHub Repository`](https://github.com/gurveervirk/florabert).
The labels correspond to the TPM values for the various tissues, in the order: ['tassel', 'base', 'anther', 'middle', 'ear', 'shoot', 'tip', 'root']. The sequences used are the promoter sequences for genes of the Maize NAM lines that have a TPM value > 1 for at least one tissue.
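
To get a feel for the data, the snippet below is a minimal sketch of loading it with the `datasets` library; the split names (`train`, `test`, `eval`) and the column names (`sequence`, `labels`) are assumptions based on the description above, so inspect the loaded object before relying on them:

```python
from datasets import load_dataset

ds = load_dataset("Gurveer05/maize-nam-gene-expression-data")
print(ds)  # check the actual split and column names first

# Order of the eight tissue labels, as documented in this card.
TISSUES = ["tassel", "base", "anther", "middle", "ear", "shoot", "tip", "root"]

row = ds["train"][0]  # assumes a "train" split exists
# "labels" is a hypothetical column name holding the eight TPM values.
for tissue, tpm in zip(TISSUES, row.get("labels", [])):
    print(f"{tissue}: {tpm} TPM")
```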
"size_categories:100K<n<1M",
"biology",
"DNA",
"Gene Expression",
"region:us"
] | 2024-01-14T17:37:20+00:00 | {"size_categories": ["100K<n<1M"], "tags": ["biology", "DNA", "Gene Expression"]} | 2024-01-14T18:19:05+00:00 | [] | [] | TAGS
#size_categories-100K<n<1M #biology #DNA #Gene Expression #region-us
| # Promoter Sequences and Corresponding Gene Expression data for Maize NAM lines
The data in this dataset has the promoter sequences and the corresponding gene expression data as TPM values for 26 Maize NAM lines and has been used for the finetuning step *(for the downstream task of gene expression prediction)* of 'Florabert'.
The data has been split into train, test and eval data (70-20-10 split). In all, there are ~ 7,00,000 entries across the files. The steps followed to obtain this data are available in this 'Github Repository'.
The labels correspond to the TPM values for the various tissues in the order: [
'tassel',
'base',
'anther',
'middle',
'ear',
'shoot',
'tip',
'root'
]. The sequences that have been used are the promoter sequences for genes of Maize NAM lines that have at least 1 TPM value for a tissue > 1. | [
"# Promoter Sequences and Corresponding Gene Expression data for Maize NAM lines\n\nThe data in this dataset has the promoter sequences and the corresponding gene expression data as TPM values for 26 Maize NAM lines and has been used for the finetuning step *(for the downstream task of gene expression prediction)* of 'Florabert'.\n\nThe data has been split into train, test and eval data (70-20-10 split). In all, there are ~ 7,00,000 entries across the files. The steps followed to obtain this data are available in this 'Github Repository'. \n\nThe labels correspond to the TPM values for the various tissues in the order: [\n 'tassel', \n 'base', \n 'anther', \n 'middle', \n 'ear', \n 'shoot', \n 'tip', \n 'root' \n]. The sequences that have been used are the promoter sequences for genes of Maize NAM lines that have at least 1 TPM value for a tissue > 1."
] | [
"TAGS\n#size_categories-100K<n<1M #biology #DNA #Gene Expression #region-us \n",
"# Promoter Sequences and Corresponding Gene Expression data for Maize NAM lines\n\nThe data in this dataset has the promoter sequences and the corresponding gene expression data as TPM values for 26 Maize NAM lines and has been used for the finetuning step *(for the downstream task of gene expression prediction)* of 'Florabert'.\n\nThe data has been split into train, test and eval data (70-20-10 split). In all, there are ~ 7,00,000 entries across the files. The steps followed to obtain this data are available in this 'Github Repository'. \n\nThe labels correspond to the TPM values for the various tissues in the order: [\n 'tassel', \n 'base', \n 'anther', \n 'middle', \n 'ear', \n 'shoot', \n 'tip', \n 'root' \n]. The sequences that have been used are the promoter sequences for genes of Maize NAM lines that have at least 1 TPM value for a tissue > 1."
] |
2288120a7c13ff4ab40cf6de8e5a2e237c723f3d |
# Dataset Card for Evaluation run of huangyt/Mistral-7B-v0.1-Open-Platypus_2.5w-r16-gate_up_down
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [huangyt/Mistral-7B-v0.1-Open-Platypus_2.5w-r16-gate_up_down](https://huggingface.co/huangyt/Mistral-7B-v0.1-Open-Platypus_2.5w-r16-gate_up_down) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can, for instance, do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_huangyt__Mistral-7B-v0.1-Open-Platypus_2.5w-r16-gate_up_down",
"harness_winogrande_5",
split="train")
```
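
Beyond a single task, the other configurations follow the same naming scheme. As a minimal sketch (the helper `get_dataset_config_names` is part of the `datasets` library; the task and split names below follow the conventions described above):

```python
from datasets import get_dataset_config_names, load_dataset

repo = "open-llm-leaderboard/details_huangyt__Mistral-7B-v0.1-Open-Platypus_2.5w-r16-gate_up_down"

# One configuration per evaluated task (63 in total for this run).
configs = get_dataset_config_names(repo)
print(len(configs), "configurations available")

# "train" always points at the latest run; timestamped splits hold specific runs.
gsm8k_details = load_dataset(repo, "harness_gsm8k_5", split="train")
print(gsm8k_details)
```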
## Latest results
These are the [latest results from run 2024-01-14T17:36:45.221009](https://huggingface.co/datasets/open-llm-leaderboard/details_huangyt__Mistral-7B-v0.1-Open-Platypus_2.5w-r16-gate_up_down/blob/main/results_2024-01-14T17-36-45.221009.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```json
{
"all": {
"acc": 0.6355178040599482,
"acc_stderr": 0.03241610229663876,
"acc_norm": 0.641571442422577,
"acc_norm_stderr": 0.033065020971592085,
"mc1": 0.3047735618115055,
"mc1_stderr": 0.016114124156882452,
"mc2": 0.45435317672164416,
"mc2_stderr": 0.014528686611193308
},
"harness|arc:challenge|25": {
"acc": 0.5665529010238908,
"acc_stderr": 0.014481376224558902,
"acc_norm": 0.6126279863481229,
"acc_norm_stderr": 0.014235872487909872
},
"harness|hellaswag|10": {
"acc": 0.6271659032065325,
"acc_stderr": 0.004825702533920412,
"acc_norm": 0.8319059948217487,
"acc_norm_stderr": 0.0037318549570309373
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.28,
"acc_stderr": 0.04512608598542128,
"acc_norm": 0.28,
"acc_norm_stderr": 0.04512608598542128
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6148148148148148,
"acc_stderr": 0.04203921040156279,
"acc_norm": 0.6148148148148148,
"acc_norm_stderr": 0.04203921040156279
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6842105263157895,
"acc_stderr": 0.0378272898086547,
"acc_norm": 0.6842105263157895,
"acc_norm_stderr": 0.0378272898086547
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.55,
"acc_stderr": 0.05,
"acc_norm": 0.55,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6792452830188679,
"acc_stderr": 0.02872750295788027,
"acc_norm": 0.6792452830188679,
"acc_norm_stderr": 0.02872750295788027
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7013888888888888,
"acc_stderr": 0.03827052357950756,
"acc_norm": 0.7013888888888888,
"acc_norm_stderr": 0.03827052357950756
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.45,
"acc_stderr": 0.05,
"acc_norm": 0.45,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.51,
"acc_stderr": 0.05024183937956912,
"acc_norm": 0.51,
"acc_norm_stderr": 0.05024183937956912
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.41,
"acc_stderr": 0.04943110704237102,
"acc_norm": 0.41,
"acc_norm_stderr": 0.04943110704237102
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.653179190751445,
"acc_stderr": 0.036291466701596636,
"acc_norm": 0.653179190751445,
"acc_norm_stderr": 0.036291466701596636
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.38235294117647056,
"acc_stderr": 0.04835503696107223,
"acc_norm": 0.38235294117647056,
"acc_norm_stderr": 0.04835503696107223
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.8,
"acc_stderr": 0.04020151261036845,
"acc_norm": 0.8,
"acc_norm_stderr": 0.04020151261036845
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.574468085106383,
"acc_stderr": 0.03232146916224468,
"acc_norm": 0.574468085106383,
"acc_norm_stderr": 0.03232146916224468
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.4649122807017544,
"acc_stderr": 0.04692008381368909,
"acc_norm": 0.4649122807017544,
"acc_norm_stderr": 0.04692008381368909
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5379310344827586,
"acc_stderr": 0.04154659671707548,
"acc_norm": 0.5379310344827586,
"acc_norm_stderr": 0.04154659671707548
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.43386243386243384,
"acc_stderr": 0.025525034382474884,
"acc_norm": 0.43386243386243384,
"acc_norm_stderr": 0.025525034382474884
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4603174603174603,
"acc_stderr": 0.04458029125470973,
"acc_norm": 0.4603174603174603,
"acc_norm_stderr": 0.04458029125470973
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.41,
"acc_stderr": 0.049431107042371025,
"acc_norm": 0.41,
"acc_norm_stderr": 0.049431107042371025
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7451612903225806,
"acc_stderr": 0.024790118459332208,
"acc_norm": 0.7451612903225806,
"acc_norm_stderr": 0.024790118459332208
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5221674876847291,
"acc_stderr": 0.035145285621750094,
"acc_norm": 0.5221674876847291,
"acc_norm_stderr": 0.035145285621750094
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.65,
"acc_stderr": 0.04793724854411019,
"acc_norm": 0.65,
"acc_norm_stderr": 0.04793724854411019
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7818181818181819,
"acc_stderr": 0.03225078108306289,
"acc_norm": 0.7818181818181819,
"acc_norm_stderr": 0.03225078108306289
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7676767676767676,
"acc_stderr": 0.03008862949021749,
"acc_norm": 0.7676767676767676,
"acc_norm_stderr": 0.03008862949021749
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8601036269430051,
"acc_stderr": 0.025033870583015184,
"acc_norm": 0.8601036269430051,
"acc_norm_stderr": 0.025033870583015184
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6282051282051282,
"acc_stderr": 0.024503472557110936,
"acc_norm": 0.6282051282051282,
"acc_norm_stderr": 0.024503472557110936
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.36666666666666664,
"acc_stderr": 0.029381620726465076,
"acc_norm": 0.36666666666666664,
"acc_norm_stderr": 0.029381620726465076
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6596638655462185,
"acc_stderr": 0.030778057422931673,
"acc_norm": 0.6596638655462185,
"acc_norm_stderr": 0.030778057422931673
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3509933774834437,
"acc_stderr": 0.03896981964257375,
"acc_norm": 0.3509933774834437,
"acc_norm_stderr": 0.03896981964257375
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8348623853211009,
"acc_stderr": 0.015919557829976044,
"acc_norm": 0.8348623853211009,
"acc_norm_stderr": 0.015919557829976044
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5092592592592593,
"acc_stderr": 0.034093869469927006,
"acc_norm": 0.5092592592592593,
"acc_norm_stderr": 0.034093869469927006
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7843137254901961,
"acc_stderr": 0.02886743144984932,
"acc_norm": 0.7843137254901961,
"acc_norm_stderr": 0.02886743144984932
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7974683544303798,
"acc_stderr": 0.026160568246601453,
"acc_norm": 0.7974683544303798,
"acc_norm_stderr": 0.026160568246601453
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6771300448430493,
"acc_stderr": 0.031381476375754995,
"acc_norm": 0.6771300448430493,
"acc_norm_stderr": 0.031381476375754995
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7709923664122137,
"acc_stderr": 0.036853466317118506,
"acc_norm": 0.7709923664122137,
"acc_norm_stderr": 0.036853466317118506
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8099173553719008,
"acc_stderr": 0.03581796951709282,
"acc_norm": 0.8099173553719008,
"acc_norm_stderr": 0.03581796951709282
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7685185185185185,
"acc_stderr": 0.04077494709252627,
"acc_norm": 0.7685185185185185,
"acc_norm_stderr": 0.04077494709252627
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7668711656441718,
"acc_stderr": 0.0332201579577674,
"acc_norm": 0.7668711656441718,
"acc_norm_stderr": 0.0332201579577674
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.48214285714285715,
"acc_stderr": 0.047427623612430116,
"acc_norm": 0.48214285714285715,
"acc_norm_stderr": 0.047427623612430116
},
"harness|hendrycksTest-management|5": {
"acc": 0.8349514563106796,
"acc_stderr": 0.036756688322331886,
"acc_norm": 0.8349514563106796,
"acc_norm_stderr": 0.036756688322331886
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8547008547008547,
"acc_stderr": 0.023086635086841407,
"acc_norm": 0.8547008547008547,
"acc_norm_stderr": 0.023086635086841407
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8186462324393359,
"acc_stderr": 0.013778693778464085,
"acc_norm": 0.8186462324393359,
"acc_norm_stderr": 0.013778693778464085
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7341040462427746,
"acc_stderr": 0.02378620325550829,
"acc_norm": 0.7341040462427746,
"acc_norm_stderr": 0.02378620325550829
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.38212290502793295,
"acc_stderr": 0.016251139711570762,
"acc_norm": 0.38212290502793295,
"acc_norm_stderr": 0.016251139711570762
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.761437908496732,
"acc_stderr": 0.024404394928087873,
"acc_norm": 0.761437908496732,
"acc_norm_stderr": 0.024404394928087873
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7170418006430869,
"acc_stderr": 0.025583062489984813,
"acc_norm": 0.7170418006430869,
"acc_norm_stderr": 0.025583062489984813
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7438271604938271,
"acc_stderr": 0.024288533637726095,
"acc_norm": 0.7438271604938271,
"acc_norm_stderr": 0.024288533637726095
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.48936170212765956,
"acc_stderr": 0.029820747191422473,
"acc_norm": 0.48936170212765956,
"acc_norm_stderr": 0.029820747191422473
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.46870925684485004,
"acc_stderr": 0.012745204626083143,
"acc_norm": 0.46870925684485004,
"acc_norm_stderr": 0.012745204626083143
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6801470588235294,
"acc_stderr": 0.02833295951403121,
"acc_norm": 0.6801470588235294,
"acc_norm_stderr": 0.02833295951403121
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6633986928104575,
"acc_stderr": 0.019117213911495155,
"acc_norm": 0.6633986928104575,
"acc_norm_stderr": 0.019117213911495155
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6545454545454545,
"acc_stderr": 0.04554619617541054,
"acc_norm": 0.6545454545454545,
"acc_norm_stderr": 0.04554619617541054
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.689795918367347,
"acc_stderr": 0.029613459872484378,
"acc_norm": 0.689795918367347,
"acc_norm_stderr": 0.029613459872484378
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.835820895522388,
"acc_stderr": 0.026193923544454132,
"acc_norm": 0.835820895522388,
"acc_norm_stderr": 0.026193923544454132
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.88,
"acc_stderr": 0.03265986323710905,
"acc_norm": 0.88,
"acc_norm_stderr": 0.03265986323710905
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5602409638554217,
"acc_stderr": 0.03864139923699122,
"acc_norm": 0.5602409638554217,
"acc_norm_stderr": 0.03864139923699122
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8187134502923976,
"acc_stderr": 0.02954774168764004,
"acc_norm": 0.8187134502923976,
"acc_norm_stderr": 0.02954774168764004
},
"harness|truthfulqa:mc|0": {
"mc1": 0.3047735618115055,
"mc1_stderr": 0.016114124156882452,
"mc2": 0.45435317672164416,
"mc2_stderr": 0.014528686611193308
},
"harness|winogrande|5": {
"acc": 0.7734806629834254,
"acc_stderr": 0.011764149054698332
},
"harness|gsm8k|5": {
"acc": 0.3912054586808188,
"acc_stderr": 0.0134425024027943
}
}
```
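
For readers who would rather compute with these numbers than eyeball them, here is a minimal sketch assuming the JSON above has been saved locally as `results.json` (a hypothetical filename):

```python
import json

# Hypothetical local copy of the results JSON shown above.
with open("results.json") as f:
    results = json.load(f)

# Mean accuracy across the 57 MMLU ("hendrycksTest") subtasks.
mmlu_accs = [v["acc"] for k, v in results.items()
             if k.startswith("harness|hendrycksTest")]
print(f"MMLU tasks: {len(mmlu_accs)}, mean acc: {sum(mmlu_accs) / len(mmlu_accs):.4f}")

# A rough 95% interval for one task from its reported standard error.
gsm8k = results["harness|gsm8k|5"]
low = gsm8k["acc"] - 1.96 * gsm8k["acc_stderr"]
high = gsm8k["acc"] + 1.96 * gsm8k["acc_stderr"]
print(f"gsm8k acc: {gsm8k['acc']:.4f} (95% CI ~ [{low:.4f}, {high:.4f}])")
```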
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed]
"region:us"
] | 2024-01-14T17:39:04+00:00 | {"pretty_name": "Evaluation run of huangyt/Mistral-7B-v0.1-Open-Platypus_2.5w-r16-gate_up_down", "dataset_summary": "Dataset automatically created during the evaluation run of model [huangyt/Mistral-7B-v0.1-Open-Platypus_2.5w-r16-gate_up_down](https://huggingface.co/huangyt/Mistral-7B-v0.1-Open-Platypus_2.5w-r16-gate_up_down) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_huangyt__Mistral-7B-v0.1-Open-Platypus_2.5w-r16-gate_up_down\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-14T17:36:45.221009](https://huggingface.co/datasets/open-llm-leaderboard/details_huangyt__Mistral-7B-v0.1-Open-Platypus_2.5w-r16-gate_up_down/blob/main/results_2024-01-14T17-36-45.221009.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6355178040599482,\n \"acc_stderr\": 0.03241610229663876,\n \"acc_norm\": 0.641571442422577,\n \"acc_norm_stderr\": 0.033065020971592085,\n \"mc1\": 0.3047735618115055,\n \"mc1_stderr\": 0.016114124156882452,\n \"mc2\": 0.45435317672164416,\n \"mc2_stderr\": 0.014528686611193308\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.5665529010238908,\n \"acc_stderr\": 0.014481376224558902,\n \"acc_norm\": 0.6126279863481229,\n \"acc_norm_stderr\": 0.014235872487909872\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6271659032065325,\n \"acc_stderr\": 0.004825702533920412,\n \"acc_norm\": 0.8319059948217487,\n \"acc_norm_stderr\": 0.0037318549570309373\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.28,\n \"acc_stderr\": 0.04512608598542128,\n \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.04512608598542128\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6148148148148148,\n \"acc_stderr\": 0.04203921040156279,\n \"acc_norm\": 0.6148148148148148,\n \"acc_norm_stderr\": 0.04203921040156279\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6842105263157895,\n \"acc_stderr\": 0.0378272898086547,\n \"acc_norm\": 0.6842105263157895,\n \"acc_norm_stderr\": 0.0378272898086547\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.55,\n \"acc_stderr\": 0.05,\n \"acc_norm\": 0.55,\n \"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6792452830188679,\n \"acc_stderr\": 0.02872750295788027,\n \"acc_norm\": 0.6792452830188679,\n \"acc_norm_stderr\": 0.02872750295788027\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7013888888888888,\n \"acc_stderr\": 0.03827052357950756,\n \"acc_norm\": 0.7013888888888888,\n 
\"acc_norm_stderr\": 0.03827052357950756\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.45,\n \"acc_stderr\": 0.05,\n \"acc_norm\": 0.45,\n \"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.51,\n \"acc_stderr\": 0.05024183937956912,\n \"acc_norm\": 0.51,\n \"acc_norm_stderr\": 0.05024183937956912\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.41,\n \"acc_stderr\": 0.04943110704237102,\n \"acc_norm\": 0.41,\n \"acc_norm_stderr\": 0.04943110704237102\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.653179190751445,\n \"acc_stderr\": 0.036291466701596636,\n \"acc_norm\": 0.653179190751445,\n \"acc_norm_stderr\": 0.036291466701596636\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.38235294117647056,\n \"acc_stderr\": 0.04835503696107223,\n \"acc_norm\": 0.38235294117647056,\n \"acc_norm_stderr\": 0.04835503696107223\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.8,\n \"acc_stderr\": 0.04020151261036845,\n \"acc_norm\": 0.8,\n \"acc_norm_stderr\": 0.04020151261036845\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.574468085106383,\n \"acc_stderr\": 0.03232146916224468,\n \"acc_norm\": 0.574468085106383,\n \"acc_norm_stderr\": 0.03232146916224468\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4649122807017544,\n \"acc_stderr\": 0.04692008381368909,\n \"acc_norm\": 0.4649122807017544,\n \"acc_norm_stderr\": 0.04692008381368909\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5379310344827586,\n \"acc_stderr\": 0.04154659671707548,\n \"acc_norm\": 0.5379310344827586,\n \"acc_norm_stderr\": 0.04154659671707548\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.43386243386243384,\n \"acc_stderr\": 0.025525034382474884,\n \"acc_norm\": 0.43386243386243384,\n \"acc_norm_stderr\": 0.025525034382474884\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4603174603174603,\n \"acc_stderr\": 0.04458029125470973,\n \"acc_norm\": 0.4603174603174603,\n \"acc_norm_stderr\": 0.04458029125470973\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.41,\n \"acc_stderr\": 0.049431107042371025,\n \"acc_norm\": 0.41,\n \"acc_norm_stderr\": 0.049431107042371025\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7451612903225806,\n \"acc_stderr\": 0.024790118459332208,\n \"acc_norm\": 0.7451612903225806,\n \"acc_norm_stderr\": 0.024790118459332208\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5221674876847291,\n \"acc_stderr\": 0.035145285621750094,\n \"acc_norm\": 0.5221674876847291,\n \"acc_norm_stderr\": 0.035145285621750094\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.65,\n \"acc_stderr\": 0.04793724854411019,\n \"acc_norm\": 0.65,\n \"acc_norm_stderr\": 0.04793724854411019\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7818181818181819,\n \"acc_stderr\": 0.03225078108306289,\n \"acc_norm\": 0.7818181818181819,\n \"acc_norm_stderr\": 0.03225078108306289\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7676767676767676,\n \"acc_stderr\": 0.03008862949021749,\n \"acc_norm\": 0.7676767676767676,\n \"acc_norm_stderr\": 0.03008862949021749\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8601036269430051,\n \"acc_stderr\": 0.025033870583015184,\n \"acc_norm\": 0.8601036269430051,\n 
\"acc_norm_stderr\": 0.025033870583015184\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6282051282051282,\n \"acc_stderr\": 0.024503472557110936,\n \"acc_norm\": 0.6282051282051282,\n \"acc_norm_stderr\": 0.024503472557110936\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.36666666666666664,\n \"acc_stderr\": 0.029381620726465076,\n \"acc_norm\": 0.36666666666666664,\n \"acc_norm_stderr\": 0.029381620726465076\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6596638655462185,\n \"acc_stderr\": 0.030778057422931673,\n \"acc_norm\": 0.6596638655462185,\n \"acc_norm_stderr\": 0.030778057422931673\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3509933774834437,\n \"acc_stderr\": 0.03896981964257375,\n \"acc_norm\": 0.3509933774834437,\n \"acc_norm_stderr\": 0.03896981964257375\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8348623853211009,\n \"acc_stderr\": 0.015919557829976044,\n \"acc_norm\": 0.8348623853211009,\n \"acc_norm_stderr\": 0.015919557829976044\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5092592592592593,\n \"acc_stderr\": 0.034093869469927006,\n \"acc_norm\": 0.5092592592592593,\n \"acc_norm_stderr\": 0.034093869469927006\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7843137254901961,\n \"acc_stderr\": 0.02886743144984932,\n \"acc_norm\": 0.7843137254901961,\n \"acc_norm_stderr\": 0.02886743144984932\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7974683544303798,\n \"acc_stderr\": 0.026160568246601453,\n \"acc_norm\": 0.7974683544303798,\n \"acc_norm_stderr\": 0.026160568246601453\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6771300448430493,\n \"acc_stderr\": 0.031381476375754995,\n \"acc_norm\": 0.6771300448430493,\n \"acc_norm_stderr\": 0.031381476375754995\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7709923664122137,\n \"acc_stderr\": 0.036853466317118506,\n \"acc_norm\": 0.7709923664122137,\n \"acc_norm_stderr\": 0.036853466317118506\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.8099173553719008,\n \"acc_stderr\": 0.03581796951709282,\n \"acc_norm\": 0.8099173553719008,\n \"acc_norm_stderr\": 0.03581796951709282\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7685185185185185,\n \"acc_stderr\": 0.04077494709252627,\n \"acc_norm\": 0.7685185185185185,\n \"acc_norm_stderr\": 0.04077494709252627\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7668711656441718,\n \"acc_stderr\": 0.0332201579577674,\n \"acc_norm\": 0.7668711656441718,\n \"acc_norm_stderr\": 0.0332201579577674\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.48214285714285715,\n \"acc_stderr\": 0.047427623612430116,\n \"acc_norm\": 0.48214285714285715,\n \"acc_norm_stderr\": 0.047427623612430116\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.8349514563106796,\n \"acc_stderr\": 0.036756688322331886,\n \"acc_norm\": 0.8349514563106796,\n \"acc_norm_stderr\": 0.036756688322331886\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8547008547008547,\n \"acc_stderr\": 0.023086635086841407,\n \"acc_norm\": 0.8547008547008547,\n \"acc_norm_stderr\": 0.023086635086841407\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n 
\"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8186462324393359,\n \"acc_stderr\": 0.013778693778464085,\n \"acc_norm\": 0.8186462324393359,\n \"acc_norm_stderr\": 0.013778693778464085\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7341040462427746,\n \"acc_stderr\": 0.02378620325550829,\n \"acc_norm\": 0.7341040462427746,\n \"acc_norm_stderr\": 0.02378620325550829\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.38212290502793295,\n \"acc_stderr\": 0.016251139711570762,\n \"acc_norm\": 0.38212290502793295,\n \"acc_norm_stderr\": 0.016251139711570762\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.761437908496732,\n \"acc_stderr\": 0.024404394928087873,\n \"acc_norm\": 0.761437908496732,\n \"acc_norm_stderr\": 0.024404394928087873\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7170418006430869,\n \"acc_stderr\": 0.025583062489984813,\n \"acc_norm\": 0.7170418006430869,\n \"acc_norm_stderr\": 0.025583062489984813\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7438271604938271,\n \"acc_stderr\": 0.024288533637726095,\n \"acc_norm\": 0.7438271604938271,\n \"acc_norm_stderr\": 0.024288533637726095\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.48936170212765956,\n \"acc_stderr\": 0.029820747191422473,\n \"acc_norm\": 0.48936170212765956,\n \"acc_norm_stderr\": 0.029820747191422473\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.46870925684485004,\n \"acc_stderr\": 0.012745204626083143,\n \"acc_norm\": 0.46870925684485004,\n \"acc_norm_stderr\": 0.012745204626083143\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6801470588235294,\n \"acc_stderr\": 0.02833295951403121,\n \"acc_norm\": 0.6801470588235294,\n \"acc_norm_stderr\": 0.02833295951403121\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6633986928104575,\n \"acc_stderr\": 0.019117213911495155,\n \"acc_norm\": 0.6633986928104575,\n \"acc_norm_stderr\": 0.019117213911495155\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6545454545454545,\n \"acc_stderr\": 0.04554619617541054,\n \"acc_norm\": 0.6545454545454545,\n \"acc_norm_stderr\": 0.04554619617541054\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.689795918367347,\n \"acc_stderr\": 0.029613459872484378,\n \"acc_norm\": 0.689795918367347,\n \"acc_norm_stderr\": 0.029613459872484378\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.835820895522388,\n \"acc_stderr\": 0.026193923544454132,\n \"acc_norm\": 0.835820895522388,\n \"acc_norm_stderr\": 0.026193923544454132\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.88,\n \"acc_stderr\": 0.03265986323710905,\n \"acc_norm\": 0.88,\n \"acc_norm_stderr\": 0.03265986323710905\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5602409638554217,\n \"acc_stderr\": 0.03864139923699122,\n \"acc_norm\": 0.5602409638554217,\n \"acc_norm_stderr\": 0.03864139923699122\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8187134502923976,\n \"acc_stderr\": 0.02954774168764004,\n \"acc_norm\": 0.8187134502923976,\n \"acc_norm_stderr\": 0.02954774168764004\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.3047735618115055,\n \"mc1_stderr\": 0.016114124156882452,\n \"mc2\": 0.45435317672164416,\n \"mc2_stderr\": 0.014528686611193308\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7734806629834254,\n \"acc_stderr\": 0.011764149054698332\n },\n \"harness|gsm8k|5\": {\n \"acc\": 
0.3912054586808188,\n \"acc_stderr\": 0.0134425024027943\n }\n}\n```", "repo_url": "https://huggingface.co/huangyt/Mistral-7B-v0.1-Open-Platypus_2.5w-r16-gate_up_down", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_14T17_36_45.221009", "path": ["**/details_harness|arc:challenge|25_2024-01-14T17-36-45.221009.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-14T17-36-45.221009.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_14T17_36_45.221009", "path": ["**/details_harness|gsm8k|5_2024-01-14T17-36-45.221009.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-14T17-36-45.221009.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_14T17_36_45.221009", "path": ["**/details_harness|hellaswag|10_2024-01-14T17-36-45.221009.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-14T17-36-45.221009.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_14T17_36_45.221009", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T17-36-45.221009.parquet", 
"**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-14T17-36-45.221009.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-14T17-36-45.221009.parquet", 
"**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T17-36-45.221009.parquet", 
"**/details_harness|hendrycksTest-nutrition|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-14T17-36-45.221009.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-14T17-36-45.221009.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_14T17_36_45.221009", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T17-36-45.221009.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T17-36-45.221009.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_14T17_36_45.221009", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-14T17-36-45.221009.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-14T17-36-45.221009.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_14T17_36_45.221009", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-14T17-36-45.221009.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-14T17-36-45.221009.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_14T17_36_45.221009", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T17-36-45.221009.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T17-36-45.221009.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_14T17_36_45.221009", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T17-36-45.221009.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T17-36-45.221009.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_14T17_36_45.221009", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-14T17-36-45.221009.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-14T17-36-45.221009.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_14T17_36_45.221009", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T17-36-45.221009.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T17-36-45.221009.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", 
"data_files": [{"split": "2024_01_14T17_36_45.221009", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T17-36-45.221009.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T17-36-45.221009.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_14T17_36_45.221009", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T17-36-45.221009.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T17-36-45.221009.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_14T17_36_45.221009", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T17-36-45.221009.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T17-36-45.221009.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_14T17_36_45.221009", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-14T17-36-45.221009.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-14T17-36-45.221009.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_14T17_36_45.221009", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-14T17-36-45.221009.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-14T17-36-45.221009.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_14T17_36_45.221009", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T17-36-45.221009.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T17-36-45.221009.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_14T17_36_45.221009", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-14T17-36-45.221009.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-14T17-36-45.221009.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_14T17_36_45.221009", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T17-36-45.221009.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T17-36-45.221009.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_14T17_36_45.221009", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T17-36-45.221009.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T17-36-45.221009.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_01_14T17_36_45.221009", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T17-36-45.221009.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T17-36-45.221009.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_14T17_36_45.221009", "path": 
["**/details_harness|hendrycksTest-global_facts|5_2024-01-14T17-36-45.221009.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-14T17-36-45.221009.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_14T17_36_45.221009", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T17-36-45.221009.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T17-36-45.221009.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_14T17_36_45.221009", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T17-36-45.221009.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T17-36-45.221009.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_14T17_36_45.221009", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T17-36-45.221009.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T17-36-45.221009.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_14T17_36_45.221009", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T17-36-45.221009.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T17-36-45.221009.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_14T17_36_45.221009", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T17-36-45.221009.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T17-36-45.221009.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_14T17_36_45.221009", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T17-36-45.221009.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T17-36-45.221009.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_14T17_36_45.221009", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T17-36-45.221009.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T17-36-45.221009.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_14T17_36_45.221009", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T17-36-45.221009.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T17-36-45.221009.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_14T17_36_45.221009", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T17-36-45.221009.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T17-36-45.221009.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_14T17_36_45.221009", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T17-36-45.221009.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T17-36-45.221009.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_14T17_36_45.221009", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T17-36-45.221009.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T17-36-45.221009.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_14T17_36_45.221009", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T17-36-45.221009.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T17-36-45.221009.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_14T17_36_45.221009", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T17-36-45.221009.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T17-36-45.221009.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_14T17_36_45.221009", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T17-36-45.221009.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T17-36-45.221009.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_14T17_36_45.221009", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-14T17-36-45.221009.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-14T17-36-45.221009.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_14T17_36_45.221009", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T17-36-45.221009.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T17-36-45.221009.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_14T17_36_45.221009", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-14T17-36-45.221009.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-14T17-36-45.221009.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_14T17_36_45.221009", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T17-36-45.221009.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T17-36-45.221009.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_14T17_36_45.221009", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T17-36-45.221009.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T17-36-45.221009.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": 
"2024_01_14T17_36_45.221009", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T17-36-45.221009.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T17-36-45.221009.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_01_14T17_36_45.221009", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-14T17-36-45.221009.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-14T17-36-45.221009.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_14T17_36_45.221009", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-14T17-36-45.221009.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-14T17-36-45.221009.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_14T17_36_45.221009", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T17-36-45.221009.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T17-36-45.221009.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_14T17_36_45.221009", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T17-36-45.221009.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T17-36-45.221009.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_14T17_36_45.221009", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T17-36-45.221009.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T17-36-45.221009.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_14T17_36_45.221009", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T17-36-45.221009.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T17-36-45.221009.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_01_14T17_36_45.221009", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-14T17-36-45.221009.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-14T17-36-45.221009.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_14T17_36_45.221009", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-14T17-36-45.221009.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-14T17-36-45.221009.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_14T17_36_45.221009", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-14T17-36-45.221009.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-14T17-36-45.221009.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_14T17_36_45.221009", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T17-36-45.221009.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T17-36-45.221009.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_14T17_36_45.221009", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-14T17-36-45.221009.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-14T17-36-45.221009.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_14T17_36_45.221009", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T17-36-45.221009.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T17-36-45.221009.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_14T17_36_45.221009", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T17-36-45.221009.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T17-36-45.221009.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_14T17_36_45.221009", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-14T17-36-45.221009.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-14T17-36-45.221009.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_14T17_36_45.221009", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-14T17-36-45.221009.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-14T17-36-45.221009.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_14T17_36_45.221009", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-14T17-36-45.221009.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-14T17-36-45.221009.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_14T17_36_45.221009", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T17-36-45.221009.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T17-36-45.221009.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_14T17_36_45.221009", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-14T17-36-45.221009.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-14T17-36-45.221009.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_14T17_36_45.221009", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-14T17-36-45.221009.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-14T17-36-45.221009.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_14T17_36_45.221009", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-14T17-36-45.221009.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-14T17-36-45.221009.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_14T17_36_45.221009", "path": ["**/details_harness|winogrande|5_2024-01-14T17-36-45.221009.parquet"]}, {"split": "latest", "path": 
["**/details_harness|winogrande|5_2024-01-14T17-36-45.221009.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_01_14T17_36_45.221009", "path": ["results_2024-01-14T17-36-45.221009.parquet"]}, {"split": "latest", "path": ["results_2024-01-14T17-36-45.221009.parquet"]}]}]} | 2024-01-14T17:39:25+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of huangyt/Mistral-7B-v0.1-Open-Platypus_2.5w-r16-gate_up_down
Dataset automatically created during the evaluation run of model huangyt/Mistral-7B-v0.1-Open-Platypus_2.5w-r16-gate_up_down on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
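A minimal sketch follows, since the loading snippet was dropped from this record during processing. The repository id below is inferred from the leaderboard's usual `details_<org>__<model>` naming pattern and is not stated explicitly in this record:

```python
from datasets import load_dataset

# Load the per-sample details of one evaluated task; "harness_winogrande_5"
# is one of the 63 task configurations listed in this record's metadata.
data = load_dataset(
    "open-llm-leaderboard/details_huangyt__Mistral-7B-v0.1-Open-Platypus_2.5w-r16-gate_up_down",
    "harness_winogrande_5",
    split="train",  # "train" always points to the latest results
)
```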
## Latest results
These are the latest results from run 2024-01-14T17:36:45.221009 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
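The JSON block that normally follows was not preserved in this record. The same aggregated metrics can be fetched from the "results" configuration declared in the metadata above (a sketch; the repository id is inferred as in the previous snippet):

```python
from datasets import load_dataset

# "results" stores the aggregated metrics; "latest" mirrors the newest timestamped split.
results = load_dataset(
    "open-llm-leaderboard/details_huangyt__Mistral-7B-v0.1-Open-Platypus_2.5w-r16-gate_up_down",
    "results",
    split="latest",
)
print(results[0])
```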
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Evaluation run of huangyt/Mistral-7B-v0.1-Open-Platypus_2.5w-r16-gate_up_down\n\n\n\nDataset automatically created during the evaluation run of model huangyt/Mistral-7B-v0.1-Open-Platypus_2.5w-r16-gate_up_down on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-01-14T17:36:45.221009(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of huangyt/Mistral-7B-v0.1-Open-Platypus_2.5w-r16-gate_up_down\n\n\n\nDataset automatically created during the evaluation run of model huangyt/Mistral-7B-v0.1-Open-Platypus_2.5w-r16-gate_up_down on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-01-14T17:36:45.221009(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] |
85c6d3e1ba23b2fe67b062ca7cc0c6ed8ae6666c |
# Dataset Card for Evaluation run of one-man-army/UNA-34Beagles-32K-bf16-v1
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [one-man-army/UNA-34Beagles-32K-bf16-v1](https://huggingface.co/one-man-army/UNA-34Beagles-32K-bf16-v1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_one-man-army__UNA-34Beagles-32K-bf16-v1",
"harness_winogrande_5",
split="train")
```
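To see which of the task configurations are available before loading one, the config names can be enumerated with the standard `datasets` API (a small sketch; the repository id here is the one stated in this card):

```python
from datasets import get_dataset_config_names

# Enumerate the task configurations plus the aggregated "results" config.
configs = get_dataset_config_names(
    "open-llm-leaderboard/details_one-man-army__UNA-34Beagles-32K-bf16-v1"
)
print(len(configs))
print(configs[:5])
```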
## Latest results
These are the [latest results from run 2024-01-14T18:01:24.840782](https://huggingface.co/datasets/open-llm-leaderboard/details_one-man-army__UNA-34Beagles-32K-bf16-v1/blob/main/results_2024-01-14T18-01-24.840782.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.7603825099190668,
"acc_stderr": 0.028403734149400593,
"acc_norm": 0.7656218376316938,
"acc_norm_stderr": 0.02893068310994367,
"mc1": 0.5887392900856793,
"mc1_stderr": 0.01722562708366087,
"mc2": 0.7354905615781797,
"mc2_stderr": 0.014104277111112697
},
"harness|arc:challenge|25": {
"acc": 0.7047781569965871,
"acc_stderr": 0.01332975029338232,
"acc_norm": 0.735494880546075,
"acc_norm_stderr": 0.012889272949313368
},
"harness|hellaswag|10": {
"acc": 0.6716789484166501,
"acc_stderr": 0.004686425851253278,
"acc_norm": 0.85929097789285,
"acc_norm_stderr": 0.00347010499020439
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.49,
"acc_stderr": 0.05024183937956911,
"acc_norm": 0.49,
"acc_norm_stderr": 0.05024183937956911
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.7481481481481481,
"acc_stderr": 0.03749850709174021,
"acc_norm": 0.7481481481481481,
"acc_norm_stderr": 0.03749850709174021
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.8552631578947368,
"acc_stderr": 0.028631951845930384,
"acc_norm": 0.8552631578947368,
"acc_norm_stderr": 0.028631951845930384
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.77,
"acc_stderr": 0.04229525846816505,
"acc_norm": 0.77,
"acc_norm_stderr": 0.04229525846816505
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.8150943396226416,
"acc_stderr": 0.023893351834464317,
"acc_norm": 0.8150943396226416,
"acc_norm_stderr": 0.023893351834464317
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.8680555555555556,
"acc_stderr": 0.02830096838204443,
"acc_norm": 0.8680555555555556,
"acc_norm_stderr": 0.02830096838204443
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.53,
"acc_stderr": 0.05016135580465919,
"acc_norm": 0.53,
"acc_norm_stderr": 0.05016135580465919
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.64,
"acc_stderr": 0.048241815132442176,
"acc_norm": 0.64,
"acc_norm_stderr": 0.048241815132442176
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.4,
"acc_stderr": 0.049236596391733084,
"acc_norm": 0.4,
"acc_norm_stderr": 0.049236596391733084
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.7398843930635838,
"acc_stderr": 0.03345036916788992,
"acc_norm": 0.7398843930635838,
"acc_norm_stderr": 0.03345036916788992
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.5588235294117647,
"acc_stderr": 0.049406356306056595,
"acc_norm": 0.5588235294117647,
"acc_norm_stderr": 0.049406356306056595
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.82,
"acc_stderr": 0.03861229196653695,
"acc_norm": 0.82,
"acc_norm_stderr": 0.03861229196653695
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.7914893617021277,
"acc_stderr": 0.02655698211783874,
"acc_norm": 0.7914893617021277,
"acc_norm_stderr": 0.02655698211783874
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5877192982456141,
"acc_stderr": 0.04630653203366596,
"acc_norm": 0.5877192982456141,
"acc_norm_stderr": 0.04630653203366596
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.7448275862068966,
"acc_stderr": 0.03632984052707842,
"acc_norm": 0.7448275862068966,
"acc_norm_stderr": 0.03632984052707842
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.7142857142857143,
"acc_stderr": 0.02326651221373058,
"acc_norm": 0.7142857142857143,
"acc_norm_stderr": 0.02326651221373058
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.5873015873015873,
"acc_stderr": 0.04403438954768176,
"acc_norm": 0.5873015873015873,
"acc_norm_stderr": 0.04403438954768176
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.59,
"acc_stderr": 0.04943110704237102,
"acc_norm": 0.59,
"acc_norm_stderr": 0.04943110704237102
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.9096774193548387,
"acc_stderr": 0.016306570644488323,
"acc_norm": 0.9096774193548387,
"acc_norm_stderr": 0.016306570644488323
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.6305418719211823,
"acc_stderr": 0.033959703819985726,
"acc_norm": 0.6305418719211823,
"acc_norm_stderr": 0.033959703819985726
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.81,
"acc_stderr": 0.03942772444036623,
"acc_norm": 0.81,
"acc_norm_stderr": 0.03942772444036623
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.8666666666666667,
"acc_stderr": 0.026544435312706463,
"acc_norm": 0.8666666666666667,
"acc_norm_stderr": 0.026544435312706463
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.9191919191919192,
"acc_stderr": 0.019417681889724536,
"acc_norm": 0.9191919191919192,
"acc_norm_stderr": 0.019417681889724536
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9585492227979274,
"acc_stderr": 0.01438543285747644,
"acc_norm": 0.9585492227979274,
"acc_norm_stderr": 0.01438543285747644
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.8179487179487179,
"acc_stderr": 0.019565236782930893,
"acc_norm": 0.8179487179487179,
"acc_norm_stderr": 0.019565236782930893
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.45185185185185184,
"acc_stderr": 0.030343862998512623,
"acc_norm": 0.45185185185185184,
"acc_norm_stderr": 0.030343862998512623
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.8445378151260504,
"acc_stderr": 0.023536818625398897,
"acc_norm": 0.8445378151260504,
"acc_norm_stderr": 0.023536818625398897
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.4768211920529801,
"acc_stderr": 0.04078093859163083,
"acc_norm": 0.4768211920529801,
"acc_norm_stderr": 0.04078093859163083
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.9211009174311927,
"acc_stderr": 0.0115581981137696,
"acc_norm": 0.9211009174311927,
"acc_norm_stderr": 0.0115581981137696
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.6805555555555556,
"acc_stderr": 0.03179876342176851,
"acc_norm": 0.6805555555555556,
"acc_norm_stderr": 0.03179876342176851
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.9117647058823529,
"acc_stderr": 0.019907399791316945,
"acc_norm": 0.9117647058823529,
"acc_norm_stderr": 0.019907399791316945
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.9071729957805907,
"acc_stderr": 0.01888975055095671,
"acc_norm": 0.9071729957805907,
"acc_norm_stderr": 0.01888975055095671
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.7847533632286996,
"acc_stderr": 0.02758406660220827,
"acc_norm": 0.7847533632286996,
"acc_norm_stderr": 0.02758406660220827
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.8702290076335878,
"acc_stderr": 0.029473649496907065,
"acc_norm": 0.8702290076335878,
"acc_norm_stderr": 0.029473649496907065
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.9008264462809917,
"acc_stderr": 0.027285246312758957,
"acc_norm": 0.9008264462809917,
"acc_norm_stderr": 0.027285246312758957
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.8981481481481481,
"acc_stderr": 0.029239272675632748,
"acc_norm": 0.8981481481481481,
"acc_norm_stderr": 0.029239272675632748
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.8650306748466258,
"acc_stderr": 0.026845765054553838,
"acc_norm": 0.8650306748466258,
"acc_norm_stderr": 0.026845765054553838
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5357142857142857,
"acc_stderr": 0.04733667890053756,
"acc_norm": 0.5357142857142857,
"acc_norm_stderr": 0.04733667890053756
},
"harness|hendrycksTest-management|5": {
"acc": 0.8543689320388349,
"acc_stderr": 0.034926064766237906,
"acc_norm": 0.8543689320388349,
"acc_norm_stderr": 0.034926064766237906
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.9444444444444444,
"acc_stderr": 0.015006312806446912,
"acc_norm": 0.9444444444444444,
"acc_norm_stderr": 0.015006312806446912
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.9,
"acc_stderr": 0.030151134457776334,
"acc_norm": 0.9,
"acc_norm_stderr": 0.030151134457776334
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.9042145593869731,
"acc_stderr": 0.010524031079055831,
"acc_norm": 0.9042145593869731,
"acc_norm_stderr": 0.010524031079055831
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.8121387283236994,
"acc_stderr": 0.021029269752423203,
"acc_norm": 0.8121387283236994,
"acc_norm_stderr": 0.021029269752423203
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.7854748603351955,
"acc_stderr": 0.013728923407828855,
"acc_norm": 0.7854748603351955,
"acc_norm_stderr": 0.013728923407828855
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.8496732026143791,
"acc_stderr": 0.020464175124332625,
"acc_norm": 0.8496732026143791,
"acc_norm_stderr": 0.020464175124332625
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.8102893890675241,
"acc_stderr": 0.022268196258783228,
"acc_norm": 0.8102893890675241,
"acc_norm_stderr": 0.022268196258783228
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.8672839506172839,
"acc_stderr": 0.018877353839571842,
"acc_norm": 0.8672839506172839,
"acc_norm_stderr": 0.018877353839571842
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.6276595744680851,
"acc_stderr": 0.02883892147125145,
"acc_norm": 0.6276595744680851,
"acc_norm_stderr": 0.02883892147125145
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.5788787483702738,
"acc_stderr": 0.012610325733489905,
"acc_norm": 0.5788787483702738,
"acc_norm_stderr": 0.012610325733489905
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.8198529411764706,
"acc_stderr": 0.02334516361654485,
"acc_norm": 0.8198529411764706,
"acc_norm_stderr": 0.02334516361654485
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.815359477124183,
"acc_stderr": 0.015697029240757776,
"acc_norm": 0.815359477124183,
"acc_norm_stderr": 0.015697029240757776
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.7363636363636363,
"acc_stderr": 0.04220224692971987,
"acc_norm": 0.7363636363636363,
"acc_norm_stderr": 0.04220224692971987
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.8163265306122449,
"acc_stderr": 0.02478907133200765,
"acc_norm": 0.8163265306122449,
"acc_norm_stderr": 0.02478907133200765
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8855721393034826,
"acc_stderr": 0.022509345325101706,
"acc_norm": 0.8855721393034826,
"acc_norm_stderr": 0.022509345325101706
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.89,
"acc_stderr": 0.03144660377352203,
"acc_norm": 0.89,
"acc_norm_stderr": 0.03144660377352203
},
"harness|hendrycksTest-virology|5": {
"acc": 0.572289156626506,
"acc_stderr": 0.038515976837185335,
"acc_norm": 0.572289156626506,
"acc_norm_stderr": 0.038515976837185335
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8947368421052632,
"acc_stderr": 0.023537557657892567,
"acc_norm": 0.8947368421052632,
"acc_norm_stderr": 0.023537557657892567
},
"harness|truthfulqa:mc|0": {
"mc1": 0.5887392900856793,
"mc1_stderr": 0.01722562708366087,
"mc2": 0.7354905615781797,
"mc2_stderr": 0.014104277111112697
},
"harness|winogrande|5": {
"acc": 0.829518547750592,
"acc_stderr": 0.0105690211228259
},
"harness|gsm8k|5": {
"acc": 0.6004548900682335,
"acc_stderr": 0.013491660298815985
}
}
```
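Each per-task block above follows the same shape (`acc`/`acc_norm` plus standard errors, or `mc1`/`mc2` for TruthfulQA), so the JSON can be flattened into rows for quick side-by-side comparison. A minimal sketch, assuming the JSON above has been saved as `results.json` (a hypothetical local filename):

```python
import json

# The "Latest results" JSON shown above, saved locally.
with open("results.json") as f:
    scores = json.load(f)

# Flatten per-task metrics into (task, metric, value) rows, skipping the "all" aggregate.
rows = [
    (task, metric, value)
    for task, metrics in scores.items()
    if task != "all"
    for metric, value in metrics.items()
]
for task, metric, value in sorted(rows)[:5]:
    print(f"{task:55s} {metric:15s} {value:.4f}")
```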
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | open-llm-leaderboard/details_one-man-army__UNA-34Beagles-32K-bf16-v1 | [
"region:us"
] | 2024-01-14T18:03:41+00:00 | {"pretty_name": "Evaluation run of one-man-army/UNA-34Beagles-32K-bf16-v1", "dataset_summary": "Dataset automatically created during the evaluation run of model [one-man-army/UNA-34Beagles-32K-bf16-v1](https://huggingface.co/one-man-army/UNA-34Beagles-32K-bf16-v1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_one-man-army__UNA-34Beagles-32K-bf16-v1\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-14T18:01:24.840782](https://huggingface.co/datasets/open-llm-leaderboard/details_one-man-army__UNA-34Beagles-32K-bf16-v1/blob/main/results_2024-01-14T18-01-24.840782.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.7603825099190668,\n \"acc_stderr\": 0.028403734149400593,\n \"acc_norm\": 0.7656218376316938,\n \"acc_norm_stderr\": 0.02893068310994367,\n \"mc1\": 0.5887392900856793,\n \"mc1_stderr\": 0.01722562708366087,\n \"mc2\": 0.7354905615781797,\n \"mc2_stderr\": 0.014104277111112697\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.7047781569965871,\n \"acc_stderr\": 0.01332975029338232,\n \"acc_norm\": 0.735494880546075,\n \"acc_norm_stderr\": 0.012889272949313368\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6716789484166501,\n \"acc_stderr\": 0.004686425851253278,\n \"acc_norm\": 0.85929097789285,\n \"acc_norm_stderr\": 0.00347010499020439\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.49,\n \"acc_stderr\": 0.05024183937956911,\n \"acc_norm\": 0.49,\n \"acc_norm_stderr\": 0.05024183937956911\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.7481481481481481,\n \"acc_stderr\": 0.03749850709174021,\n \"acc_norm\": 0.7481481481481481,\n \"acc_norm_stderr\": 0.03749850709174021\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.8552631578947368,\n \"acc_stderr\": 0.028631951845930384,\n \"acc_norm\": 0.8552631578947368,\n \"acc_norm_stderr\": 0.028631951845930384\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.77,\n \"acc_stderr\": 0.04229525846816505,\n \"acc_norm\": 0.77,\n \"acc_norm_stderr\": 0.04229525846816505\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.8150943396226416,\n \"acc_stderr\": 0.023893351834464317,\n \"acc_norm\": 0.8150943396226416,\n \"acc_norm_stderr\": 0.023893351834464317\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.8680555555555556,\n \"acc_stderr\": 0.02830096838204443,\n \"acc_norm\": 0.8680555555555556,\n \"acc_norm_stderr\": 0.02830096838204443\n },\n 
\"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.53,\n \"acc_stderr\": 0.05016135580465919,\n \"acc_norm\": 0.53,\n \"acc_norm_stderr\": 0.05016135580465919\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.64,\n \"acc_stderr\": 0.048241815132442176,\n \"acc_norm\": 0.64,\n \"acc_norm_stderr\": 0.048241815132442176\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.4,\n \"acc_stderr\": 0.049236596391733084,\n \"acc_norm\": 0.4,\n \"acc_norm_stderr\": 0.049236596391733084\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.7398843930635838,\n \"acc_stderr\": 0.03345036916788992,\n \"acc_norm\": 0.7398843930635838,\n \"acc_norm_stderr\": 0.03345036916788992\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.5588235294117647,\n \"acc_stderr\": 0.049406356306056595,\n \"acc_norm\": 0.5588235294117647,\n \"acc_norm_stderr\": 0.049406356306056595\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.82,\n \"acc_stderr\": 0.03861229196653695,\n \"acc_norm\": 0.82,\n \"acc_norm_stderr\": 0.03861229196653695\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.7914893617021277,\n \"acc_stderr\": 0.02655698211783874,\n \"acc_norm\": 0.7914893617021277,\n \"acc_norm_stderr\": 0.02655698211783874\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5877192982456141,\n \"acc_stderr\": 0.04630653203366596,\n \"acc_norm\": 0.5877192982456141,\n \"acc_norm_stderr\": 0.04630653203366596\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.7448275862068966,\n \"acc_stderr\": 0.03632984052707842,\n \"acc_norm\": 0.7448275862068966,\n \"acc_norm_stderr\": 0.03632984052707842\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.7142857142857143,\n \"acc_stderr\": 0.02326651221373058,\n \"acc_norm\": 0.7142857142857143,\n \"acc_norm_stderr\": 0.02326651221373058\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.5873015873015873,\n \"acc_stderr\": 0.04403438954768176,\n \"acc_norm\": 0.5873015873015873,\n \"acc_norm_stderr\": 0.04403438954768176\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.59,\n \"acc_stderr\": 0.04943110704237102,\n \"acc_norm\": 0.59,\n \"acc_norm_stderr\": 0.04943110704237102\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.9096774193548387,\n \"acc_stderr\": 0.016306570644488323,\n \"acc_norm\": 0.9096774193548387,\n \"acc_norm_stderr\": 0.016306570644488323\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.6305418719211823,\n \"acc_stderr\": 0.033959703819985726,\n \"acc_norm\": 0.6305418719211823,\n \"acc_norm_stderr\": 0.033959703819985726\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.81,\n \"acc_stderr\": 0.03942772444036623,\n \"acc_norm\": 0.81,\n \"acc_norm_stderr\": 0.03942772444036623\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.8666666666666667,\n \"acc_stderr\": 0.026544435312706463,\n \"acc_norm\": 0.8666666666666667,\n \"acc_norm_stderr\": 0.026544435312706463\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.9191919191919192,\n \"acc_stderr\": 0.019417681889724536,\n \"acc_norm\": 0.9191919191919192,\n \"acc_norm_stderr\": 0.019417681889724536\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.9585492227979274,\n \"acc_stderr\": 0.01438543285747644,\n \"acc_norm\": 0.9585492227979274,\n 
\"acc_norm_stderr\": 0.01438543285747644\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.8179487179487179,\n \"acc_stderr\": 0.019565236782930893,\n \"acc_norm\": 0.8179487179487179,\n \"acc_norm_stderr\": 0.019565236782930893\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.45185185185185184,\n \"acc_stderr\": 0.030343862998512623,\n \"acc_norm\": 0.45185185185185184,\n \"acc_norm_stderr\": 0.030343862998512623\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.8445378151260504,\n \"acc_stderr\": 0.023536818625398897,\n \"acc_norm\": 0.8445378151260504,\n \"acc_norm_stderr\": 0.023536818625398897\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.4768211920529801,\n \"acc_stderr\": 0.04078093859163083,\n \"acc_norm\": 0.4768211920529801,\n \"acc_norm_stderr\": 0.04078093859163083\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.9211009174311927,\n \"acc_stderr\": 0.0115581981137696,\n \"acc_norm\": 0.9211009174311927,\n \"acc_norm_stderr\": 0.0115581981137696\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.6805555555555556,\n \"acc_stderr\": 0.03179876342176851,\n \"acc_norm\": 0.6805555555555556,\n \"acc_norm_stderr\": 0.03179876342176851\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.9117647058823529,\n \"acc_stderr\": 0.019907399791316945,\n \"acc_norm\": 0.9117647058823529,\n \"acc_norm_stderr\": 0.019907399791316945\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.9071729957805907,\n \"acc_stderr\": 0.01888975055095671,\n \"acc_norm\": 0.9071729957805907,\n \"acc_norm_stderr\": 0.01888975055095671\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.7847533632286996,\n \"acc_stderr\": 0.02758406660220827,\n \"acc_norm\": 0.7847533632286996,\n \"acc_norm_stderr\": 0.02758406660220827\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.8702290076335878,\n \"acc_stderr\": 0.029473649496907065,\n \"acc_norm\": 0.8702290076335878,\n \"acc_norm_stderr\": 0.029473649496907065\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.9008264462809917,\n \"acc_stderr\": 0.027285246312758957,\n \"acc_norm\": 0.9008264462809917,\n \"acc_norm_stderr\": 0.027285246312758957\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8981481481481481,\n \"acc_stderr\": 0.029239272675632748,\n \"acc_norm\": 0.8981481481481481,\n \"acc_norm_stderr\": 0.029239272675632748\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.8650306748466258,\n \"acc_stderr\": 0.026845765054553838,\n \"acc_norm\": 0.8650306748466258,\n \"acc_norm_stderr\": 0.026845765054553838\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5357142857142857,\n \"acc_stderr\": 0.04733667890053756,\n \"acc_norm\": 0.5357142857142857,\n \"acc_norm_stderr\": 0.04733667890053756\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.8543689320388349,\n \"acc_stderr\": 0.034926064766237906,\n \"acc_norm\": 0.8543689320388349,\n \"acc_norm_stderr\": 0.034926064766237906\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.9444444444444444,\n \"acc_stderr\": 0.015006312806446912,\n \"acc_norm\": 0.9444444444444444,\n \"acc_norm_stderr\": 0.015006312806446912\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.9,\n \"acc_stderr\": 0.030151134457776334,\n \"acc_norm\": 0.9,\n \"acc_norm_stderr\": 0.030151134457776334\n },\n 
\"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.9042145593869731,\n \"acc_stderr\": 0.010524031079055831,\n \"acc_norm\": 0.9042145593869731,\n \"acc_norm_stderr\": 0.010524031079055831\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.8121387283236994,\n \"acc_stderr\": 0.021029269752423203,\n \"acc_norm\": 0.8121387283236994,\n \"acc_norm_stderr\": 0.021029269752423203\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.7854748603351955,\n \"acc_stderr\": 0.013728923407828855,\n \"acc_norm\": 0.7854748603351955,\n \"acc_norm_stderr\": 0.013728923407828855\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.8496732026143791,\n \"acc_stderr\": 0.020464175124332625,\n \"acc_norm\": 0.8496732026143791,\n \"acc_norm_stderr\": 0.020464175124332625\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.8102893890675241,\n \"acc_stderr\": 0.022268196258783228,\n \"acc_norm\": 0.8102893890675241,\n \"acc_norm_stderr\": 0.022268196258783228\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.8672839506172839,\n \"acc_stderr\": 0.018877353839571842,\n \"acc_norm\": 0.8672839506172839,\n \"acc_norm_stderr\": 0.018877353839571842\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.6276595744680851,\n \"acc_stderr\": 0.02883892147125145,\n \"acc_norm\": 0.6276595744680851,\n \"acc_norm_stderr\": 0.02883892147125145\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.5788787483702738,\n \"acc_stderr\": 0.012610325733489905,\n \"acc_norm\": 0.5788787483702738,\n \"acc_norm_stderr\": 0.012610325733489905\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.8198529411764706,\n \"acc_stderr\": 0.02334516361654485,\n \"acc_norm\": 0.8198529411764706,\n \"acc_norm_stderr\": 0.02334516361654485\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.815359477124183,\n \"acc_stderr\": 0.015697029240757776,\n \"acc_norm\": 0.815359477124183,\n \"acc_norm_stderr\": 0.015697029240757776\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7363636363636363,\n \"acc_stderr\": 0.04220224692971987,\n \"acc_norm\": 0.7363636363636363,\n \"acc_norm_stderr\": 0.04220224692971987\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.8163265306122449,\n \"acc_stderr\": 0.02478907133200765,\n \"acc_norm\": 0.8163265306122449,\n \"acc_norm_stderr\": 0.02478907133200765\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8855721393034826,\n \"acc_stderr\": 0.022509345325101706,\n \"acc_norm\": 0.8855721393034826,\n \"acc_norm_stderr\": 0.022509345325101706\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.89,\n \"acc_stderr\": 0.03144660377352203,\n \"acc_norm\": 0.89,\n \"acc_norm_stderr\": 0.03144660377352203\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.572289156626506,\n \"acc_stderr\": 0.038515976837185335,\n \"acc_norm\": 0.572289156626506,\n \"acc_norm_stderr\": 0.038515976837185335\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8947368421052632,\n \"acc_stderr\": 0.023537557657892567,\n \"acc_norm\": 0.8947368421052632,\n \"acc_norm_stderr\": 0.023537557657892567\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.5887392900856793,\n \"mc1_stderr\": 0.01722562708366087,\n \"mc2\": 0.7354905615781797,\n \"mc2_stderr\": 0.014104277111112697\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.829518547750592,\n \"acc_stderr\": 0.0105690211228259\n },\n \"harness|gsm8k|5\": {\n \"acc\": 
0.6004548900682335,\n \"acc_stderr\": 0.013491660298815985\n }\n}\n```", "repo_url": "https://huggingface.co/one-man-army/UNA-34Beagles-32K-bf16-v1", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_14T18_01_24.840782", "path": ["**/details_harness|arc:challenge|25_2024-01-14T18-01-24.840782.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-14T18-01-24.840782.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_14T18_01_24.840782", "path": ["**/details_harness|gsm8k|5_2024-01-14T18-01-24.840782.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-14T18-01-24.840782.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_14T18_01_24.840782", "path": ["**/details_harness|hellaswag|10_2024-01-14T18-01-24.840782.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-14T18-01-24.840782.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_14T18_01_24.840782", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T18-01-24.840782.parquet", 
"**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-14T18-01-24.840782.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T18-01-24.840782.parquet", 
"**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-14T18-01-24.840782.parquet", 
"**/details_harness|hendrycksTest-philosophy|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-14T18-01-24.840782.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-14T18-01-24.840782.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_14T18_01_24.840782", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T18-01-24.840782.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T18-01-24.840782.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_14T18_01_24.840782", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-14T18-01-24.840782.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-14T18-01-24.840782.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_14T18_01_24.840782", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-14T18-01-24.840782.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-14T18-01-24.840782.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_14T18_01_24.840782", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T18-01-24.840782.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T18-01-24.840782.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_14T18_01_24.840782", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T18-01-24.840782.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T18-01-24.840782.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_14T18_01_24.840782", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-14T18-01-24.840782.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-14T18-01-24.840782.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_14T18_01_24.840782", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T18-01-24.840782.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T18-01-24.840782.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_14T18_01_24.840782", "path": 
["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T18-01-24.840782.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T18-01-24.840782.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_14T18_01_24.840782", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T18-01-24.840782.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T18-01-24.840782.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_14T18_01_24.840782", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T18-01-24.840782.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T18-01-24.840782.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_14T18_01_24.840782", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-14T18-01-24.840782.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-14T18-01-24.840782.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_14T18_01_24.840782", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-14T18-01-24.840782.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-14T18-01-24.840782.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_14T18_01_24.840782", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T18-01-24.840782.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T18-01-24.840782.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_14T18_01_24.840782", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-14T18-01-24.840782.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-14T18-01-24.840782.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_14T18_01_24.840782", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T18-01-24.840782.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T18-01-24.840782.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_14T18_01_24.840782", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T18-01-24.840782.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T18-01-24.840782.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_01_14T18_01_24.840782", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T18-01-24.840782.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T18-01-24.840782.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_14T18_01_24.840782", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-14T18-01-24.840782.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-global_facts|5_2024-01-14T18-01-24.840782.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_14T18_01_24.840782", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T18-01-24.840782.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T18-01-24.840782.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_14T18_01_24.840782", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T18-01-24.840782.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T18-01-24.840782.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_14T18_01_24.840782", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T18-01-24.840782.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T18-01-24.840782.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_14T18_01_24.840782", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T18-01-24.840782.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T18-01-24.840782.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_14T18_01_24.840782", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T18-01-24.840782.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T18-01-24.840782.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_14T18_01_24.840782", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T18-01-24.840782.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T18-01-24.840782.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_14T18_01_24.840782", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T18-01-24.840782.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T18-01-24.840782.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_14T18_01_24.840782", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T18-01-24.840782.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T18-01-24.840782.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_14T18_01_24.840782", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T18-01-24.840782.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T18-01-24.840782.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_14T18_01_24.840782", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T18-01-24.840782.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T18-01-24.840782.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_14T18_01_24.840782", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T18-01-24.840782.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T18-01-24.840782.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_14T18_01_24.840782", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T18-01-24.840782.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T18-01-24.840782.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_14T18_01_24.840782", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T18-01-24.840782.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T18-01-24.840782.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_14T18_01_24.840782", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T18-01-24.840782.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T18-01-24.840782.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_14T18_01_24.840782", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-14T18-01-24.840782.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-14T18-01-24.840782.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_14T18_01_24.840782", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T18-01-24.840782.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T18-01-24.840782.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_14T18_01_24.840782", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-14T18-01-24.840782.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-14T18-01-24.840782.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_14T18_01_24.840782", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T18-01-24.840782.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T18-01-24.840782.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_14T18_01_24.840782", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T18-01-24.840782.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T18-01-24.840782.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_14T18_01_24.840782", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T18-01-24.840782.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T18-01-24.840782.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_01_14T18_01_24.840782", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-14T18-01-24.840782.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-14T18-01-24.840782.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_14T18_01_24.840782", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-14T18-01-24.840782.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-14T18-01-24.840782.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_14T18_01_24.840782", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T18-01-24.840782.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T18-01-24.840782.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_14T18_01_24.840782", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T18-01-24.840782.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T18-01-24.840782.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_14T18_01_24.840782", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T18-01-24.840782.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T18-01-24.840782.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_14T18_01_24.840782", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T18-01-24.840782.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T18-01-24.840782.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_01_14T18_01_24.840782", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-14T18-01-24.840782.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-14T18-01-24.840782.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_14T18_01_24.840782", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-14T18-01-24.840782.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-14T18-01-24.840782.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_14T18_01_24.840782", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-14T18-01-24.840782.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-14T18-01-24.840782.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_14T18_01_24.840782", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T18-01-24.840782.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T18-01-24.840782.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_14T18_01_24.840782", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2024-01-14T18-01-24.840782.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-14T18-01-24.840782.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_14T18_01_24.840782", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T18-01-24.840782.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T18-01-24.840782.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_14T18_01_24.840782", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T18-01-24.840782.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T18-01-24.840782.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_14T18_01_24.840782", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-14T18-01-24.840782.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-14T18-01-24.840782.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_14T18_01_24.840782", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-14T18-01-24.840782.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-14T18-01-24.840782.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_14T18_01_24.840782", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-14T18-01-24.840782.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-14T18-01-24.840782.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_14T18_01_24.840782", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T18-01-24.840782.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T18-01-24.840782.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_14T18_01_24.840782", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-14T18-01-24.840782.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-14T18-01-24.840782.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_14T18_01_24.840782", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-14T18-01-24.840782.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-14T18-01-24.840782.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_14T18_01_24.840782", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-14T18-01-24.840782.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-14T18-01-24.840782.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_14T18_01_24.840782", "path": ["**/details_harness|winogrande|5_2024-01-14T18-01-24.840782.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-14T18-01-24.840782.parquet"]}]}, {"config_name": "results", "data_files": [{"split": 
"2024_01_14T18_01_24.840782", "path": ["results_2024-01-14T18-01-24.840782.parquet"]}, {"split": "latest", "path": ["results_2024-01-14T18-01-24.840782.parquet"]}]}]} | 2024-01-14T18:04:02+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of one-man-army/UNA-34Beagles-32K-bf16-v1
Dataset automatically created during the evaluation run of model one-man-army/UNA-34Beagles-32K-bf16-v1 on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2024-01-14T18:01:24.840782 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Evaluation run of one-man-army/UNA-34Beagles-32K-bf16-v1\n\n\n\nDataset automatically created during the evaluation run of model one-man-army/UNA-34Beagles-32K-bf16-v1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-01-14T18:01:24.840782(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of one-man-army/UNA-34Beagles-32K-bf16-v1\n\n\n\nDataset automatically created during the evaluation run of model one-man-army/UNA-34Beagles-32K-bf16-v1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-01-14T18:01:24.840782(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] |
5ed262282cbe1119ce881f0e8b25206c205e9345 |
# Dataset Card for Evaluation run of argilla/distilabeled-Marcoro14-7B-slerp-full
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [argilla/distilabeled-Marcoro14-7B-slerp-full](https://huggingface.co/argilla/distilabeled-Marcoro14-7B-slerp-full) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_argilla__distilabeled-Marcoro14-7B-slerp-full",
"harness_winogrande_5",
split="train")
```
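As a quick sanity check after loading (a sketch; the exact column names differ per task and harness version), you can inspect the split directly:

```python
# Inspect the loaded split: size, columns, and the first evaluation record.
print(data)               # number of rows and column names
print(data.column_names)  # task-specific fields (prompts, choices, metrics, ...)
print(data[0])            # first record of the winogrande 5-shot details
```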
## Latest results
These are the [latest results from run 2024-01-14T18:07:10.931926](https://huggingface.co/datasets/open-llm-leaderboard/details_argilla__distilabeled-Marcoro14-7B-slerp-full/blob/main/results_2024-01-14T18-07-10.931926.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6579983930115316,
"acc_stderr": 0.031959390460197495,
"acc_norm": 0.6579231845624166,
"acc_norm_stderr": 0.03261951935121804,
"mc1": 0.48225214198286415,
"mc1_stderr": 0.017492470843075363,
"mc2": 0.6421417472476668,
"mc2_stderr": 0.015159369575596757
},
"harness|arc:challenge|25": {
"acc": 0.6783276450511946,
"acc_stderr": 0.013650488084494166,
"acc_norm": 0.7064846416382252,
"acc_norm_stderr": 0.01330725044494111
},
"harness|hellaswag|10": {
"acc": 0.6974706233817964,
"acc_stderr": 0.004584144014654942,
"acc_norm": 0.8755228042222665,
"acc_norm_stderr": 0.0032945048075552286
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.32,
"acc_stderr": 0.04688261722621504,
"acc_norm": 0.32,
"acc_norm_stderr": 0.04688261722621504
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6814814814814815,
"acc_stderr": 0.04024778401977108,
"acc_norm": 0.6814814814814815,
"acc_norm_stderr": 0.04024778401977108
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6973684210526315,
"acc_stderr": 0.037385206761196686,
"acc_norm": 0.6973684210526315,
"acc_norm_stderr": 0.037385206761196686
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.63,
"acc_stderr": 0.048523658709391,
"acc_norm": 0.63,
"acc_norm_stderr": 0.048523658709391
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7283018867924528,
"acc_stderr": 0.027377706624670713,
"acc_norm": 0.7283018867924528,
"acc_norm_stderr": 0.027377706624670713
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.03476590104304134,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.03476590104304134
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.48,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.48,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.54,
"acc_stderr": 0.05009082659620333,
"acc_norm": 0.54,
"acc_norm_stderr": 0.05009082659620333
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.32,
"acc_stderr": 0.04688261722621504,
"acc_norm": 0.32,
"acc_norm_stderr": 0.04688261722621504
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6647398843930635,
"acc_stderr": 0.03599586301247077,
"acc_norm": 0.6647398843930635,
"acc_norm_stderr": 0.03599586301247077
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4411764705882353,
"acc_stderr": 0.049406356306056595,
"acc_norm": 0.4411764705882353,
"acc_norm_stderr": 0.049406356306056595
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.78,
"acc_stderr": 0.04163331998932262,
"acc_norm": 0.78,
"acc_norm_stderr": 0.04163331998932262
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5914893617021276,
"acc_stderr": 0.032134180267015755,
"acc_norm": 0.5914893617021276,
"acc_norm_stderr": 0.032134180267015755
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5087719298245614,
"acc_stderr": 0.04702880432049615,
"acc_norm": 0.5087719298245614,
"acc_norm_stderr": 0.04702880432049615
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5517241379310345,
"acc_stderr": 0.04144311810878152,
"acc_norm": 0.5517241379310345,
"acc_norm_stderr": 0.04144311810878152
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.43386243386243384,
"acc_stderr": 0.02552503438247489,
"acc_norm": 0.43386243386243384,
"acc_norm_stderr": 0.02552503438247489
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4365079365079365,
"acc_stderr": 0.04435932892851466,
"acc_norm": 0.4365079365079365,
"acc_norm_stderr": 0.04435932892851466
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695235,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695235
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7903225806451613,
"acc_stderr": 0.023157879349083525,
"acc_norm": 0.7903225806451613,
"acc_norm_stderr": 0.023157879349083525
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.4975369458128079,
"acc_stderr": 0.03517945038691063,
"acc_norm": 0.4975369458128079,
"acc_norm_stderr": 0.03517945038691063
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.71,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.71,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7575757575757576,
"acc_stderr": 0.03346409881055953,
"acc_norm": 0.7575757575757576,
"acc_norm_stderr": 0.03346409881055953
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.797979797979798,
"acc_stderr": 0.02860620428922987,
"acc_norm": 0.797979797979798,
"acc_norm_stderr": 0.02860620428922987
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9067357512953368,
"acc_stderr": 0.02098685459328974,
"acc_norm": 0.9067357512953368,
"acc_norm_stderr": 0.02098685459328974
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6743589743589744,
"acc_stderr": 0.02375966576741229,
"acc_norm": 0.6743589743589744,
"acc_norm_stderr": 0.02375966576741229
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.35185185185185186,
"acc_stderr": 0.029116617606083008,
"acc_norm": 0.35185185185185186,
"acc_norm_stderr": 0.029116617606083008
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6764705882352942,
"acc_stderr": 0.030388353551886786,
"acc_norm": 0.6764705882352942,
"acc_norm_stderr": 0.030388353551886786
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3576158940397351,
"acc_stderr": 0.03913453431177258,
"acc_norm": 0.3576158940397351,
"acc_norm_stderr": 0.03913453431177258
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8532110091743119,
"acc_stderr": 0.015173141845126243,
"acc_norm": 0.8532110091743119,
"acc_norm_stderr": 0.015173141845126243
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5370370370370371,
"acc_stderr": 0.03400603625538271,
"acc_norm": 0.5370370370370371,
"acc_norm_stderr": 0.03400603625538271
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8578431372549019,
"acc_stderr": 0.0245098039215686,
"acc_norm": 0.8578431372549019,
"acc_norm_stderr": 0.0245098039215686
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7974683544303798,
"acc_stderr": 0.026160568246601443,
"acc_norm": 0.7974683544303798,
"acc_norm_stderr": 0.026160568246601443
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6995515695067265,
"acc_stderr": 0.03076935200822914,
"acc_norm": 0.6995515695067265,
"acc_norm_stderr": 0.03076935200822914
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7862595419847328,
"acc_stderr": 0.0359546161177469,
"acc_norm": 0.7862595419847328,
"acc_norm_stderr": 0.0359546161177469
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7768595041322314,
"acc_stderr": 0.03800754475228732,
"acc_norm": 0.7768595041322314,
"acc_norm_stderr": 0.03800754475228732
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7870370370370371,
"acc_stderr": 0.0395783547198098,
"acc_norm": 0.7870370370370371,
"acc_norm_stderr": 0.0395783547198098
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7607361963190185,
"acc_stderr": 0.033519538795212696,
"acc_norm": 0.7607361963190185,
"acc_norm_stderr": 0.033519538795212696
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.45535714285714285,
"acc_stderr": 0.047268355537191,
"acc_norm": 0.45535714285714285,
"acc_norm_stderr": 0.047268355537191
},
"harness|hendrycksTest-management|5": {
"acc": 0.7766990291262136,
"acc_stderr": 0.04123553189891431,
"acc_norm": 0.7766990291262136,
"acc_norm_stderr": 0.04123553189891431
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8888888888888888,
"acc_stderr": 0.020588491316092368,
"acc_norm": 0.8888888888888888,
"acc_norm_stderr": 0.020588491316092368
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.72,
"acc_stderr": 0.04512608598542128,
"acc_norm": 0.72,
"acc_norm_stderr": 0.04512608598542128
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8288633461047255,
"acc_stderr": 0.013468201614066307,
"acc_norm": 0.8288633461047255,
"acc_norm_stderr": 0.013468201614066307
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7456647398843931,
"acc_stderr": 0.023445826276545543,
"acc_norm": 0.7456647398843931,
"acc_norm_stderr": 0.023445826276545543
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.44581005586592176,
"acc_stderr": 0.016623998513333103,
"acc_norm": 0.44581005586592176,
"acc_norm_stderr": 0.016623998513333103
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7189542483660131,
"acc_stderr": 0.025738854797818737,
"acc_norm": 0.7189542483660131,
"acc_norm_stderr": 0.025738854797818737
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7106109324758842,
"acc_stderr": 0.025755865922632945,
"acc_norm": 0.7106109324758842,
"acc_norm_stderr": 0.025755865922632945
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7561728395061729,
"acc_stderr": 0.023891879541959603,
"acc_norm": 0.7561728395061729,
"acc_norm_stderr": 0.023891879541959603
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.5283687943262412,
"acc_stderr": 0.02977945095730305,
"acc_norm": 0.5283687943262412,
"acc_norm_stderr": 0.02977945095730305
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4654498044328553,
"acc_stderr": 0.012739711554045699,
"acc_norm": 0.4654498044328553,
"acc_norm_stderr": 0.012739711554045699
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6875,
"acc_stderr": 0.02815637344037142,
"acc_norm": 0.6875,
"acc_norm_stderr": 0.02815637344037142
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.684640522875817,
"acc_stderr": 0.01879808628488689,
"acc_norm": 0.684640522875817,
"acc_norm_stderr": 0.01879808628488689
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6909090909090909,
"acc_stderr": 0.044262946482000985,
"acc_norm": 0.6909090909090909,
"acc_norm_stderr": 0.044262946482000985
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7387755102040816,
"acc_stderr": 0.028123429335142783,
"acc_norm": 0.7387755102040816,
"acc_norm_stderr": 0.028123429335142783
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8407960199004975,
"acc_stderr": 0.02587064676616914,
"acc_norm": 0.8407960199004975,
"acc_norm_stderr": 0.02587064676616914
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.85,
"acc_stderr": 0.0358870281282637,
"acc_norm": 0.85,
"acc_norm_stderr": 0.0358870281282637
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5602409638554217,
"acc_stderr": 0.03864139923699122,
"acc_norm": 0.5602409638554217,
"acc_norm_stderr": 0.03864139923699122
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8421052631578947,
"acc_stderr": 0.027966785859160893,
"acc_norm": 0.8421052631578947,
"acc_norm_stderr": 0.027966785859160893
},
"harness|truthfulqa:mc|0": {
"mc1": 0.48225214198286415,
"mc1_stderr": 0.017492470843075363,
"mc2": 0.6421417472476668,
"mc2_stderr": 0.015159369575596757
},
"harness|winogrande|5": {
"acc": 0.8200473559589582,
"acc_stderr": 0.01079646868806868
},
"harness|gsm8k|5": {
"acc": 0.7065959059893859,
"acc_stderr": 0.01254183081546149
}
}
```
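The same aggregated numbers are also exposed through the "results" configuration, whose "latest" split always points at the most recent run. A minimal sketch for reading them programmatically (the exact field layout of the results table may vary across harness versions):

```python
from datasets import load_dataset

# Load the aggregated results for the latest run of this model.
results = load_dataset(
    "open-llm-leaderboard/details_argilla__distilabeled-Marcoro14-7B-slerp-full",
    "results",
    split="latest",
)
print(results[0])  # one row of aggregated metrics, mirroring the JSON above
```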
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | open-llm-leaderboard/details_argilla__distilabeled-Marcoro14-7B-slerp-full | [
"region:us"
] | 2024-01-14T18:09:48+00:00 | {"pretty_name": "Evaluation run of argilla/distilabeled-Marcoro14-7B-slerp-full", "dataset_summary": "Dataset automatically created during the evaluation run of model [argilla/distilabeled-Marcoro14-7B-slerp-full](https://huggingface.co/argilla/distilabeled-Marcoro14-7B-slerp-full) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_argilla__distilabeled-Marcoro14-7B-slerp-full\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-14T18:07:10.931926](https://huggingface.co/datasets/open-llm-leaderboard/details_argilla__distilabeled-Marcoro14-7B-slerp-full/blob/main/results_2024-01-14T18-07-10.931926.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6579983930115316,\n \"acc_stderr\": 0.031959390460197495,\n \"acc_norm\": 0.6579231845624166,\n \"acc_norm_stderr\": 0.03261951935121804,\n \"mc1\": 0.48225214198286415,\n \"mc1_stderr\": 0.017492470843075363,\n \"mc2\": 0.6421417472476668,\n \"mc2_stderr\": 0.015159369575596757\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6783276450511946,\n \"acc_stderr\": 0.013650488084494166,\n \"acc_norm\": 0.7064846416382252,\n \"acc_norm_stderr\": 0.01330725044494111\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6974706233817964,\n \"acc_stderr\": 0.004584144014654942,\n \"acc_norm\": 0.8755228042222665,\n \"acc_norm_stderr\": 0.0032945048075552286\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.32,\n \"acc_stderr\": 0.04688261722621504,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.04688261722621504\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6814814814814815,\n \"acc_stderr\": 0.04024778401977108,\n \"acc_norm\": 0.6814814814814815,\n \"acc_norm_stderr\": 0.04024778401977108\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6973684210526315,\n \"acc_stderr\": 0.037385206761196686,\n \"acc_norm\": 0.6973684210526315,\n \"acc_norm_stderr\": 0.037385206761196686\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.63,\n \"acc_stderr\": 0.048523658709391,\n \"acc_norm\": 0.63,\n \"acc_norm_stderr\": 0.048523658709391\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.7283018867924528,\n \"acc_stderr\": 0.027377706624670713,\n \"acc_norm\": 0.7283018867924528,\n \"acc_norm_stderr\": 0.027377706624670713\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7777777777777778,\n \"acc_stderr\": 0.03476590104304134,\n \"acc_norm\": 0.7777777777777778,\n \"acc_norm_stderr\": 0.03476590104304134\n },\n 
\"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.48,\n \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.48,\n \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.54,\n \"acc_stderr\": 0.05009082659620333,\n \"acc_norm\": 0.54,\n \"acc_norm_stderr\": 0.05009082659620333\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.32,\n \"acc_stderr\": 0.04688261722621504,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.04688261722621504\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6647398843930635,\n \"acc_stderr\": 0.03599586301247077,\n \"acc_norm\": 0.6647398843930635,\n \"acc_norm_stderr\": 0.03599586301247077\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.4411764705882353,\n \"acc_stderr\": 0.049406356306056595,\n \"acc_norm\": 0.4411764705882353,\n \"acc_norm_stderr\": 0.049406356306056595\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.78,\n \"acc_stderr\": 0.04163331998932262,\n \"acc_norm\": 0.78,\n \"acc_norm_stderr\": 0.04163331998932262\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5914893617021276,\n \"acc_stderr\": 0.032134180267015755,\n \"acc_norm\": 0.5914893617021276,\n \"acc_norm_stderr\": 0.032134180267015755\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5087719298245614,\n \"acc_stderr\": 0.04702880432049615,\n \"acc_norm\": 0.5087719298245614,\n \"acc_norm_stderr\": 0.04702880432049615\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5517241379310345,\n \"acc_stderr\": 0.04144311810878152,\n \"acc_norm\": 0.5517241379310345,\n \"acc_norm_stderr\": 0.04144311810878152\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.43386243386243384,\n \"acc_stderr\": 0.02552503438247489,\n \"acc_norm\": 0.43386243386243384,\n \"acc_norm_stderr\": 0.02552503438247489\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4365079365079365,\n \"acc_stderr\": 0.04435932892851466,\n \"acc_norm\": 0.4365079365079365,\n \"acc_norm_stderr\": 0.04435932892851466\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7903225806451613,\n \"acc_stderr\": 0.023157879349083525,\n \"acc_norm\": 0.7903225806451613,\n \"acc_norm_stderr\": 0.023157879349083525\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.4975369458128079,\n \"acc_stderr\": 0.03517945038691063,\n \"acc_norm\": 0.4975369458128079,\n \"acc_norm_stderr\": 0.03517945038691063\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7575757575757576,\n \"acc_stderr\": 0.03346409881055953,\n \"acc_norm\": 0.7575757575757576,\n \"acc_norm_stderr\": 0.03346409881055953\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.797979797979798,\n \"acc_stderr\": 0.02860620428922987,\n \"acc_norm\": 0.797979797979798,\n \"acc_norm_stderr\": 0.02860620428922987\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.9067357512953368,\n \"acc_stderr\": 0.02098685459328974,\n \"acc_norm\": 0.9067357512953368,\n 
\"acc_norm_stderr\": 0.02098685459328974\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6743589743589744,\n \"acc_stderr\": 0.02375966576741229,\n \"acc_norm\": 0.6743589743589744,\n \"acc_norm_stderr\": 0.02375966576741229\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.35185185185185186,\n \"acc_stderr\": 0.029116617606083008,\n \"acc_norm\": 0.35185185185185186,\n \"acc_norm_stderr\": 0.029116617606083008\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6764705882352942,\n \"acc_stderr\": 0.030388353551886786,\n \"acc_norm\": 0.6764705882352942,\n \"acc_norm_stderr\": 0.030388353551886786\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3576158940397351,\n \"acc_stderr\": 0.03913453431177258,\n \"acc_norm\": 0.3576158940397351,\n \"acc_norm_stderr\": 0.03913453431177258\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8532110091743119,\n \"acc_stderr\": 0.015173141845126243,\n \"acc_norm\": 0.8532110091743119,\n \"acc_norm_stderr\": 0.015173141845126243\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5370370370370371,\n \"acc_stderr\": 0.03400603625538271,\n \"acc_norm\": 0.5370370370370371,\n \"acc_norm_stderr\": 0.03400603625538271\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8578431372549019,\n \"acc_stderr\": 0.0245098039215686,\n \"acc_norm\": 0.8578431372549019,\n \"acc_norm_stderr\": 0.0245098039215686\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7974683544303798,\n \"acc_stderr\": 0.026160568246601443,\n \"acc_norm\": 0.7974683544303798,\n \"acc_norm_stderr\": 0.026160568246601443\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6995515695067265,\n \"acc_stderr\": 0.03076935200822914,\n \"acc_norm\": 0.6995515695067265,\n \"acc_norm_stderr\": 0.03076935200822914\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7862595419847328,\n \"acc_stderr\": 0.0359546161177469,\n \"acc_norm\": 0.7862595419847328,\n \"acc_norm_stderr\": 0.0359546161177469\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7768595041322314,\n \"acc_stderr\": 0.03800754475228732,\n \"acc_norm\": 0.7768595041322314,\n \"acc_norm_stderr\": 0.03800754475228732\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7870370370370371,\n \"acc_stderr\": 0.0395783547198098,\n \"acc_norm\": 0.7870370370370371,\n \"acc_norm_stderr\": 0.0395783547198098\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7607361963190185,\n \"acc_stderr\": 0.033519538795212696,\n \"acc_norm\": 0.7607361963190185,\n \"acc_norm_stderr\": 0.033519538795212696\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.45535714285714285,\n \"acc_stderr\": 0.047268355537191,\n \"acc_norm\": 0.45535714285714285,\n \"acc_norm_stderr\": 0.047268355537191\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7766990291262136,\n \"acc_stderr\": 0.04123553189891431,\n \"acc_norm\": 0.7766990291262136,\n \"acc_norm_stderr\": 0.04123553189891431\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8888888888888888,\n \"acc_stderr\": 0.020588491316092368,\n \"acc_norm\": 0.8888888888888888,\n \"acc_norm_stderr\": 0.020588491316092368\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.72,\n \"acc_stderr\": 0.04512608598542128,\n \"acc_norm\": 0.72,\n \"acc_norm_stderr\": 0.04512608598542128\n },\n 
\"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8288633461047255,\n \"acc_stderr\": 0.013468201614066307,\n \"acc_norm\": 0.8288633461047255,\n \"acc_norm_stderr\": 0.013468201614066307\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7456647398843931,\n \"acc_stderr\": 0.023445826276545543,\n \"acc_norm\": 0.7456647398843931,\n \"acc_norm_stderr\": 0.023445826276545543\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.44581005586592176,\n \"acc_stderr\": 0.016623998513333103,\n \"acc_norm\": 0.44581005586592176,\n \"acc_norm_stderr\": 0.016623998513333103\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7189542483660131,\n \"acc_stderr\": 0.025738854797818737,\n \"acc_norm\": 0.7189542483660131,\n \"acc_norm_stderr\": 0.025738854797818737\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7106109324758842,\n \"acc_stderr\": 0.025755865922632945,\n \"acc_norm\": 0.7106109324758842,\n \"acc_norm_stderr\": 0.025755865922632945\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7561728395061729,\n \"acc_stderr\": 0.023891879541959603,\n \"acc_norm\": 0.7561728395061729,\n \"acc_norm_stderr\": 0.023891879541959603\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.5283687943262412,\n \"acc_stderr\": 0.02977945095730305,\n \"acc_norm\": 0.5283687943262412,\n \"acc_norm_stderr\": 0.02977945095730305\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4654498044328553,\n \"acc_stderr\": 0.012739711554045699,\n \"acc_norm\": 0.4654498044328553,\n \"acc_norm_stderr\": 0.012739711554045699\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6875,\n \"acc_stderr\": 0.02815637344037142,\n \"acc_norm\": 0.6875,\n \"acc_norm_stderr\": 0.02815637344037142\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.684640522875817,\n \"acc_stderr\": 0.01879808628488689,\n \"acc_norm\": 0.684640522875817,\n \"acc_norm_stderr\": 0.01879808628488689\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6909090909090909,\n \"acc_stderr\": 0.044262946482000985,\n \"acc_norm\": 0.6909090909090909,\n \"acc_norm_stderr\": 0.044262946482000985\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7387755102040816,\n \"acc_stderr\": 0.028123429335142783,\n \"acc_norm\": 0.7387755102040816,\n \"acc_norm_stderr\": 0.028123429335142783\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8407960199004975,\n \"acc_stderr\": 0.02587064676616914,\n \"acc_norm\": 0.8407960199004975,\n \"acc_norm_stderr\": 0.02587064676616914\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.85,\n \"acc_stderr\": 0.0358870281282637,\n \"acc_norm\": 0.85,\n \"acc_norm_stderr\": 0.0358870281282637\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5602409638554217,\n \"acc_stderr\": 0.03864139923699122,\n \"acc_norm\": 0.5602409638554217,\n \"acc_norm_stderr\": 0.03864139923699122\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8421052631578947,\n \"acc_stderr\": 0.027966785859160893,\n \"acc_norm\": 0.8421052631578947,\n \"acc_norm_stderr\": 0.027966785859160893\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.48225214198286415,\n \"mc1_stderr\": 0.017492470843075363,\n \"mc2\": 0.6421417472476668,\n \"mc2_stderr\": 0.015159369575596757\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8200473559589582,\n \"acc_stderr\": 0.01079646868806868\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.7065959059893859,\n 
\"acc_stderr\": 0.01254183081546149\n }\n}\n```", "repo_url": "https://huggingface.co/argilla/distilabeled-Marcoro14-7B-slerp-full", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_14T18_07_10.931926", "path": ["**/details_harness|arc:challenge|25_2024-01-14T18-07-10.931926.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-14T18-07-10.931926.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_14T18_07_10.931926", "path": ["**/details_harness|gsm8k|5_2024-01-14T18-07-10.931926.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-14T18-07-10.931926.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_14T18_07_10.931926", "path": ["**/details_harness|hellaswag|10_2024-01-14T18-07-10.931926.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-14T18-07-10.931926.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_14T18_07_10.931926", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T18-07-10.931926.parquet", 
"**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-14T18-07-10.931926.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T18-07-10.931926.parquet", 
"**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-14T18-07-10.931926.parquet", 
"**/details_harness|hendrycksTest-philosophy|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-14T18-07-10.931926.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-14T18-07-10.931926.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_14T18_07_10.931926", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T18-07-10.931926.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T18-07-10.931926.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_14T18_07_10.931926", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-14T18-07-10.931926.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-14T18-07-10.931926.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_14T18_07_10.931926", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-14T18-07-10.931926.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-14T18-07-10.931926.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_14T18_07_10.931926", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T18-07-10.931926.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T18-07-10.931926.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_14T18_07_10.931926", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T18-07-10.931926.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T18-07-10.931926.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_14T18_07_10.931926", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-14T18-07-10.931926.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-14T18-07-10.931926.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_14T18_07_10.931926", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T18-07-10.931926.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T18-07-10.931926.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_14T18_07_10.931926", "path": 
["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T18-07-10.931926.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T18-07-10.931926.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_14T18_07_10.931926", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T18-07-10.931926.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T18-07-10.931926.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_14T18_07_10.931926", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T18-07-10.931926.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T18-07-10.931926.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_14T18_07_10.931926", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-14T18-07-10.931926.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-14T18-07-10.931926.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_14T18_07_10.931926", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-14T18-07-10.931926.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-14T18-07-10.931926.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_14T18_07_10.931926", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T18-07-10.931926.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T18-07-10.931926.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_14T18_07_10.931926", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-14T18-07-10.931926.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-14T18-07-10.931926.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_14T18_07_10.931926", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T18-07-10.931926.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T18-07-10.931926.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_14T18_07_10.931926", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T18-07-10.931926.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T18-07-10.931926.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_01_14T18_07_10.931926", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T18-07-10.931926.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T18-07-10.931926.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_14T18_07_10.931926", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-14T18-07-10.931926.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-global_facts|5_2024-01-14T18-07-10.931926.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_14T18_07_10.931926", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T18-07-10.931926.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T18-07-10.931926.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_14T18_07_10.931926", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T18-07-10.931926.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T18-07-10.931926.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_14T18_07_10.931926", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T18-07-10.931926.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T18-07-10.931926.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_14T18_07_10.931926", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T18-07-10.931926.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T18-07-10.931926.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_14T18_07_10.931926", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T18-07-10.931926.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T18-07-10.931926.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_14T18_07_10.931926", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T18-07-10.931926.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T18-07-10.931926.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_14T18_07_10.931926", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T18-07-10.931926.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T18-07-10.931926.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_14T18_07_10.931926", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T18-07-10.931926.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T18-07-10.931926.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_14T18_07_10.931926", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T18-07-10.931926.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T18-07-10.931926.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_14T18_07_10.931926", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T18-07-10.931926.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T18-07-10.931926.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_14T18_07_10.931926", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T18-07-10.931926.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T18-07-10.931926.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_14T18_07_10.931926", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T18-07-10.931926.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T18-07-10.931926.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_14T18_07_10.931926", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T18-07-10.931926.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T18-07-10.931926.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_14T18_07_10.931926", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T18-07-10.931926.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T18-07-10.931926.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_14T18_07_10.931926", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-14T18-07-10.931926.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-14T18-07-10.931926.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_14T18_07_10.931926", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T18-07-10.931926.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T18-07-10.931926.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_14T18_07_10.931926", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-14T18-07-10.931926.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-14T18-07-10.931926.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_14T18_07_10.931926", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T18-07-10.931926.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T18-07-10.931926.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_14T18_07_10.931926", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T18-07-10.931926.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T18-07-10.931926.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_14T18_07_10.931926", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T18-07-10.931926.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T18-07-10.931926.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_01_14T18_07_10.931926", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-14T18-07-10.931926.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-14T18-07-10.931926.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_14T18_07_10.931926", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-14T18-07-10.931926.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-14T18-07-10.931926.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_14T18_07_10.931926", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T18-07-10.931926.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T18-07-10.931926.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_14T18_07_10.931926", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T18-07-10.931926.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T18-07-10.931926.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_14T18_07_10.931926", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T18-07-10.931926.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T18-07-10.931926.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_14T18_07_10.931926", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T18-07-10.931926.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T18-07-10.931926.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_01_14T18_07_10.931926", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-14T18-07-10.931926.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-14T18-07-10.931926.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_14T18_07_10.931926", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-14T18-07-10.931926.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-14T18-07-10.931926.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_14T18_07_10.931926", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-14T18-07-10.931926.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-14T18-07-10.931926.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_14T18_07_10.931926", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T18-07-10.931926.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T18-07-10.931926.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_14T18_07_10.931926", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2024-01-14T18-07-10.931926.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-14T18-07-10.931926.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_14T18_07_10.931926", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T18-07-10.931926.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T18-07-10.931926.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_14T18_07_10.931926", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T18-07-10.931926.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T18-07-10.931926.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_14T18_07_10.931926", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-14T18-07-10.931926.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-14T18-07-10.931926.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_14T18_07_10.931926", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-14T18-07-10.931926.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-14T18-07-10.931926.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_14T18_07_10.931926", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-14T18-07-10.931926.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-14T18-07-10.931926.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_14T18_07_10.931926", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T18-07-10.931926.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T18-07-10.931926.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_14T18_07_10.931926", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-14T18-07-10.931926.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-14T18-07-10.931926.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_14T18_07_10.931926", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-14T18-07-10.931926.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-14T18-07-10.931926.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_14T18_07_10.931926", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-14T18-07-10.931926.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-14T18-07-10.931926.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_14T18_07_10.931926", "path": ["**/details_harness|winogrande|5_2024-01-14T18-07-10.931926.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-14T18-07-10.931926.parquet"]}]}, {"config_name": "results", "data_files": [{"split": 
"2024_01_14T18_07_10.931926", "path": ["results_2024-01-14T18-07-10.931926.parquet"]}, {"split": "latest", "path": ["results_2024-01-14T18-07-10.931926.parquet"]}]}]} | 2024-01-14T18:10:29+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of argilla/distilabeled-Marcoro14-7B-slerp-full
Dataset automatically created during the evaluation run of model argilla/distilabeled-Marcoro14-7B-slerp-full on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
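Below is a minimal sketch, assuming this run's details repository follows the standard `open-llm-leaderboard/details_<org>__<model>` naming convention used by these evaluation datasets:

```python
from datasets import load_dataset

# Load the 5-shot Winogrande details for the latest run of this model
# (repository name assumed from the standard naming convention)
data = load_dataset("open-llm-leaderboard/details_argilla__distilabeled-Marcoro14-7B-slerp-full",
	"harness_winogrande_5",
	split="train")
```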
## Latest results
These are the latest results from run 2024-01-14T18:07:10.931926 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]

BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Evaluation run of argilla/distilabeled-Marcoro14-7B-slerp-full\n\n\n\nDataset automatically created during the evaluation run of model argilla/distilabeled-Marcoro14-7B-slerp-full on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-01-14T18:07:10.931926(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of argilla/distilabeled-Marcoro14-7B-slerp-full\n\n\n\nDataset automatically created during the evaluation run of model argilla/distilabeled-Marcoro14-7B-slerp-full on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-01-14T18:07:10.931926(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] |
e7d09ab28922fc32dde9eec300c655ec5a5140da |
# Dataset Card for Evaluation run of kz919/mistral-7b-dpo-open-orca-flan-50k-synthetic-5-models
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [kz919/mistral-7b-dpo-open-orca-flan-50k-synthetic-5-models](https://huggingface.co/kz919/mistral-7b-dpo-open-orca-flan-50k-synthetic-5-models) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_kz919__mistral-7b-dpo-open-orca-flan-50k-synthetic-5-models",
"harness_winogrande_5",
split="train")
```
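To pull the aggregated metrics instead of the per-task details, the same call works against the "results" configuration; a sketch based on the config list in this repo's metadata, which exposes a "latest" split alias:

```python
from datasets import load_dataset

# Aggregated results for the most recent evaluation run of this model
results = load_dataset("open-llm-leaderboard/details_kz919__mistral-7b-dpo-open-orca-flan-50k-synthetic-5-models",
	"results",
	split="latest")
```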
## Latest results
These are the [latest results from run 2024-01-14T18:15:50.698529](https://huggingface.co/datasets/open-llm-leaderboard/details_kz919__mistral-7b-dpo-open-orca-flan-50k-synthetic-5-models/blob/main/results_2024-01-14T18-15-50.698529.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.26648871501929594,
"acc_stderr": 0.03093030883128489,
"acc_norm": 0.2677809133729311,
"acc_norm_stderr": 0.03175527446298885,
"mc1": 0.2521419828641371,
"mc1_stderr": 0.015201522246299953,
"mc2": 0.4880571743853537,
"mc2_stderr": 0.0172850771661607
},
"harness|arc:challenge|25": {
"acc": 0.20819112627986347,
"acc_stderr": 0.011864866118448064,
"acc_norm": 0.2551194539249147,
"acc_norm_stderr": 0.012739038695202105
},
"harness|hellaswag|10": {
"acc": 0.25692093208524197,
"acc_stderr": 0.004360424536145122,
"acc_norm": 0.2552280422226648,
"acc_norm_stderr": 0.004350982826580604
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.21,
"acc_stderr": 0.040936018074033256,
"acc_norm": 0.21,
"acc_norm_stderr": 0.040936018074033256
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.22962962962962963,
"acc_stderr": 0.03633384414073461,
"acc_norm": 0.22962962962962963,
"acc_norm_stderr": 0.03633384414073461
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.34868421052631576,
"acc_stderr": 0.03878139888797611,
"acc_norm": 0.34868421052631576,
"acc_norm_stderr": 0.03878139888797611
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.2,
"acc_stderr": 0.04020151261036844,
"acc_norm": 0.2,
"acc_norm_stderr": 0.04020151261036844
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.2981132075471698,
"acc_stderr": 0.02815283794249386,
"acc_norm": 0.2981132075471698,
"acc_norm_stderr": 0.02815283794249386
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.2569444444444444,
"acc_stderr": 0.03653946969442099,
"acc_norm": 0.2569444444444444,
"acc_norm_stderr": 0.03653946969442099
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.38,
"acc_stderr": 0.04878317312145632,
"acc_norm": 0.38,
"acc_norm_stderr": 0.04878317312145632
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.32,
"acc_stderr": 0.04688261722621504,
"acc_norm": 0.32,
"acc_norm_stderr": 0.04688261722621504
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.32947976878612717,
"acc_stderr": 0.03583901754736411,
"acc_norm": 0.32947976878612717,
"acc_norm_stderr": 0.03583901754736411
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.37254901960784315,
"acc_stderr": 0.04810840148082633,
"acc_norm": 0.37254901960784315,
"acc_norm_stderr": 0.04810840148082633
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.18,
"acc_stderr": 0.03861229196653694,
"acc_norm": 0.18,
"acc_norm_stderr": 0.03861229196653694
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.23829787234042554,
"acc_stderr": 0.027851252973889774,
"acc_norm": 0.23829787234042554,
"acc_norm_stderr": 0.027851252973889774
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.24561403508771928,
"acc_stderr": 0.04049339297748141,
"acc_norm": 0.24561403508771928,
"acc_norm_stderr": 0.04049339297748141
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.2689655172413793,
"acc_stderr": 0.036951833116502325,
"acc_norm": 0.2689655172413793,
"acc_norm_stderr": 0.036951833116502325
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.2619047619047619,
"acc_stderr": 0.022644212615525218,
"acc_norm": 0.2619047619047619,
"acc_norm_stderr": 0.022644212615525218
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.2857142857142857,
"acc_stderr": 0.04040610178208841,
"acc_norm": 0.2857142857142857,
"acc_norm_stderr": 0.04040610178208841
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.18,
"acc_stderr": 0.03861229196653694,
"acc_norm": 0.18,
"acc_norm_stderr": 0.03861229196653694
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.31290322580645163,
"acc_stderr": 0.026377567028645854,
"acc_norm": 0.31290322580645163,
"acc_norm_stderr": 0.026377567028645854
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.270935960591133,
"acc_stderr": 0.031270907132976984,
"acc_norm": 0.270935960591133,
"acc_norm_stderr": 0.031270907132976984
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.17,
"acc_stderr": 0.0377525168068637,
"acc_norm": 0.17,
"acc_norm_stderr": 0.0377525168068637
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.2545454545454545,
"acc_stderr": 0.03401506715249039,
"acc_norm": 0.2545454545454545,
"acc_norm_stderr": 0.03401506715249039
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.35353535353535354,
"acc_stderr": 0.03406086723547153,
"acc_norm": 0.35353535353535354,
"acc_norm_stderr": 0.03406086723547153
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.35233160621761656,
"acc_stderr": 0.03447478286414359,
"acc_norm": 0.35233160621761656,
"acc_norm_stderr": 0.03447478286414359
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.3435897435897436,
"acc_stderr": 0.02407869658063547,
"acc_norm": 0.3435897435897436,
"acc_norm_stderr": 0.02407869658063547
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.26296296296296295,
"acc_stderr": 0.026842057873833706,
"acc_norm": 0.26296296296296295,
"acc_norm_stderr": 0.026842057873833706
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.35294117647058826,
"acc_stderr": 0.031041941304059285,
"acc_norm": 0.35294117647058826,
"acc_norm_stderr": 0.031041941304059285
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.32450331125827814,
"acc_stderr": 0.03822746937658754,
"acc_norm": 0.32450331125827814,
"acc_norm_stderr": 0.03822746937658754
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.3559633027522936,
"acc_stderr": 0.020528559278244218,
"acc_norm": 0.3559633027522936,
"acc_norm_stderr": 0.020528559278244218
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.4444444444444444,
"acc_stderr": 0.03388857118502325,
"acc_norm": 0.4444444444444444,
"acc_norm_stderr": 0.03388857118502325
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.25,
"acc_stderr": 0.03039153369274154,
"acc_norm": 0.25,
"acc_norm_stderr": 0.03039153369274154
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.20675105485232068,
"acc_stderr": 0.026361651668389104,
"acc_norm": 0.20675105485232068,
"acc_norm_stderr": 0.026361651668389104
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.12556053811659193,
"acc_stderr": 0.02223898546932376,
"acc_norm": 0.12556053811659193,
"acc_norm_stderr": 0.02223898546932376
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.2824427480916031,
"acc_stderr": 0.03948406125768361,
"acc_norm": 0.2824427480916031,
"acc_norm_stderr": 0.03948406125768361
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.14049586776859505,
"acc_stderr": 0.03172233426002161,
"acc_norm": 0.14049586776859505,
"acc_norm_stderr": 0.03172233426002161
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.2222222222222222,
"acc_stderr": 0.040191074725573483,
"acc_norm": 0.2222222222222222,
"acc_norm_stderr": 0.040191074725573483
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.2392638036809816,
"acc_stderr": 0.033519538795212696,
"acc_norm": 0.2392638036809816,
"acc_norm_stderr": 0.033519538795212696
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.15178571428571427,
"acc_stderr": 0.03405702838185694,
"acc_norm": 0.15178571428571427,
"acc_norm_stderr": 0.03405702838185694
},
"harness|hendrycksTest-management|5": {
"acc": 0.3786407766990291,
"acc_stderr": 0.04802694698258972,
"acc_norm": 0.3786407766990291,
"acc_norm_stderr": 0.04802694698258972
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.19230769230769232,
"acc_stderr": 0.025819233256483706,
"acc_norm": 0.19230769230769232,
"acc_norm_stderr": 0.025819233256483706
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.26,
"acc_stderr": 0.04408440022768078,
"acc_norm": 0.26,
"acc_norm_stderr": 0.04408440022768078
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.20561941251596424,
"acc_stderr": 0.014452500456785825,
"acc_norm": 0.20561941251596424,
"acc_norm_stderr": 0.014452500456785825
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.21965317919075145,
"acc_stderr": 0.022289638852617904,
"acc_norm": 0.21965317919075145,
"acc_norm_stderr": 0.022289638852617904
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.27262569832402234,
"acc_stderr": 0.014893391735249588,
"acc_norm": 0.27262569832402234,
"acc_norm_stderr": 0.014893391735249588
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.3006535947712418,
"acc_stderr": 0.02625605383571896,
"acc_norm": 0.3006535947712418,
"acc_norm_stderr": 0.02625605383571896
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.24758842443729903,
"acc_stderr": 0.024513879973621967,
"acc_norm": 0.24758842443729903,
"acc_norm_stderr": 0.024513879973621967
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.22530864197530864,
"acc_stderr": 0.023246202647819746,
"acc_norm": 0.22530864197530864,
"acc_norm_stderr": 0.023246202647819746
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.25177304964539005,
"acc_stderr": 0.0258921511567094,
"acc_norm": 0.25177304964539005,
"acc_norm_stderr": 0.0258921511567094
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.23859191655801826,
"acc_stderr": 0.010885929742002221,
"acc_norm": 0.23859191655801826,
"acc_norm_stderr": 0.010885929742002221
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.4375,
"acc_stderr": 0.030134614954403924,
"acc_norm": 0.4375,
"acc_norm_stderr": 0.030134614954403924
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.21405228758169934,
"acc_stderr": 0.01659342966232903,
"acc_norm": 0.21405228758169934,
"acc_norm_stderr": 0.01659342966232903
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.24545454545454545,
"acc_stderr": 0.041220665028782834,
"acc_norm": 0.24545454545454545,
"acc_norm_stderr": 0.041220665028782834
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.39591836734693875,
"acc_stderr": 0.03130802899065686,
"acc_norm": 0.39591836734693875,
"acc_norm_stderr": 0.03130802899065686
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.263681592039801,
"acc_stderr": 0.03115715086935556,
"acc_norm": 0.263681592039801,
"acc_norm_stderr": 0.03115715086935556
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.22,
"acc_stderr": 0.041633319989322674,
"acc_norm": 0.22,
"acc_norm_stderr": 0.041633319989322674
},
"harness|hendrycksTest-virology|5": {
"acc": 0.21084337349397592,
"acc_stderr": 0.03175554786629921,
"acc_norm": 0.21084337349397592,
"acc_norm_stderr": 0.03175554786629921
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.17543859649122806,
"acc_stderr": 0.029170885500727654,
"acc_norm": 0.17543859649122806,
"acc_norm_stderr": 0.029170885500727654
},
"harness|truthfulqa:mc|0": {
"mc1": 0.2521419828641371,
"mc1_stderr": 0.015201522246299953,
"mc2": 0.4880571743853537,
"mc2_stderr": 0.0172850771661607
},
"harness|winogrande|5": {
"acc": 0.5019731649565904,
"acc_stderr": 0.014052376259225636
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | open-llm-leaderboard/details_kz919__mistral-7b-dpo-open-orca-flan-50k-synthetic-5-models | [
"region:us"
] | 2024-01-14T18:18:11+00:00 | {"pretty_name": "Evaluation run of kz919/mistral-7b-dpo-open-orca-flan-50k-synthetic-5-models", "dataset_summary": "Dataset automatically created during the evaluation run of model [kz919/mistral-7b-dpo-open-orca-flan-50k-synthetic-5-models](https://huggingface.co/kz919/mistral-7b-dpo-open-orca-flan-50k-synthetic-5-models) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_kz919__mistral-7b-dpo-open-orca-flan-50k-synthetic-5-models\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-14T18:15:50.698529](https://huggingface.co/datasets/open-llm-leaderboard/details_kz919__mistral-7b-dpo-open-orca-flan-50k-synthetic-5-models/blob/main/results_2024-01-14T18-15-50.698529.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.26648871501929594,\n \"acc_stderr\": 0.03093030883128489,\n \"acc_norm\": 0.2677809133729311,\n \"acc_norm_stderr\": 0.03175527446298885,\n \"mc1\": 0.2521419828641371,\n \"mc1_stderr\": 0.015201522246299953,\n \"mc2\": 0.4880571743853537,\n \"mc2_stderr\": 0.0172850771661607\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.20819112627986347,\n \"acc_stderr\": 0.011864866118448064,\n \"acc_norm\": 0.2551194539249147,\n \"acc_norm_stderr\": 0.012739038695202105\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.25692093208524197,\n \"acc_stderr\": 0.004360424536145122,\n \"acc_norm\": 0.2552280422226648,\n \"acc_norm_stderr\": 0.004350982826580604\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.21,\n \"acc_stderr\": 0.040936018074033256,\n \"acc_norm\": 0.21,\n \"acc_norm_stderr\": 0.040936018074033256\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.22962962962962963,\n \"acc_stderr\": 0.03633384414073461,\n \"acc_norm\": 0.22962962962962963,\n \"acc_norm_stderr\": 0.03633384414073461\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.34868421052631576,\n \"acc_stderr\": 0.03878139888797611,\n \"acc_norm\": 0.34868421052631576,\n \"acc_norm_stderr\": 0.03878139888797611\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.2,\n \"acc_stderr\": 0.04020151261036844,\n \"acc_norm\": 0.2,\n \"acc_norm_stderr\": 0.04020151261036844\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.2981132075471698,\n \"acc_stderr\": 0.02815283794249386,\n \"acc_norm\": 0.2981132075471698,\n \"acc_norm_stderr\": 0.02815283794249386\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.2569444444444444,\n \"acc_stderr\": 0.03653946969442099,\n 
\"acc_norm\": 0.2569444444444444,\n \"acc_norm_stderr\": 0.03653946969442099\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.38,\n \"acc_stderr\": 0.04878317312145632,\n \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.04878317312145632\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.32,\n \"acc_stderr\": 0.04688261722621504,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.04688261722621504\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.32947976878612717,\n \"acc_stderr\": 0.03583901754736411,\n \"acc_norm\": 0.32947976878612717,\n \"acc_norm_stderr\": 0.03583901754736411\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.37254901960784315,\n \"acc_stderr\": 0.04810840148082633,\n \"acc_norm\": 0.37254901960784315,\n \"acc_norm_stderr\": 0.04810840148082633\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.18,\n \"acc_stderr\": 0.03861229196653694,\n \"acc_norm\": 0.18,\n \"acc_norm_stderr\": 0.03861229196653694\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.23829787234042554,\n \"acc_stderr\": 0.027851252973889774,\n \"acc_norm\": 0.23829787234042554,\n \"acc_norm_stderr\": 0.027851252973889774\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.24561403508771928,\n \"acc_stderr\": 0.04049339297748141,\n \"acc_norm\": 0.24561403508771928,\n \"acc_norm_stderr\": 0.04049339297748141\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.2689655172413793,\n \"acc_stderr\": 0.036951833116502325,\n \"acc_norm\": 0.2689655172413793,\n \"acc_norm_stderr\": 0.036951833116502325\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.2619047619047619,\n \"acc_stderr\": 0.022644212615525218,\n \"acc_norm\": 0.2619047619047619,\n \"acc_norm_stderr\": 0.022644212615525218\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.2857142857142857,\n \"acc_stderr\": 0.04040610178208841,\n \"acc_norm\": 0.2857142857142857,\n \"acc_norm_stderr\": 0.04040610178208841\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.18,\n \"acc_stderr\": 0.03861229196653694,\n \"acc_norm\": 0.18,\n \"acc_norm_stderr\": 0.03861229196653694\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.31290322580645163,\n \"acc_stderr\": 0.026377567028645854,\n \"acc_norm\": 0.31290322580645163,\n \"acc_norm_stderr\": 0.026377567028645854\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.270935960591133,\n \"acc_stderr\": 0.031270907132976984,\n \"acc_norm\": 0.270935960591133,\n \"acc_norm_stderr\": 0.031270907132976984\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.17,\n \"acc_stderr\": 0.0377525168068637,\n \"acc_norm\": 0.17,\n \"acc_norm_stderr\": 0.0377525168068637\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.2545454545454545,\n \"acc_stderr\": 0.03401506715249039,\n \"acc_norm\": 0.2545454545454545,\n \"acc_norm_stderr\": 0.03401506715249039\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.35353535353535354,\n \"acc_stderr\": 0.03406086723547153,\n \"acc_norm\": 0.35353535353535354,\n \"acc_norm_stderr\": 0.03406086723547153\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.35233160621761656,\n 
\"acc_stderr\": 0.03447478286414359,\n \"acc_norm\": 0.35233160621761656,\n \"acc_norm_stderr\": 0.03447478286414359\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.3435897435897436,\n \"acc_stderr\": 0.02407869658063547,\n \"acc_norm\": 0.3435897435897436,\n \"acc_norm_stderr\": 0.02407869658063547\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.26296296296296295,\n \"acc_stderr\": 0.026842057873833706,\n \"acc_norm\": 0.26296296296296295,\n \"acc_norm_stderr\": 0.026842057873833706\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.35294117647058826,\n \"acc_stderr\": 0.031041941304059285,\n \"acc_norm\": 0.35294117647058826,\n \"acc_norm_stderr\": 0.031041941304059285\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.32450331125827814,\n \"acc_stderr\": 0.03822746937658754,\n \"acc_norm\": 0.32450331125827814,\n \"acc_norm_stderr\": 0.03822746937658754\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.3559633027522936,\n \"acc_stderr\": 0.020528559278244218,\n \"acc_norm\": 0.3559633027522936,\n \"acc_norm_stderr\": 0.020528559278244218\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.4444444444444444,\n \"acc_stderr\": 0.03388857118502325,\n \"acc_norm\": 0.4444444444444444,\n \"acc_norm_stderr\": 0.03388857118502325\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.25,\n \"acc_stderr\": 0.03039153369274154,\n \"acc_norm\": 0.25,\n \"acc_norm_stderr\": 0.03039153369274154\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.20675105485232068,\n \"acc_stderr\": 0.026361651668389104,\n \"acc_norm\": 0.20675105485232068,\n \"acc_norm_stderr\": 0.026361651668389104\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.12556053811659193,\n \"acc_stderr\": 0.02223898546932376,\n \"acc_norm\": 0.12556053811659193,\n \"acc_norm_stderr\": 0.02223898546932376\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.2824427480916031,\n \"acc_stderr\": 0.03948406125768361,\n \"acc_norm\": 0.2824427480916031,\n \"acc_norm_stderr\": 0.03948406125768361\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.14049586776859505,\n \"acc_stderr\": 0.03172233426002161,\n \"acc_norm\": 0.14049586776859505,\n \"acc_norm_stderr\": 0.03172233426002161\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.2222222222222222,\n \"acc_stderr\": 0.040191074725573483,\n \"acc_norm\": 0.2222222222222222,\n \"acc_norm_stderr\": 0.040191074725573483\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.2392638036809816,\n \"acc_stderr\": 0.033519538795212696,\n \"acc_norm\": 0.2392638036809816,\n \"acc_norm_stderr\": 0.033519538795212696\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.15178571428571427,\n \"acc_stderr\": 0.03405702838185694,\n \"acc_norm\": 0.15178571428571427,\n \"acc_norm_stderr\": 0.03405702838185694\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.3786407766990291,\n \"acc_stderr\": 0.04802694698258972,\n \"acc_norm\": 0.3786407766990291,\n \"acc_norm_stderr\": 0.04802694698258972\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.19230769230769232,\n \"acc_stderr\": 0.025819233256483706,\n \"acc_norm\": 0.19230769230769232,\n \"acc_norm_stderr\": 0.025819233256483706\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.26,\n \"acc_stderr\": 0.04408440022768078,\n \"acc_norm\": 0.26,\n 
\"acc_norm_stderr\": 0.04408440022768078\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.20561941251596424,\n \"acc_stderr\": 0.014452500456785825,\n \"acc_norm\": 0.20561941251596424,\n \"acc_norm_stderr\": 0.014452500456785825\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.21965317919075145,\n \"acc_stderr\": 0.022289638852617904,\n \"acc_norm\": 0.21965317919075145,\n \"acc_norm_stderr\": 0.022289638852617904\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.27262569832402234,\n \"acc_stderr\": 0.014893391735249588,\n \"acc_norm\": 0.27262569832402234,\n \"acc_norm_stderr\": 0.014893391735249588\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.3006535947712418,\n \"acc_stderr\": 0.02625605383571896,\n \"acc_norm\": 0.3006535947712418,\n \"acc_norm_stderr\": 0.02625605383571896\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.24758842443729903,\n \"acc_stderr\": 0.024513879973621967,\n \"acc_norm\": 0.24758842443729903,\n \"acc_norm_stderr\": 0.024513879973621967\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.22530864197530864,\n \"acc_stderr\": 0.023246202647819746,\n \"acc_norm\": 0.22530864197530864,\n \"acc_norm_stderr\": 0.023246202647819746\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.25177304964539005,\n \"acc_stderr\": 0.0258921511567094,\n \"acc_norm\": 0.25177304964539005,\n \"acc_norm_stderr\": 0.0258921511567094\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.23859191655801826,\n \"acc_stderr\": 0.010885929742002221,\n \"acc_norm\": 0.23859191655801826,\n \"acc_norm_stderr\": 0.010885929742002221\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.4375,\n \"acc_stderr\": 0.030134614954403924,\n \"acc_norm\": 0.4375,\n \"acc_norm_stderr\": 0.030134614954403924\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.21405228758169934,\n \"acc_stderr\": 0.01659342966232903,\n \"acc_norm\": 0.21405228758169934,\n \"acc_norm_stderr\": 0.01659342966232903\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.24545454545454545,\n \"acc_stderr\": 0.041220665028782834,\n \"acc_norm\": 0.24545454545454545,\n \"acc_norm_stderr\": 0.041220665028782834\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.39591836734693875,\n \"acc_stderr\": 0.03130802899065686,\n \"acc_norm\": 0.39591836734693875,\n \"acc_norm_stderr\": 0.03130802899065686\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.263681592039801,\n \"acc_stderr\": 0.03115715086935556,\n \"acc_norm\": 0.263681592039801,\n \"acc_norm_stderr\": 0.03115715086935556\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.22,\n \"acc_stderr\": 0.041633319989322674,\n \"acc_norm\": 0.22,\n \"acc_norm_stderr\": 0.041633319989322674\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.21084337349397592,\n \"acc_stderr\": 0.03175554786629921,\n \"acc_norm\": 0.21084337349397592,\n \"acc_norm_stderr\": 0.03175554786629921\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.17543859649122806,\n \"acc_stderr\": 0.029170885500727654,\n \"acc_norm\": 0.17543859649122806,\n \"acc_norm_stderr\": 0.029170885500727654\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.2521419828641371,\n \"mc1_stderr\": 0.015201522246299953,\n \"mc2\": 0.4880571743853537,\n \"mc2_stderr\": 0.0172850771661607\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.5019731649565904,\n \"acc_stderr\": 
0.014052376259225636\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n }\n}\n```", "repo_url": "https://huggingface.co/kz919/mistral-7b-dpo-open-orca-flan-50k-synthetic-5-models", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_14T18_15_50.698529", "path": ["**/details_harness|arc:challenge|25_2024-01-14T18-15-50.698529.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-14T18-15-50.698529.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_14T18_15_50.698529", "path": ["**/details_harness|gsm8k|5_2024-01-14T18-15-50.698529.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-14T18-15-50.698529.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_14T18_15_50.698529", "path": ["**/details_harness|hellaswag|10_2024-01-14T18-15-50.698529.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-14T18-15-50.698529.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_14T18_15_50.698529", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T18-15-50.698529.parquet", 
"**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-14T18-15-50.698529.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-14T18-15-50.698529.parquet", 
"**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T18-15-50.698529.parquet", 
"**/details_harness|hendrycksTest-nutrition|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-14T18-15-50.698529.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-14T18-15-50.698529.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_14T18_15_50.698529", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T18-15-50.698529.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T18-15-50.698529.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_14T18_15_50.698529", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-14T18-15-50.698529.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-14T18-15-50.698529.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_14T18_15_50.698529", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-14T18-15-50.698529.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-14T18-15-50.698529.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_14T18_15_50.698529", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T18-15-50.698529.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T18-15-50.698529.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_14T18_15_50.698529", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T18-15-50.698529.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T18-15-50.698529.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_14T18_15_50.698529", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-14T18-15-50.698529.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-14T18-15-50.698529.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_14T18_15_50.698529", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T18-15-50.698529.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T18-15-50.698529.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", 
"data_files": [{"split": "2024_01_14T18_15_50.698529", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T18-15-50.698529.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T18-15-50.698529.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_14T18_15_50.698529", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T18-15-50.698529.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T18-15-50.698529.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_14T18_15_50.698529", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T18-15-50.698529.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T18-15-50.698529.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_14T18_15_50.698529", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-14T18-15-50.698529.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-14T18-15-50.698529.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_14T18_15_50.698529", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-14T18-15-50.698529.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-14T18-15-50.698529.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_14T18_15_50.698529", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T18-15-50.698529.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T18-15-50.698529.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_14T18_15_50.698529", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-14T18-15-50.698529.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-14T18-15-50.698529.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_14T18_15_50.698529", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T18-15-50.698529.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T18-15-50.698529.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_14T18_15_50.698529", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T18-15-50.698529.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T18-15-50.698529.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_01_14T18_15_50.698529", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T18-15-50.698529.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T18-15-50.698529.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_14T18_15_50.698529", "path": 
["**/details_harness|hendrycksTest-global_facts|5_2024-01-14T18-15-50.698529.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-14T18-15-50.698529.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_14T18_15_50.698529", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T18-15-50.698529.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T18-15-50.698529.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_14T18_15_50.698529", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T18-15-50.698529.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T18-15-50.698529.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_14T18_15_50.698529", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T18-15-50.698529.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T18-15-50.698529.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_14T18_15_50.698529", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T18-15-50.698529.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T18-15-50.698529.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_14T18_15_50.698529", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T18-15-50.698529.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T18-15-50.698529.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_14T18_15_50.698529", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T18-15-50.698529.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T18-15-50.698529.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_14T18_15_50.698529", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T18-15-50.698529.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T18-15-50.698529.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_14T18_15_50.698529", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T18-15-50.698529.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T18-15-50.698529.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_14T18_15_50.698529", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T18-15-50.698529.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T18-15-50.698529.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_14T18_15_50.698529", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T18-15-50.698529.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T18-15-50.698529.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_14T18_15_50.698529", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T18-15-50.698529.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T18-15-50.698529.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_14T18_15_50.698529", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T18-15-50.698529.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T18-15-50.698529.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_14T18_15_50.698529", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T18-15-50.698529.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T18-15-50.698529.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_14T18_15_50.698529", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T18-15-50.698529.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T18-15-50.698529.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_14T18_15_50.698529", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-14T18-15-50.698529.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-14T18-15-50.698529.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_14T18_15_50.698529", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T18-15-50.698529.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T18-15-50.698529.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_14T18_15_50.698529", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-14T18-15-50.698529.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-14T18-15-50.698529.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_14T18_15_50.698529", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T18-15-50.698529.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T18-15-50.698529.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_14T18_15_50.698529", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T18-15-50.698529.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T18-15-50.698529.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": 
"2024_01_14T18_15_50.698529", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T18-15-50.698529.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T18-15-50.698529.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_01_14T18_15_50.698529", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-14T18-15-50.698529.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-14T18-15-50.698529.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_14T18_15_50.698529", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-14T18-15-50.698529.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-14T18-15-50.698529.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_14T18_15_50.698529", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T18-15-50.698529.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T18-15-50.698529.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_14T18_15_50.698529", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T18-15-50.698529.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T18-15-50.698529.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_14T18_15_50.698529", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T18-15-50.698529.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T18-15-50.698529.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_14T18_15_50.698529", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T18-15-50.698529.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T18-15-50.698529.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_01_14T18_15_50.698529", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-14T18-15-50.698529.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-14T18-15-50.698529.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_14T18_15_50.698529", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-14T18-15-50.698529.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-14T18-15-50.698529.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_14T18_15_50.698529", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-14T18-15-50.698529.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-14T18-15-50.698529.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_14T18_15_50.698529", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T18-15-50.698529.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T18-15-50.698529.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_14T18_15_50.698529", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-14T18-15-50.698529.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-14T18-15-50.698529.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_14T18_15_50.698529", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T18-15-50.698529.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T18-15-50.698529.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_14T18_15_50.698529", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T18-15-50.698529.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T18-15-50.698529.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_14T18_15_50.698529", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-14T18-15-50.698529.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-14T18-15-50.698529.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_14T18_15_50.698529", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-14T18-15-50.698529.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-14T18-15-50.698529.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_14T18_15_50.698529", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-14T18-15-50.698529.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-14T18-15-50.698529.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_14T18_15_50.698529", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T18-15-50.698529.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T18-15-50.698529.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_14T18_15_50.698529", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-14T18-15-50.698529.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-14T18-15-50.698529.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_14T18_15_50.698529", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-14T18-15-50.698529.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-14T18-15-50.698529.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_14T18_15_50.698529", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-14T18-15-50.698529.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-14T18-15-50.698529.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_14T18_15_50.698529", "path": ["**/details_harness|winogrande|5_2024-01-14T18-15-50.698529.parquet"]}, {"split": "latest", "path": 
["**/details_harness|winogrande|5_2024-01-14T18-15-50.698529.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_01_14T18_15_50.698529", "path": ["results_2024-01-14T18-15-50.698529.parquet"]}, {"split": "latest", "path": ["results_2024-01-14T18-15-50.698529.parquet"]}]}]} | 2024-01-14T18:18:33+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of kz919/mistral-7b-dpo-open-orca-flan-50k-synthetic-5-models
Dataset automatically created during the evaluation run of model kz919/mistral-7b-dpo-open-orca-flan-50k-synthetic-5-models on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
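
As a minimal sketch (the repository id below is inferred from the leaderboard's `details_<org>__<model>` naming convention, not stated explicitly in this record):

```python
from datasets import load_dataset

# Repository id inferred from the leaderboard naming convention; adjust if it differs
data = load_dataset("open-llm-leaderboard/details_kz919__mistral-7b-dpo-open-orca-flan-50k-synthetic-5-models",
	"harness_winogrande_5",
	split="train")
```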
## Latest results
These are the latest results from run 2024-01-14T18:15:50.698529 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and the "latest" split for each eval):
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Evaluation run of kz919/mistral-7b-dpo-open-orca-flan-50k-synthetic-5-models\n\n\n\nDataset automatically created during the evaluation run of model kz919/mistral-7b-dpo-open-orca-flan-50k-synthetic-5-models on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-01-14T18:15:50.698529(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of kz919/mistral-7b-dpo-open-orca-flan-50k-synthetic-5-models\n\n\n\nDataset automatically created during the evaluation run of model kz919/mistral-7b-dpo-open-orca-flan-50k-synthetic-5-models on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-01-14T18:15:50.698529(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] |
ff4150f5b27919f149c49ad2fb5133178647a7a8 |
# Dataset Card for Evaluation run of CallComply/openchat-3.5-0106-11b
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [CallComply/openchat-3.5-0106-11b](https://huggingface.co/CallComply/openchat-3.5-0106-11b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset

# Load the per-example details for a single task (here: Winogrande, 5-shot)
data = load_dataset("open-llm-leaderboard/details_CallComply__openchat-3.5-0106-11b",
	"harness_winogrande_5",
	split="train")
```
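
The same call works for any of the per-task configurations in this repository. To get only the aggregated metrics, a minimal sketch (assuming the split naming used by these leaderboard datasets, where "latest" points at the most recent run) is to load the "results" configuration instead:

```python
from datasets import load_dataset

# The "results" configuration stores the aggregated metrics of the run;
# the "latest" split points at the most recent evaluation.
results = load_dataset("open-llm-leaderboard/details_CallComply__openchat-3.5-0106-11b",
	"results",
	split="latest")
print(results[0])
```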
## Latest results
These are the [latest results from run 2024-01-14T19:16:22.396289](https://huggingface.co/datasets/open-llm-leaderboard/details_CallComply__openchat-3.5-0106-11b/blob/main/results_2024-01-14T19-16-22.396289.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6221695918215556,
"acc_stderr": 0.032672062972624025,
"acc_norm": 0.6283243003334837,
"acc_norm_stderr": 0.033341783944514224,
"mc1": 0.31946144430844553,
"mc1_stderr": 0.016322644182960498,
"mc2": 0.4806689432668841,
"mc2_stderr": 0.014999748207355675
},
"harness|arc:challenge|25": {
"acc": 0.5981228668941979,
"acc_stderr": 0.014327268614578276,
"acc_norm": 0.636518771331058,
"acc_norm_stderr": 0.014056207319068285
},
"harness|hellaswag|10": {
"acc": 0.5804620593507269,
"acc_stderr": 0.00492474850063935,
"acc_norm": 0.7863971320454093,
"acc_norm_stderr": 0.004090119686697031
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.27,
"acc_stderr": 0.04461960433384739,
"acc_norm": 0.27,
"acc_norm_stderr": 0.04461960433384739
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6074074074074074,
"acc_stderr": 0.04218506215368879,
"acc_norm": 0.6074074074074074,
"acc_norm_stderr": 0.04218506215368879
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.7105263157894737,
"acc_stderr": 0.03690677986137283,
"acc_norm": 0.7105263157894737,
"acc_norm_stderr": 0.03690677986137283
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.63,
"acc_stderr": 0.048523658709391,
"acc_norm": 0.63,
"acc_norm_stderr": 0.048523658709391
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7094339622641509,
"acc_stderr": 0.027943219989337135,
"acc_norm": 0.7094339622641509,
"acc_norm_stderr": 0.027943219989337135
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.6875,
"acc_stderr": 0.038760854559127644,
"acc_norm": 0.6875,
"acc_norm_stderr": 0.038760854559127644
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.48,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.48,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.5,
"acc_stderr": 0.050251890762960605,
"acc_norm": 0.5,
"acc_norm_stderr": 0.050251890762960605
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.41,
"acc_stderr": 0.04943110704237102,
"acc_norm": 0.41,
"acc_norm_stderr": 0.04943110704237102
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6416184971098265,
"acc_stderr": 0.036563436533531585,
"acc_norm": 0.6416184971098265,
"acc_norm_stderr": 0.036563436533531585
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4019607843137255,
"acc_stderr": 0.04878608714466996,
"acc_norm": 0.4019607843137255,
"acc_norm_stderr": 0.04878608714466996
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.73,
"acc_stderr": 0.0446196043338474,
"acc_norm": 0.73,
"acc_norm_stderr": 0.0446196043338474
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5659574468085107,
"acc_stderr": 0.03240038086792747,
"acc_norm": 0.5659574468085107,
"acc_norm_stderr": 0.03240038086792747
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.42105263157894735,
"acc_stderr": 0.046446020912223177,
"acc_norm": 0.42105263157894735,
"acc_norm_stderr": 0.046446020912223177
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5862068965517241,
"acc_stderr": 0.04104269211806232,
"acc_norm": 0.5862068965517241,
"acc_norm_stderr": 0.04104269211806232
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4417989417989418,
"acc_stderr": 0.02557625706125383,
"acc_norm": 0.4417989417989418,
"acc_norm_stderr": 0.02557625706125383
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.46825396825396826,
"acc_stderr": 0.04463112720677173,
"acc_norm": 0.46825396825396826,
"acc_norm_stderr": 0.04463112720677173
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.36,
"acc_stderr": 0.04824181513244218,
"acc_norm": 0.36,
"acc_norm_stderr": 0.04824181513244218
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7645161290322581,
"acc_stderr": 0.02413763242933771,
"acc_norm": 0.7645161290322581,
"acc_norm_stderr": 0.02413763242933771
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.458128078817734,
"acc_stderr": 0.03505630140785741,
"acc_norm": 0.458128078817734,
"acc_norm_stderr": 0.03505630140785741
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.61,
"acc_stderr": 0.04902071300001974,
"acc_norm": 0.61,
"acc_norm_stderr": 0.04902071300001974
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7575757575757576,
"acc_stderr": 0.03346409881055953,
"acc_norm": 0.7575757575757576,
"acc_norm_stderr": 0.03346409881055953
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7272727272727273,
"acc_stderr": 0.03173071239071724,
"acc_norm": 0.7272727272727273,
"acc_norm_stderr": 0.03173071239071724
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8911917098445595,
"acc_stderr": 0.022473253332768776,
"acc_norm": 0.8911917098445595,
"acc_norm_stderr": 0.022473253332768776
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6666666666666666,
"acc_stderr": 0.023901157979402538,
"acc_norm": 0.6666666666666666,
"acc_norm_stderr": 0.023901157979402538
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3074074074074074,
"acc_stderr": 0.02813325257881564,
"acc_norm": 0.3074074074074074,
"acc_norm_stderr": 0.02813325257881564
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6428571428571429,
"acc_stderr": 0.031124619309328177,
"acc_norm": 0.6428571428571429,
"acc_norm_stderr": 0.031124619309328177
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.31125827814569534,
"acc_stderr": 0.03780445850526732,
"acc_norm": 0.31125827814569534,
"acc_norm_stderr": 0.03780445850526732
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8146788990825689,
"acc_stderr": 0.01665927970029582,
"acc_norm": 0.8146788990825689,
"acc_norm_stderr": 0.01665927970029582
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.4583333333333333,
"acc_stderr": 0.03398110890294636,
"acc_norm": 0.4583333333333333,
"acc_norm_stderr": 0.03398110890294636
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8088235294117647,
"acc_stderr": 0.027599174300640766,
"acc_norm": 0.8088235294117647,
"acc_norm_stderr": 0.027599174300640766
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8185654008438819,
"acc_stderr": 0.025085961144579654,
"acc_norm": 0.8185654008438819,
"acc_norm_stderr": 0.025085961144579654
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6995515695067265,
"acc_stderr": 0.030769352008229143,
"acc_norm": 0.6995515695067265,
"acc_norm_stderr": 0.030769352008229143
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7251908396946565,
"acc_stderr": 0.039153454088478354,
"acc_norm": 0.7251908396946565,
"acc_norm_stderr": 0.039153454088478354
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7851239669421488,
"acc_stderr": 0.037494924487096966,
"acc_norm": 0.7851239669421488,
"acc_norm_stderr": 0.037494924487096966
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7592592592592593,
"acc_stderr": 0.04133119440243839,
"acc_norm": 0.7592592592592593,
"acc_norm_stderr": 0.04133119440243839
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7239263803680982,
"acc_stderr": 0.035123852837050475,
"acc_norm": 0.7239263803680982,
"acc_norm_stderr": 0.035123852837050475
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.49107142857142855,
"acc_stderr": 0.04745033255489123,
"acc_norm": 0.49107142857142855,
"acc_norm_stderr": 0.04745033255489123
},
"harness|hendrycksTest-management|5": {
"acc": 0.7766990291262136,
"acc_stderr": 0.04123553189891431,
"acc_norm": 0.7766990291262136,
"acc_norm_stderr": 0.04123553189891431
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8717948717948718,
"acc_stderr": 0.02190190511507333,
"acc_norm": 0.8717948717948718,
"acc_norm_stderr": 0.02190190511507333
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8288633461047255,
"acc_stderr": 0.013468201614066304,
"acc_norm": 0.8288633461047255,
"acc_norm_stderr": 0.013468201614066304
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.6936416184971098,
"acc_stderr": 0.024818350129436596,
"acc_norm": 0.6936416184971098,
"acc_norm_stderr": 0.024818350129436596
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.26033519553072626,
"acc_stderr": 0.01467625200931947,
"acc_norm": 0.26033519553072626,
"acc_norm_stderr": 0.01467625200931947
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7124183006535948,
"acc_stderr": 0.02591780611714716,
"acc_norm": 0.7124183006535948,
"acc_norm_stderr": 0.02591780611714716
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6752411575562701,
"acc_stderr": 0.02659678228769704,
"acc_norm": 0.6752411575562701,
"acc_norm_stderr": 0.02659678228769704
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7438271604938271,
"acc_stderr": 0.024288533637726095,
"acc_norm": 0.7438271604938271,
"acc_norm_stderr": 0.024288533637726095
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4929078014184397,
"acc_stderr": 0.02982449855912901,
"acc_norm": 0.4929078014184397,
"acc_norm_stderr": 0.02982449855912901
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4589308996088657,
"acc_stderr": 0.012727084826799802,
"acc_norm": 0.4589308996088657,
"acc_norm_stderr": 0.012727084826799802
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6838235294117647,
"acc_stderr": 0.02824568739146292,
"acc_norm": 0.6838235294117647,
"acc_norm_stderr": 0.02824568739146292
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6356209150326797,
"acc_stderr": 0.019469518221573695,
"acc_norm": 0.6356209150326797,
"acc_norm_stderr": 0.019469518221573695
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6363636363636364,
"acc_stderr": 0.046075820907199756,
"acc_norm": 0.6363636363636364,
"acc_norm_stderr": 0.046075820907199756
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.6938775510204082,
"acc_stderr": 0.029504896454595957,
"acc_norm": 0.6938775510204082,
"acc_norm_stderr": 0.029504896454595957
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.7960199004975125,
"acc_stderr": 0.02849317624532607,
"acc_norm": 0.7960199004975125,
"acc_norm_stderr": 0.02849317624532607
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.82,
"acc_stderr": 0.038612291966536934,
"acc_norm": 0.82,
"acc_norm_stderr": 0.038612291966536934
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5060240963855421,
"acc_stderr": 0.03892212195333045,
"acc_norm": 0.5060240963855421,
"acc_norm_stderr": 0.03892212195333045
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8187134502923976,
"acc_stderr": 0.029547741687640038,
"acc_norm": 0.8187134502923976,
"acc_norm_stderr": 0.029547741687640038
},
"harness|truthfulqa:mc|0": {
"mc1": 0.31946144430844553,
"mc1_stderr": 0.016322644182960498,
"mc2": 0.4806689432668841,
"mc2_stderr": 0.014999748207355675
},
"harness|winogrande|5": {
"acc": 0.7805840568271507,
"acc_stderr": 0.01163126836060778
},
"harness|gsm8k|5": {
"acc": 0.34495830174374525,
"acc_stderr": 0.01309363013366622
}
}
```
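For example, a minimal sketch of pulling the per-task MMLU accuracies out of this dictionary, assuming the JSON above has been saved locally under the file name of the linked results file:

```python
import json

# The path is an assumption; point it at the downloaded results JSON above.
with open("results_2024-01-14T19-16-22.396289.json") as f:
    results = json.load(f)

# List the five weakest MMLU (hendrycksTest) subtasks by accuracy.
mmlu = {
    task: metrics["acc"]
    for task, metrics in results.items()
    if task.startswith("harness|hendrycksTest")
}
for task, acc in sorted(mmlu.items(), key=lambda kv: kv[1])[:5]:
    print(f"{task}: {acc:.3f}")
```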
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | open-llm-leaderboard/details_CallComply__openchat-3.5-0106-11b | [
"region:us"
] | 2024-01-14T19:18:41+00:00 | {"pretty_name": "Evaluation run of CallComply/openchat-3.5-0106-11b", "dataset_summary": "Dataset automatically created during the evaluation run of model [CallComply/openchat-3.5-0106-11b](https://huggingface.co/CallComply/openchat-3.5-0106-11b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_CallComply__openchat-3.5-0106-11b\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-14T19:16:22.396289](https://huggingface.co/datasets/open-llm-leaderboard/details_CallComply__openchat-3.5-0106-11b/blob/main/results_2024-01-14T19-16-22.396289.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6221695918215556,\n \"acc_stderr\": 0.032672062972624025,\n \"acc_norm\": 0.6283243003334837,\n \"acc_norm_stderr\": 0.033341783944514224,\n \"mc1\": 0.31946144430844553,\n \"mc1_stderr\": 0.016322644182960498,\n \"mc2\": 0.4806689432668841,\n \"mc2_stderr\": 0.014999748207355675\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.5981228668941979,\n \"acc_stderr\": 0.014327268614578276,\n \"acc_norm\": 0.636518771331058,\n \"acc_norm_stderr\": 0.014056207319068285\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.5804620593507269,\n \"acc_stderr\": 0.00492474850063935,\n \"acc_norm\": 0.7863971320454093,\n \"acc_norm_stderr\": 0.004090119686697031\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.27,\n \"acc_stderr\": 0.04461960433384739,\n \"acc_norm\": 0.27,\n \"acc_norm_stderr\": 0.04461960433384739\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6074074074074074,\n \"acc_stderr\": 0.04218506215368879,\n \"acc_norm\": 0.6074074074074074,\n \"acc_norm_stderr\": 0.04218506215368879\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.7105263157894737,\n \"acc_stderr\": 0.03690677986137283,\n \"acc_norm\": 0.7105263157894737,\n \"acc_norm_stderr\": 0.03690677986137283\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.63,\n \"acc_stderr\": 0.048523658709391,\n \"acc_norm\": 0.63,\n \"acc_norm_stderr\": 0.048523658709391\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.7094339622641509,\n \"acc_stderr\": 0.027943219989337135,\n \"acc_norm\": 0.7094339622641509,\n \"acc_norm_stderr\": 0.027943219989337135\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6875,\n \"acc_stderr\": 0.038760854559127644,\n \"acc_norm\": 0.6875,\n \"acc_norm_stderr\": 0.038760854559127644\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.48,\n \"acc_stderr\": 
0.050211673156867795,\n \"acc_norm\": 0.48,\n \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.050251890762960605,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.050251890762960605\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.41,\n \"acc_stderr\": 0.04943110704237102,\n \"acc_norm\": 0.41,\n \"acc_norm_stderr\": 0.04943110704237102\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6416184971098265,\n \"acc_stderr\": 0.036563436533531585,\n \"acc_norm\": 0.6416184971098265,\n \"acc_norm_stderr\": 0.036563436533531585\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.4019607843137255,\n \"acc_stderr\": 0.04878608714466996,\n \"acc_norm\": 0.4019607843137255,\n \"acc_norm_stderr\": 0.04878608714466996\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.73,\n \"acc_stderr\": 0.0446196043338474,\n \"acc_norm\": 0.73,\n \"acc_norm_stderr\": 0.0446196043338474\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5659574468085107,\n \"acc_stderr\": 0.03240038086792747,\n \"acc_norm\": 0.5659574468085107,\n \"acc_norm_stderr\": 0.03240038086792747\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.42105263157894735,\n \"acc_stderr\": 0.046446020912223177,\n \"acc_norm\": 0.42105263157894735,\n \"acc_norm_stderr\": 0.046446020912223177\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5862068965517241,\n \"acc_stderr\": 0.04104269211806232,\n \"acc_norm\": 0.5862068965517241,\n \"acc_norm_stderr\": 0.04104269211806232\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.4417989417989418,\n \"acc_stderr\": 0.02557625706125383,\n \"acc_norm\": 0.4417989417989418,\n \"acc_norm_stderr\": 0.02557625706125383\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.46825396825396826,\n \"acc_stderr\": 0.04463112720677173,\n \"acc_norm\": 0.46825396825396826,\n \"acc_norm_stderr\": 0.04463112720677173\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.36,\n \"acc_stderr\": 0.04824181513244218,\n \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.04824181513244218\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7645161290322581,\n \"acc_stderr\": 0.02413763242933771,\n \"acc_norm\": 0.7645161290322581,\n \"acc_norm_stderr\": 0.02413763242933771\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.458128078817734,\n \"acc_stderr\": 0.03505630140785741,\n \"acc_norm\": 0.458128078817734,\n \"acc_norm_stderr\": 0.03505630140785741\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.61,\n \"acc_stderr\": 0.04902071300001974,\n \"acc_norm\": 0.61,\n \"acc_norm_stderr\": 0.04902071300001974\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7575757575757576,\n \"acc_stderr\": 0.03346409881055953,\n \"acc_norm\": 0.7575757575757576,\n \"acc_norm_stderr\": 0.03346409881055953\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7272727272727273,\n \"acc_stderr\": 0.03173071239071724,\n \"acc_norm\": 0.7272727272727273,\n \"acc_norm_stderr\": 0.03173071239071724\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8911917098445595,\n \"acc_stderr\": 0.022473253332768776,\n \"acc_norm\": 0.8911917098445595,\n \"acc_norm_stderr\": 0.022473253332768776\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": 
{\n \"acc\": 0.6666666666666666,\n \"acc_stderr\": 0.023901157979402538,\n \"acc_norm\": 0.6666666666666666,\n \"acc_norm_stderr\": 0.023901157979402538\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.3074074074074074,\n \"acc_stderr\": 0.02813325257881564,\n \"acc_norm\": 0.3074074074074074,\n \"acc_norm_stderr\": 0.02813325257881564\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6428571428571429,\n \"acc_stderr\": 0.031124619309328177,\n \"acc_norm\": 0.6428571428571429,\n \"acc_norm_stderr\": 0.031124619309328177\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.31125827814569534,\n \"acc_stderr\": 0.03780445850526732,\n \"acc_norm\": 0.31125827814569534,\n \"acc_norm_stderr\": 0.03780445850526732\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8146788990825689,\n \"acc_stderr\": 0.01665927970029582,\n \"acc_norm\": 0.8146788990825689,\n \"acc_norm_stderr\": 0.01665927970029582\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.4583333333333333,\n \"acc_stderr\": 0.03398110890294636,\n \"acc_norm\": 0.4583333333333333,\n \"acc_norm_stderr\": 0.03398110890294636\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8088235294117647,\n \"acc_stderr\": 0.027599174300640766,\n \"acc_norm\": 0.8088235294117647,\n \"acc_norm_stderr\": 0.027599174300640766\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.8185654008438819,\n \"acc_stderr\": 0.025085961144579654,\n \"acc_norm\": 0.8185654008438819,\n \"acc_norm_stderr\": 0.025085961144579654\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6995515695067265,\n \"acc_stderr\": 0.030769352008229143,\n \"acc_norm\": 0.6995515695067265,\n \"acc_norm_stderr\": 0.030769352008229143\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7251908396946565,\n \"acc_stderr\": 0.039153454088478354,\n \"acc_norm\": 0.7251908396946565,\n \"acc_norm_stderr\": 0.039153454088478354\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7851239669421488,\n \"acc_stderr\": 0.037494924487096966,\n \"acc_norm\": 0.7851239669421488,\n \"acc_norm_stderr\": 0.037494924487096966\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7592592592592593,\n \"acc_stderr\": 0.04133119440243839,\n \"acc_norm\": 0.7592592592592593,\n \"acc_norm_stderr\": 0.04133119440243839\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7239263803680982,\n \"acc_stderr\": 0.035123852837050475,\n \"acc_norm\": 0.7239263803680982,\n \"acc_norm_stderr\": 0.035123852837050475\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.49107142857142855,\n \"acc_stderr\": 0.04745033255489123,\n \"acc_norm\": 0.49107142857142855,\n \"acc_norm_stderr\": 0.04745033255489123\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7766990291262136,\n \"acc_stderr\": 0.04123553189891431,\n \"acc_norm\": 0.7766990291262136,\n \"acc_norm_stderr\": 0.04123553189891431\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8717948717948718,\n \"acc_stderr\": 0.02190190511507333,\n \"acc_norm\": 0.8717948717948718,\n \"acc_norm_stderr\": 0.02190190511507333\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8288633461047255,\n \"acc_stderr\": 
0.013468201614066304,\n \"acc_norm\": 0.8288633461047255,\n \"acc_norm_stderr\": 0.013468201614066304\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.6936416184971098,\n \"acc_stderr\": 0.024818350129436596,\n \"acc_norm\": 0.6936416184971098,\n \"acc_norm_stderr\": 0.024818350129436596\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.26033519553072626,\n \"acc_stderr\": 0.01467625200931947,\n \"acc_norm\": 0.26033519553072626,\n \"acc_norm_stderr\": 0.01467625200931947\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7124183006535948,\n \"acc_stderr\": 0.02591780611714716,\n \"acc_norm\": 0.7124183006535948,\n \"acc_norm_stderr\": 0.02591780611714716\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6752411575562701,\n \"acc_stderr\": 0.02659678228769704,\n \"acc_norm\": 0.6752411575562701,\n \"acc_norm_stderr\": 0.02659678228769704\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7438271604938271,\n \"acc_stderr\": 0.024288533637726095,\n \"acc_norm\": 0.7438271604938271,\n \"acc_norm_stderr\": 0.024288533637726095\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.4929078014184397,\n \"acc_stderr\": 0.02982449855912901,\n \"acc_norm\": 0.4929078014184397,\n \"acc_norm_stderr\": 0.02982449855912901\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4589308996088657,\n \"acc_stderr\": 0.012727084826799802,\n \"acc_norm\": 0.4589308996088657,\n \"acc_norm_stderr\": 0.012727084826799802\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6838235294117647,\n \"acc_stderr\": 0.02824568739146292,\n \"acc_norm\": 0.6838235294117647,\n \"acc_norm_stderr\": 0.02824568739146292\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6356209150326797,\n \"acc_stderr\": 0.019469518221573695,\n \"acc_norm\": 0.6356209150326797,\n \"acc_norm_stderr\": 0.019469518221573695\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6363636363636364,\n \"acc_stderr\": 0.046075820907199756,\n \"acc_norm\": 0.6363636363636364,\n \"acc_norm_stderr\": 0.046075820907199756\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.6938775510204082,\n \"acc_stderr\": 0.029504896454595957,\n \"acc_norm\": 0.6938775510204082,\n \"acc_norm_stderr\": 0.029504896454595957\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.7960199004975125,\n \"acc_stderr\": 0.02849317624532607,\n \"acc_norm\": 0.7960199004975125,\n \"acc_norm_stderr\": 0.02849317624532607\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.82,\n \"acc_stderr\": 0.038612291966536934,\n \"acc_norm\": 0.82,\n \"acc_norm_stderr\": 0.038612291966536934\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5060240963855421,\n \"acc_stderr\": 0.03892212195333045,\n \"acc_norm\": 0.5060240963855421,\n \"acc_norm_stderr\": 0.03892212195333045\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8187134502923976,\n \"acc_stderr\": 0.029547741687640038,\n \"acc_norm\": 0.8187134502923976,\n \"acc_norm_stderr\": 0.029547741687640038\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.31946144430844553,\n \"mc1_stderr\": 0.016322644182960498,\n \"mc2\": 0.4806689432668841,\n \"mc2_stderr\": 0.014999748207355675\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7805840568271507,\n \"acc_stderr\": 0.01163126836060778\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.34495830174374525,\n \"acc_stderr\": 0.01309363013366622\n }\n}\n```", "repo_url": 
"https://huggingface.co/CallComply/openchat-3.5-0106-11b", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_14T19_16_22.396289", "path": ["**/details_harness|arc:challenge|25_2024-01-14T19-16-22.396289.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-14T19-16-22.396289.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_14T19_16_22.396289", "path": ["**/details_harness|gsm8k|5_2024-01-14T19-16-22.396289.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-14T19-16-22.396289.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_14T19_16_22.396289", "path": ["**/details_harness|hellaswag|10_2024-01-14T19-16-22.396289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-14T19-16-22.396289.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_14T19_16_22.396289", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T19-16-22.396289.parquet", 
"**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-14T19-16-22.396289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T19-16-22.396289.parquet", 
"**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-14T19-16-22.396289.parquet", 
"**/details_harness|hendrycksTest-prehistory|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-14T19-16-22.396289.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-14T19-16-22.396289.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_14T19_16_22.396289", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T19-16-22.396289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T19-16-22.396289.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_14T19_16_22.396289", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-14T19-16-22.396289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-14T19-16-22.396289.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_14T19_16_22.396289", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-14T19-16-22.396289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-14T19-16-22.396289.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_14T19_16_22.396289", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T19-16-22.396289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T19-16-22.396289.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_14T19_16_22.396289", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T19-16-22.396289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T19-16-22.396289.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_14T19_16_22.396289", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-14T19-16-22.396289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-14T19-16-22.396289.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_14T19_16_22.396289", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T19-16-22.396289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T19-16-22.396289.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_14T19_16_22.396289", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T19-16-22.396289.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T19-16-22.396289.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_14T19_16_22.396289", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T19-16-22.396289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T19-16-22.396289.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_14T19_16_22.396289", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T19-16-22.396289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T19-16-22.396289.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_14T19_16_22.396289", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-14T19-16-22.396289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-14T19-16-22.396289.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_14T19_16_22.396289", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-14T19-16-22.396289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-14T19-16-22.396289.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_14T19_16_22.396289", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T19-16-22.396289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T19-16-22.396289.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_14T19_16_22.396289", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-14T19-16-22.396289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-14T19-16-22.396289.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_14T19_16_22.396289", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T19-16-22.396289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T19-16-22.396289.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_14T19_16_22.396289", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T19-16-22.396289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T19-16-22.396289.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_01_14T19_16_22.396289", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T19-16-22.396289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T19-16-22.396289.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_14T19_16_22.396289", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-14T19-16-22.396289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-14T19-16-22.396289.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_14T19_16_22.396289", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T19-16-22.396289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T19-16-22.396289.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_14T19_16_22.396289", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T19-16-22.396289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T19-16-22.396289.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_14T19_16_22.396289", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T19-16-22.396289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T19-16-22.396289.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_14T19_16_22.396289", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T19-16-22.396289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T19-16-22.396289.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_14T19_16_22.396289", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T19-16-22.396289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T19-16-22.396289.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_14T19_16_22.396289", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T19-16-22.396289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T19-16-22.396289.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_14T19_16_22.396289", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T19-16-22.396289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T19-16-22.396289.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_14T19_16_22.396289", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T19-16-22.396289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T19-16-22.396289.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_14T19_16_22.396289", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T19-16-22.396289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T19-16-22.396289.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_14T19_16_22.396289", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T19-16-22.396289.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T19-16-22.396289.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_14T19_16_22.396289", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T19-16-22.396289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T19-16-22.396289.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_14T19_16_22.396289", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T19-16-22.396289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T19-16-22.396289.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_14T19_16_22.396289", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T19-16-22.396289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T19-16-22.396289.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_14T19_16_22.396289", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T19-16-22.396289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T19-16-22.396289.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_14T19_16_22.396289", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-14T19-16-22.396289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-14T19-16-22.396289.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_14T19_16_22.396289", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T19-16-22.396289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T19-16-22.396289.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_14T19_16_22.396289", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-14T19-16-22.396289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-14T19-16-22.396289.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_14T19_16_22.396289", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T19-16-22.396289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T19-16-22.396289.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_14T19_16_22.396289", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T19-16-22.396289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T19-16-22.396289.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_14T19_16_22.396289", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T19-16-22.396289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T19-16-22.396289.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_01_14T19_16_22.396289", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-14T19-16-22.396289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-14T19-16-22.396289.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_14T19_16_22.396289", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-14T19-16-22.396289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-14T19-16-22.396289.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_14T19_16_22.396289", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T19-16-22.396289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T19-16-22.396289.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_14T19_16_22.396289", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T19-16-22.396289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T19-16-22.396289.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_14T19_16_22.396289", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T19-16-22.396289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T19-16-22.396289.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_14T19_16_22.396289", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T19-16-22.396289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T19-16-22.396289.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_01_14T19_16_22.396289", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-14T19-16-22.396289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-14T19-16-22.396289.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_14T19_16_22.396289", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-14T19-16-22.396289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-14T19-16-22.396289.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_14T19_16_22.396289", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-14T19-16-22.396289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-14T19-16-22.396289.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_14T19_16_22.396289", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T19-16-22.396289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T19-16-22.396289.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_14T19_16_22.396289", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-14T19-16-22.396289.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2024-01-14T19-16-22.396289.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_14T19_16_22.396289", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T19-16-22.396289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T19-16-22.396289.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_14T19_16_22.396289", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T19-16-22.396289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T19-16-22.396289.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_14T19_16_22.396289", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-14T19-16-22.396289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-14T19-16-22.396289.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_14T19_16_22.396289", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-14T19-16-22.396289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-14T19-16-22.396289.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_14T19_16_22.396289", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-14T19-16-22.396289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-14T19-16-22.396289.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_14T19_16_22.396289", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T19-16-22.396289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T19-16-22.396289.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_14T19_16_22.396289", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-14T19-16-22.396289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-14T19-16-22.396289.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_14T19_16_22.396289", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-14T19-16-22.396289.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-14T19-16-22.396289.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_14T19_16_22.396289", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-14T19-16-22.396289.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-14T19-16-22.396289.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_14T19_16_22.396289", "path": ["**/details_harness|winogrande|5_2024-01-14T19-16-22.396289.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-14T19-16-22.396289.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_01_14T19_16_22.396289", "path": ["results_2024-01-14T19-16-22.396289.parquet"]}, {"split": "latest", "path": 
["results_2024-01-14T19-16-22.396289.parquet"]}]}]} | 2024-01-14T19:19:02+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of CallComply/openchat-3.5-0106-11b
Dataset automatically created during the evaluation run of model CallComply/openchat-3.5-0106-11b on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
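```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_CallComply__openchat-3.5-0106-11b",
	"harness_winogrande_5",
	split="train")
```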
## Latest results
These are the latest results from run 2024-01-14T19:16:22.396289 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Evaluation run of CallComply/openchat-3.5-0106-11b\n\n\n\nDataset automatically created during the evaluation run of model CallComply/openchat-3.5-0106-11b on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-01-14T19:16:22.396289(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of CallComply/openchat-3.5-0106-11b\n\n\n\nDataset automatically created during the evaluation run of model CallComply/openchat-3.5-0106-11b on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-01-14T19:16:22.396289(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] |
93c237afec08a4ef1e295f5089ed3ca0cf23376b |
# Dataset Card for Evaluation run of AA051611/A0113
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [AA051611/A0113](https://huggingface.co/AA051611/A0113) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
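# Load the per-sample details for one task configuration
# (here, the 5-shot Winogrande details).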
data = load_dataset("open-llm-leaderboard/details_AA051611__A0113",
"harness_winogrande_5",
split="train")
```
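You can also pull the aggregated scores in one call. Below is a minimal sketch that lists the available configurations and loads the `results` configuration on its `latest` split (both names are taken from this card's configuration metadata; adjust them if the repository layout changes):

```python
from datasets import get_dataset_config_names, load_dataset

# One config per evaluated task, plus the aggregated "results" config.
configs = get_dataset_config_names("open-llm-leaderboard/details_AA051611__A0113")
print(len(configs), "configurations, e.g.", configs[:3])

# "latest" always tracks the most recent evaluation run.
results = load_dataset("open-llm-leaderboard/details_AA051611__A0113",
                       "results",
                       split="latest")
print(results)
```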
## Latest results
These are the [latest results from run 2024-01-14T19:22:00.115237](https://huggingface.co/datasets/open-llm-leaderboard/details_AA051611__A0113/blob/main/results_2024-01-14T19-22-00.115237.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.7396629430618338,
"acc_stderr": 0.02895723757690259,
"acc_norm": 0.7443509721070339,
"acc_norm_stderr": 0.02950325667268791,
"mc1": 0.412484700122399,
"mc1_stderr": 0.01723329939957122,
"mc2": 0.5965256915069256,
"mc2_stderr": 0.01518941143132932
},
"harness|arc:challenge|25": {
"acc": 0.6313993174061433,
"acc_stderr": 0.014097810678042194,
"acc_norm": 0.6638225255972696,
"acc_norm_stderr": 0.013804855026205761
},
"harness|hellaswag|10": {
"acc": 0.6549492133041227,
"acc_stderr": 0.00474413282539152,
"acc_norm": 0.848635729934276,
"acc_norm_stderr": 0.0035767110656195833
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.38,
"acc_stderr": 0.048783173121456316,
"acc_norm": 0.38,
"acc_norm_stderr": 0.048783173121456316
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.7111111111111111,
"acc_stderr": 0.03915450630414251,
"acc_norm": 0.7111111111111111,
"acc_norm_stderr": 0.03915450630414251
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.8552631578947368,
"acc_stderr": 0.028631951845930387,
"acc_norm": 0.8552631578947368,
"acc_norm_stderr": 0.028631951845930387
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.76,
"acc_stderr": 0.04292346959909284,
"acc_norm": 0.76,
"acc_norm_stderr": 0.04292346959909284
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.8,
"acc_stderr": 0.02461829819586651,
"acc_norm": 0.8,
"acc_norm_stderr": 0.02461829819586651
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.8263888888888888,
"acc_stderr": 0.03167473383795718,
"acc_norm": 0.8263888888888888,
"acc_norm_stderr": 0.03167473383795718
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.52,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.52,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.6,
"acc_stderr": 0.049236596391733084,
"acc_norm": 0.6,
"acc_norm_stderr": 0.049236596391733084
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.39,
"acc_stderr": 0.04902071300001974,
"acc_norm": 0.39,
"acc_norm_stderr": 0.04902071300001974
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6878612716763006,
"acc_stderr": 0.03533133389323657,
"acc_norm": 0.6878612716763006,
"acc_norm_stderr": 0.03533133389323657
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.5,
"acc_stderr": 0.04975185951049946,
"acc_norm": 0.5,
"acc_norm_stderr": 0.04975185951049946
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.78,
"acc_stderr": 0.04163331998932261,
"acc_norm": 0.78,
"acc_norm_stderr": 0.04163331998932261
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.7659574468085106,
"acc_stderr": 0.027678452578212383,
"acc_norm": 0.7659574468085106,
"acc_norm_stderr": 0.027678452578212383
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5877192982456141,
"acc_stderr": 0.04630653203366596,
"acc_norm": 0.5877192982456141,
"acc_norm_stderr": 0.04630653203366596
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.7241379310344828,
"acc_stderr": 0.03724563619774632,
"acc_norm": 0.7241379310344828,
"acc_norm_stderr": 0.03724563619774632
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.6428571428571429,
"acc_stderr": 0.024677862841332783,
"acc_norm": 0.6428571428571429,
"acc_norm_stderr": 0.024677862841332783
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.5555555555555556,
"acc_stderr": 0.04444444444444449,
"acc_norm": 0.5555555555555556,
"acc_norm_stderr": 0.04444444444444449
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.58,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.58,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.867741935483871,
"acc_stderr": 0.019272015434846478,
"acc_norm": 0.867741935483871,
"acc_norm_stderr": 0.019272015434846478
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5812807881773399,
"acc_stderr": 0.03471192860518468,
"acc_norm": 0.5812807881773399,
"acc_norm_stderr": 0.03471192860518468
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.8,
"acc_stderr": 0.040201512610368445,
"acc_norm": 0.8,
"acc_norm_stderr": 0.040201512610368445
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.8424242424242424,
"acc_stderr": 0.028450388805284357,
"acc_norm": 0.8424242424242424,
"acc_norm_stderr": 0.028450388805284357
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.8888888888888888,
"acc_stderr": 0.022390787638216773,
"acc_norm": 0.8888888888888888,
"acc_norm_stderr": 0.022390787638216773
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9533678756476683,
"acc_stderr": 0.01521676181926258,
"acc_norm": 0.9533678756476683,
"acc_norm_stderr": 0.01521676181926258
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.7923076923076923,
"acc_stderr": 0.020567539567246784,
"acc_norm": 0.7923076923076923,
"acc_norm_stderr": 0.020567539567246784
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.4111111111111111,
"acc_stderr": 0.029999923508706682,
"acc_norm": 0.4111111111111111,
"acc_norm_stderr": 0.029999923508706682
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.8613445378151261,
"acc_stderr": 0.022448264476832583,
"acc_norm": 0.8613445378151261,
"acc_norm_stderr": 0.022448264476832583
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.41721854304635764,
"acc_stderr": 0.040261414976346104,
"acc_norm": 0.41721854304635764,
"acc_norm_stderr": 0.040261414976346104
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.9064220183486239,
"acc_stderr": 0.012486841824601963,
"acc_norm": 0.9064220183486239,
"acc_norm_stderr": 0.012486841824601963
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.6666666666666666,
"acc_stderr": 0.03214952147802749,
"acc_norm": 0.6666666666666666,
"acc_norm_stderr": 0.03214952147802749
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.9068627450980392,
"acc_stderr": 0.020397853969426994,
"acc_norm": 0.9068627450980392,
"acc_norm_stderr": 0.020397853969426994
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8987341772151899,
"acc_stderr": 0.019637720526065515,
"acc_norm": 0.8987341772151899,
"acc_norm_stderr": 0.019637720526065515
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.7847533632286996,
"acc_stderr": 0.027584066602208274,
"acc_norm": 0.7847533632286996,
"acc_norm_stderr": 0.027584066602208274
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.8854961832061069,
"acc_stderr": 0.027927473753597453,
"acc_norm": 0.8854961832061069,
"acc_norm_stderr": 0.027927473753597453
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.859504132231405,
"acc_stderr": 0.03172233426002158,
"acc_norm": 0.859504132231405,
"acc_norm_stderr": 0.03172233426002158
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.9074074074074074,
"acc_stderr": 0.02802188803860943,
"acc_norm": 0.9074074074074074,
"acc_norm_stderr": 0.02802188803860943
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.852760736196319,
"acc_stderr": 0.027839915278339653,
"acc_norm": 0.852760736196319,
"acc_norm_stderr": 0.027839915278339653
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.6160714285714286,
"acc_stderr": 0.046161430750285455,
"acc_norm": 0.6160714285714286,
"acc_norm_stderr": 0.046161430750285455
},
"harness|hendrycksTest-management|5": {
"acc": 0.8932038834951457,
"acc_stderr": 0.030581088928331356,
"acc_norm": 0.8932038834951457,
"acc_norm_stderr": 0.030581088928331356
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.9145299145299145,
"acc_stderr": 0.018315891685625852,
"acc_norm": 0.9145299145299145,
"acc_norm_stderr": 0.018315891685625852
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.87,
"acc_stderr": 0.033799766898963086,
"acc_norm": 0.87,
"acc_norm_stderr": 0.033799766898963086
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.9029374201787995,
"acc_stderr": 0.010586474712018302,
"acc_norm": 0.9029374201787995,
"acc_norm_stderr": 0.010586474712018302
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7861271676300579,
"acc_stderr": 0.022075709251757177,
"acc_norm": 0.7861271676300579,
"acc_norm_stderr": 0.022075709251757177
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.6547486033519553,
"acc_stderr": 0.015901432608930358,
"acc_norm": 0.6547486033519553,
"acc_norm_stderr": 0.015901432608930358
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.8169934640522876,
"acc_stderr": 0.022140767512880973,
"acc_norm": 0.8169934640522876,
"acc_norm_stderr": 0.022140767512880973
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.8135048231511254,
"acc_stderr": 0.0221224397724808,
"acc_norm": 0.8135048231511254,
"acc_norm_stderr": 0.0221224397724808
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.8240740740740741,
"acc_stderr": 0.02118589361522515,
"acc_norm": 0.8240740740740741,
"acc_norm_stderr": 0.02118589361522515
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.5851063829787234,
"acc_stderr": 0.0293922365846125,
"acc_norm": 0.5851063829787234,
"acc_norm_stderr": 0.0293922365846125
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.5619295958279009,
"acc_stderr": 0.012671902782567643,
"acc_norm": 0.5619295958279009,
"acc_norm_stderr": 0.012671902782567643
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.8198529411764706,
"acc_stderr": 0.023345163616544855,
"acc_norm": 0.8198529411764706,
"acc_norm_stderr": 0.023345163616544855
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.7761437908496732,
"acc_stderr": 0.016863008585416613,
"acc_norm": 0.7761437908496732,
"acc_norm_stderr": 0.016863008585416613
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.7090909090909091,
"acc_stderr": 0.04350271442923243,
"acc_norm": 0.7090909090909091,
"acc_norm_stderr": 0.04350271442923243
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.8081632653061225,
"acc_stderr": 0.025206963154225402,
"acc_norm": 0.8081632653061225,
"acc_norm_stderr": 0.025206963154225402
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.9054726368159204,
"acc_stderr": 0.020687186951534094,
"acc_norm": 0.9054726368159204,
"acc_norm_stderr": 0.020687186951534094
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.94,
"acc_stderr": 0.023868325657594173,
"acc_norm": 0.94,
"acc_norm_stderr": 0.023868325657594173
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5662650602409639,
"acc_stderr": 0.03858158940685515,
"acc_norm": 0.5662650602409639,
"acc_norm_stderr": 0.03858158940685515
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8888888888888888,
"acc_stderr": 0.024103384202072864,
"acc_norm": 0.8888888888888888,
"acc_norm_stderr": 0.024103384202072864
},
"harness|truthfulqa:mc|0": {
"mc1": 0.412484700122399,
"mc1_stderr": 0.01723329939957122,
"mc2": 0.5965256915069256,
"mc2_stderr": 0.01518941143132932
},
"harness|winogrande|5": {
"acc": 0.8200473559589582,
"acc_stderr": 0.01079646868806868
},
"harness|gsm8k|5": {
"acc": 0.6087945413191812,
"acc_stderr": 0.0134425024027943
}
}
```
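As a quick sanity check, the aggregate numbers above can be recomputed from this JSON directly. A minimal sketch, assuming the dictionary shown above has been saved locally as `results.json` (the filename is illustrative, not part of the repository):

```python
import json

# Parse the per-task results dictionary shown above (local path assumed).
with open("results.json") as f:
    results = json.load(f)

# Mean normalized accuracy over the MMLU (hendrycksTest) subtasks only;
# non-MMLU entries such as "all", truthfulqa, and winogrande are filtered out.
mmlu = {task: scores for task, scores in results.items()
        if task.startswith("harness|hendrycksTest")}
mean_acc_norm = sum(s["acc_norm"] for s in mmlu.values()) / len(mmlu)
print(f"{len(mmlu)} MMLU subtasks, mean acc_norm = {mean_acc_norm:.4f}")
```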
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | open-llm-leaderboard/details_AA051611__A0113 | [
"region:us"
] | 2024-01-14T19:24:10+00:00 | {"pretty_name": "Evaluation run of AA051611/A0113", "dataset_summary": "Dataset automatically created during the evaluation run of model [AA051611/A0113](https://huggingface.co/AA051611/A0113) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_AA051611__A0113\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-14T19:22:00.115237](https://huggingface.co/datasets/open-llm-leaderboard/details_AA051611__A0113/blob/main/results_2024-01-14T19-22-00.115237.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.7396629430618338,\n \"acc_stderr\": 0.02895723757690259,\n \"acc_norm\": 0.7443509721070339,\n \"acc_norm_stderr\": 0.02950325667268791,\n \"mc1\": 0.412484700122399,\n \"mc1_stderr\": 0.01723329939957122,\n \"mc2\": 0.5965256915069256,\n \"mc2_stderr\": 0.01518941143132932\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6313993174061433,\n \"acc_stderr\": 0.014097810678042194,\n \"acc_norm\": 0.6638225255972696,\n \"acc_norm_stderr\": 0.013804855026205761\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6549492133041227,\n \"acc_stderr\": 0.00474413282539152,\n \"acc_norm\": 0.848635729934276,\n \"acc_norm_stderr\": 0.0035767110656195833\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.38,\n \"acc_stderr\": 0.048783173121456316,\n \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.048783173121456316\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.7111111111111111,\n \"acc_stderr\": 0.03915450630414251,\n \"acc_norm\": 0.7111111111111111,\n \"acc_norm_stderr\": 0.03915450630414251\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.8552631578947368,\n \"acc_stderr\": 0.028631951845930387,\n \"acc_norm\": 0.8552631578947368,\n \"acc_norm_stderr\": 0.028631951845930387\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.76,\n \"acc_stderr\": 0.04292346959909284,\n \"acc_norm\": 0.76,\n \"acc_norm_stderr\": 0.04292346959909284\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.8,\n \"acc_stderr\": 0.02461829819586651,\n \"acc_norm\": 0.8,\n \"acc_norm_stderr\": 0.02461829819586651\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.8263888888888888,\n \"acc_stderr\": 0.03167473383795718,\n \"acc_norm\": 0.8263888888888888,\n \"acc_norm_stderr\": 0.03167473383795718\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.52,\n \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.52,\n \"acc_norm_stderr\": 0.050211673156867795\n },\n 
\"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.6,\n \"acc_stderr\": 0.049236596391733084,\n \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.049236596391733084\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.39,\n \"acc_stderr\": 0.04902071300001974,\n \"acc_norm\": 0.39,\n \"acc_norm_stderr\": 0.04902071300001974\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6878612716763006,\n \"acc_stderr\": 0.03533133389323657,\n \"acc_norm\": 0.6878612716763006,\n \"acc_norm_stderr\": 0.03533133389323657\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.04975185951049946,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.04975185951049946\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.78,\n \"acc_stderr\": 0.04163331998932261,\n \"acc_norm\": 0.78,\n \"acc_norm_stderr\": 0.04163331998932261\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.7659574468085106,\n \"acc_stderr\": 0.027678452578212383,\n \"acc_norm\": 0.7659574468085106,\n \"acc_norm_stderr\": 0.027678452578212383\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5877192982456141,\n \"acc_stderr\": 0.04630653203366596,\n \"acc_norm\": 0.5877192982456141,\n \"acc_norm_stderr\": 0.04630653203366596\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.7241379310344828,\n \"acc_stderr\": 0.03724563619774632,\n \"acc_norm\": 0.7241379310344828,\n \"acc_norm_stderr\": 0.03724563619774632\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.6428571428571429,\n \"acc_stderr\": 0.024677862841332783,\n \"acc_norm\": 0.6428571428571429,\n \"acc_norm_stderr\": 0.024677862841332783\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.5555555555555556,\n \"acc_stderr\": 0.04444444444444449,\n \"acc_norm\": 0.5555555555555556,\n \"acc_norm_stderr\": 0.04444444444444449\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.58,\n \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\": 0.58,\n \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.867741935483871,\n \"acc_stderr\": 0.019272015434846478,\n \"acc_norm\": 0.867741935483871,\n \"acc_norm_stderr\": 0.019272015434846478\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5812807881773399,\n \"acc_stderr\": 0.03471192860518468,\n \"acc_norm\": 0.5812807881773399,\n \"acc_norm_stderr\": 0.03471192860518468\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.8,\n \"acc_stderr\": 0.040201512610368445,\n \"acc_norm\": 0.8,\n \"acc_norm_stderr\": 0.040201512610368445\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.8424242424242424,\n \"acc_stderr\": 0.028450388805284357,\n \"acc_norm\": 0.8424242424242424,\n \"acc_norm_stderr\": 0.028450388805284357\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.8888888888888888,\n \"acc_stderr\": 0.022390787638216773,\n \"acc_norm\": 0.8888888888888888,\n \"acc_norm_stderr\": 0.022390787638216773\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.9533678756476683,\n \"acc_stderr\": 0.01521676181926258,\n \"acc_norm\": 0.9533678756476683,\n \"acc_norm_stderr\": 0.01521676181926258\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.7923076923076923,\n \"acc_stderr\": 0.020567539567246784,\n \"acc_norm\": 0.7923076923076923,\n 
\"acc_norm_stderr\": 0.020567539567246784\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.4111111111111111,\n \"acc_stderr\": 0.029999923508706682,\n \"acc_norm\": 0.4111111111111111,\n \"acc_norm_stderr\": 0.029999923508706682\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.8613445378151261,\n \"acc_stderr\": 0.022448264476832583,\n \"acc_norm\": 0.8613445378151261,\n \"acc_norm_stderr\": 0.022448264476832583\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.41721854304635764,\n \"acc_stderr\": 0.040261414976346104,\n \"acc_norm\": 0.41721854304635764,\n \"acc_norm_stderr\": 0.040261414976346104\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.9064220183486239,\n \"acc_stderr\": 0.012486841824601963,\n \"acc_norm\": 0.9064220183486239,\n \"acc_norm_stderr\": 0.012486841824601963\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.6666666666666666,\n \"acc_stderr\": 0.03214952147802749,\n \"acc_norm\": 0.6666666666666666,\n \"acc_norm_stderr\": 0.03214952147802749\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.9068627450980392,\n \"acc_stderr\": 0.020397853969426994,\n \"acc_norm\": 0.9068627450980392,\n \"acc_norm_stderr\": 0.020397853969426994\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.8987341772151899,\n \"acc_stderr\": 0.019637720526065515,\n \"acc_norm\": 0.8987341772151899,\n \"acc_norm_stderr\": 0.019637720526065515\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.7847533632286996,\n \"acc_stderr\": 0.027584066602208274,\n \"acc_norm\": 0.7847533632286996,\n \"acc_norm_stderr\": 0.027584066602208274\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.8854961832061069,\n \"acc_stderr\": 0.027927473753597453,\n \"acc_norm\": 0.8854961832061069,\n \"acc_norm_stderr\": 0.027927473753597453\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.859504132231405,\n \"acc_stderr\": 0.03172233426002158,\n \"acc_norm\": 0.859504132231405,\n \"acc_norm_stderr\": 0.03172233426002158\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.9074074074074074,\n \"acc_stderr\": 0.02802188803860943,\n \"acc_norm\": 0.9074074074074074,\n \"acc_norm_stderr\": 0.02802188803860943\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.852760736196319,\n \"acc_stderr\": 0.027839915278339653,\n \"acc_norm\": 0.852760736196319,\n \"acc_norm_stderr\": 0.027839915278339653\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.6160714285714286,\n \"acc_stderr\": 0.046161430750285455,\n \"acc_norm\": 0.6160714285714286,\n \"acc_norm_stderr\": 0.046161430750285455\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.8932038834951457,\n \"acc_stderr\": 0.030581088928331356,\n \"acc_norm\": 0.8932038834951457,\n \"acc_norm_stderr\": 0.030581088928331356\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.9145299145299145,\n \"acc_stderr\": 0.018315891685625852,\n \"acc_norm\": 0.9145299145299145,\n \"acc_norm_stderr\": 0.018315891685625852\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.87,\n \"acc_stderr\": 0.033799766898963086,\n \"acc_norm\": 0.87,\n \"acc_norm_stderr\": 0.033799766898963086\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.9029374201787995,\n \"acc_stderr\": 0.010586474712018302,\n \"acc_norm\": 0.9029374201787995,\n \"acc_norm_stderr\": 0.010586474712018302\n },\n 
\"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7861271676300579,\n \"acc_stderr\": 0.022075709251757177,\n \"acc_norm\": 0.7861271676300579,\n \"acc_norm_stderr\": 0.022075709251757177\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.6547486033519553,\n \"acc_stderr\": 0.015901432608930358,\n \"acc_norm\": 0.6547486033519553,\n \"acc_norm_stderr\": 0.015901432608930358\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.8169934640522876,\n \"acc_stderr\": 0.022140767512880973,\n \"acc_norm\": 0.8169934640522876,\n \"acc_norm_stderr\": 0.022140767512880973\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.8135048231511254,\n \"acc_stderr\": 0.0221224397724808,\n \"acc_norm\": 0.8135048231511254,\n \"acc_norm_stderr\": 0.0221224397724808\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.8240740740740741,\n \"acc_stderr\": 0.02118589361522515,\n \"acc_norm\": 0.8240740740740741,\n \"acc_norm_stderr\": 0.02118589361522515\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.5851063829787234,\n \"acc_stderr\": 0.0293922365846125,\n \"acc_norm\": 0.5851063829787234,\n \"acc_norm_stderr\": 0.0293922365846125\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.5619295958279009,\n \"acc_stderr\": 0.012671902782567643,\n \"acc_norm\": 0.5619295958279009,\n \"acc_norm_stderr\": 0.012671902782567643\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.8198529411764706,\n \"acc_stderr\": 0.023345163616544855,\n \"acc_norm\": 0.8198529411764706,\n \"acc_norm_stderr\": 0.023345163616544855\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.7761437908496732,\n \"acc_stderr\": 0.016863008585416613,\n \"acc_norm\": 0.7761437908496732,\n \"acc_norm_stderr\": 0.016863008585416613\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7090909090909091,\n \"acc_stderr\": 0.04350271442923243,\n \"acc_norm\": 0.7090909090909091,\n \"acc_norm_stderr\": 0.04350271442923243\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.8081632653061225,\n \"acc_stderr\": 0.025206963154225402,\n \"acc_norm\": 0.8081632653061225,\n \"acc_norm_stderr\": 0.025206963154225402\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.9054726368159204,\n \"acc_stderr\": 0.020687186951534094,\n \"acc_norm\": 0.9054726368159204,\n \"acc_norm_stderr\": 0.020687186951534094\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.94,\n \"acc_stderr\": 0.023868325657594173,\n \"acc_norm\": 0.94,\n \"acc_norm_stderr\": 0.023868325657594173\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5662650602409639,\n \"acc_stderr\": 0.03858158940685515,\n \"acc_norm\": 0.5662650602409639,\n \"acc_norm_stderr\": 0.03858158940685515\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8888888888888888,\n \"acc_stderr\": 0.024103384202072864,\n \"acc_norm\": 0.8888888888888888,\n \"acc_norm_stderr\": 0.024103384202072864\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.412484700122399,\n \"mc1_stderr\": 0.01723329939957122,\n \"mc2\": 0.5965256915069256,\n \"mc2_stderr\": 0.01518941143132932\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8200473559589582,\n \"acc_stderr\": 0.01079646868806868\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6087945413191812,\n \"acc_stderr\": 0.0134425024027943\n }\n}\n```", "repo_url": "https://huggingface.co/AA051611/A0113", "leaderboard_url": 
"https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_14T19_22_00.115237", "path": ["**/details_harness|arc:challenge|25_2024-01-14T19-22-00.115237.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-14T19-22-00.115237.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_14T19_22_00.115237", "path": ["**/details_harness|gsm8k|5_2024-01-14T19-22-00.115237.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-14T19-22-00.115237.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_14T19_22_00.115237", "path": ["**/details_harness|hellaswag|10_2024-01-14T19-22-00.115237.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-14T19-22-00.115237.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_14T19_22_00.115237", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T19-22-00.115237.parquet", 
"**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-14T19-22-00.115237.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T19-22-00.115237.parquet", 
"**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-14T19-22-00.115237.parquet", 
"**/details_harness|hendrycksTest-prehistory|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-14T19-22-00.115237.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-14T19-22-00.115237.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_14T19_22_00.115237", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T19-22-00.115237.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T19-22-00.115237.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_14T19_22_00.115237", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-14T19-22-00.115237.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-14T19-22-00.115237.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_14T19_22_00.115237", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-14T19-22-00.115237.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-14T19-22-00.115237.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_14T19_22_00.115237", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T19-22-00.115237.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T19-22-00.115237.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_14T19_22_00.115237", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T19-22-00.115237.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T19-22-00.115237.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_14T19_22_00.115237", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-14T19-22-00.115237.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-14T19-22-00.115237.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_14T19_22_00.115237", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T19-22-00.115237.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T19-22-00.115237.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_14T19_22_00.115237", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T19-22-00.115237.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T19-22-00.115237.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_14T19_22_00.115237", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T19-22-00.115237.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T19-22-00.115237.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_14T19_22_00.115237", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T19-22-00.115237.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T19-22-00.115237.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_14T19_22_00.115237", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-14T19-22-00.115237.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-14T19-22-00.115237.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_14T19_22_00.115237", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-14T19-22-00.115237.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-14T19-22-00.115237.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_14T19_22_00.115237", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T19-22-00.115237.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T19-22-00.115237.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_14T19_22_00.115237", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-14T19-22-00.115237.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-14T19-22-00.115237.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_14T19_22_00.115237", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T19-22-00.115237.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T19-22-00.115237.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_14T19_22_00.115237", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T19-22-00.115237.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T19-22-00.115237.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_01_14T19_22_00.115237", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T19-22-00.115237.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T19-22-00.115237.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_14T19_22_00.115237", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-14T19-22-00.115237.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-14T19-22-00.115237.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_14T19_22_00.115237", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T19-22-00.115237.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T19-22-00.115237.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_14T19_22_00.115237", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T19-22-00.115237.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T19-22-00.115237.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_14T19_22_00.115237", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T19-22-00.115237.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T19-22-00.115237.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_14T19_22_00.115237", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T19-22-00.115237.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T19-22-00.115237.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_14T19_22_00.115237", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T19-22-00.115237.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T19-22-00.115237.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_14T19_22_00.115237", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T19-22-00.115237.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T19-22-00.115237.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_14T19_22_00.115237", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T19-22-00.115237.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T19-22-00.115237.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_14T19_22_00.115237", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T19-22-00.115237.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T19-22-00.115237.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_14T19_22_00.115237", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T19-22-00.115237.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T19-22-00.115237.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_14T19_22_00.115237", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T19-22-00.115237.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T19-22-00.115237.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_14T19_22_00.115237", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T19-22-00.115237.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T19-22-00.115237.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_14T19_22_00.115237", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T19-22-00.115237.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T19-22-00.115237.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_14T19_22_00.115237", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T19-22-00.115237.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T19-22-00.115237.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_14T19_22_00.115237", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T19-22-00.115237.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T19-22-00.115237.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_14T19_22_00.115237", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-14T19-22-00.115237.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-14T19-22-00.115237.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_14T19_22_00.115237", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T19-22-00.115237.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T19-22-00.115237.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_14T19_22_00.115237", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-14T19-22-00.115237.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-14T19-22-00.115237.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_14T19_22_00.115237", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T19-22-00.115237.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T19-22-00.115237.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_14T19_22_00.115237", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T19-22-00.115237.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T19-22-00.115237.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_14T19_22_00.115237", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T19-22-00.115237.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T19-22-00.115237.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_01_14T19_22_00.115237", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-14T19-22-00.115237.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-14T19-22-00.115237.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_14T19_22_00.115237", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-14T19-22-00.115237.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-14T19-22-00.115237.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_14T19_22_00.115237", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T19-22-00.115237.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T19-22-00.115237.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_14T19_22_00.115237", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T19-22-00.115237.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T19-22-00.115237.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_14T19_22_00.115237", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T19-22-00.115237.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T19-22-00.115237.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_14T19_22_00.115237", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T19-22-00.115237.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T19-22-00.115237.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_01_14T19_22_00.115237", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-14T19-22-00.115237.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-14T19-22-00.115237.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_14T19_22_00.115237", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-14T19-22-00.115237.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-14T19-22-00.115237.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_14T19_22_00.115237", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-14T19-22-00.115237.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-14T19-22-00.115237.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_14T19_22_00.115237", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T19-22-00.115237.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T19-22-00.115237.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_14T19_22_00.115237", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-14T19-22-00.115237.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2024-01-14T19-22-00.115237.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_14T19_22_00.115237", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T19-22-00.115237.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T19-22-00.115237.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_14T19_22_00.115237", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T19-22-00.115237.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T19-22-00.115237.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_14T19_22_00.115237", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-14T19-22-00.115237.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-14T19-22-00.115237.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_14T19_22_00.115237", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-14T19-22-00.115237.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-14T19-22-00.115237.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_14T19_22_00.115237", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-14T19-22-00.115237.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-14T19-22-00.115237.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_14T19_22_00.115237", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T19-22-00.115237.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T19-22-00.115237.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_14T19_22_00.115237", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-14T19-22-00.115237.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-14T19-22-00.115237.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_14T19_22_00.115237", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-14T19-22-00.115237.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-14T19-22-00.115237.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_14T19_22_00.115237", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-14T19-22-00.115237.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-14T19-22-00.115237.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_14T19_22_00.115237", "path": ["**/details_harness|winogrande|5_2024-01-14T19-22-00.115237.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-14T19-22-00.115237.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_01_14T19_22_00.115237", "path": ["results_2024-01-14T19-22-00.115237.parquet"]}, {"split": "latest", "path": 
["results_2024-01-14T19-22-00.115237.parquet"]}]}]} | 2024-01-14T19:24:32+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of AA051611/A0113
Dataset automatically created during the evaluation run of model AA051611/A0113 on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
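A minimal sketch of that call, mirroring the loading pattern shown on the other leaderboard cards in this dump (the repository id below is inferred from their `details_<org>__<model>` naming convention, and `harness_winogrande_5` is one of the configurations listed in this card's metadata):

```python
from datasets import load_dataset

# per-sample details for the 5-shot Winogrande task;
# the "train" split always points to the latest run
data = load_dataset("open-llm-leaderboard/details_AA051611__A0113",
	"harness_winogrande_5",
	split="train")
```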
## Latest results
These are the latest results from run 2024-01-14T19:22:00.115237 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Evaluation run of AA051611/A0113\n\n\n\nDataset automatically created during the evaluation run of model AA051611/A0113 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-01-14T19:22:00.115237(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of AA051611/A0113\n\n\n\nDataset automatically created during the evaluation run of model AA051611/A0113 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-01-14T19:22:00.115237(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] |
1399054f245ffb967c40e9b469932f702503860b |
The text of all the articles from Logic Magazine issues 1-18.
**logic_raw.txt** - The articles are separated by three newlines. Each paragraph is on its own line.
**logic_passages.txt** - The articles, broken up into passages of between 300 to 2000 characters. Each passage is on its own line. | bentarnoff/logic_magazine_raw | [
"language:en",
"license:cc",
"magazine",
"region:us"
] | 2024-01-14T19:29:04+00:00 | {"language": ["en"], "license": "cc", "pretty_name": "Logic Magazine Article Text", "tags": ["magazine"]} | 2024-01-15T02:16:29+00:00 | [] | [
"en"
] | TAGS
#language-English #license-cc #magazine #region-us
|
The text of all the articles from Logic Magazine issues 1-18.
logic_raw.txt - The articles are separated by three newlines. Each paragraph is on its own line.
logic_passages.txt - The articles, broken up into passages of between 300 to 2000 characters. Each passage is on its own line. | [] | [
"TAGS\n#language-English #license-cc #magazine #region-us \n"
] |
88d78dd044a265dd77130111289fb5555cc6f084 |
# Dataset Card for Evaluation run of CallComply/openchat-3.5-0106-128k
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [CallComply/openchat-3.5-0106-128k](https://huggingface.co/CallComply/openchat-3.5-0106-128k) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset

# per-sample details for the 5-shot Winogrande task;
# the "train" split always points to the latest run's results
data = load_dataset("open-llm-leaderboard/details_CallComply__openchat-3.5-0106-128k",
	"harness_winogrande_5",
	split="train")
```
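To see what is available before picking a configuration, the stock `datasets` helper can enumerate the config names declared in this repository's metadata (a small sketch; the list should contain the per-task configs plus the aggregated "results" config):

```python
from datasets import get_dataset_config_names

# enumerate every configuration exposed by this details repository
configs = get_dataset_config_names(
    "open-llm-leaderboard/details_CallComply__openchat-3.5-0106-128k")
print(len(configs), configs[:5])
```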
## Latest results
These are the [latest results from run 2024-01-14T19:33:38.391321](https://huggingface.co/datasets/open-llm-leaderboard/details_CallComply__openchat-3.5-0106-128k/blob/main/results_2024-01-14T19-33-38.391321.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.5749023148549777,
"acc_stderr": 0.03362057109614855,
"acc_norm": 0.5803055801198537,
"acc_norm_stderr": 0.034322339538364395,
"mc1": 0.31334149326805383,
"mc1_stderr": 0.016238065069059605,
"mc2": 0.46500466840014487,
"mc2_stderr": 0.014848695472788285
},
"harness|arc:challenge|25": {
"acc": 0.5827645051194539,
"acc_stderr": 0.014409825518403079,
"acc_norm": 0.6424914675767918,
"acc_norm_stderr": 0.014005494275916573
},
"harness|hellaswag|10": {
"acc": 0.5573590918143796,
"acc_stderr": 0.004956839256162732,
"acc_norm": 0.7730531766580363,
"acc_norm_stderr": 0.004180018992862959
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.29,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.29,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5037037037037037,
"acc_stderr": 0.04319223625811331,
"acc_norm": 0.5037037037037037,
"acc_norm_stderr": 0.04319223625811331
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.5855263157894737,
"acc_stderr": 0.04008973785779206,
"acc_norm": 0.5855263157894737,
"acc_norm_stderr": 0.04008973785779206
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.58,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.58,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.660377358490566,
"acc_stderr": 0.029146904747798328,
"acc_norm": 0.660377358490566,
"acc_norm_stderr": 0.029146904747798328
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.6319444444444444,
"acc_stderr": 0.04032999053960719,
"acc_norm": 0.6319444444444444,
"acc_norm_stderr": 0.04032999053960719
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.41,
"acc_stderr": 0.04943110704237102,
"acc_norm": 0.41,
"acc_norm_stderr": 0.04943110704237102
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.5,
"acc_stderr": 0.050251890762960605,
"acc_norm": 0.5,
"acc_norm_stderr": 0.050251890762960605
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.41,
"acc_stderr": 0.04943110704237102,
"acc_norm": 0.41,
"acc_norm_stderr": 0.04943110704237102
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6011560693641619,
"acc_stderr": 0.0373362665538351,
"acc_norm": 0.6011560693641619,
"acc_norm_stderr": 0.0373362665538351
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.29411764705882354,
"acc_stderr": 0.04533838195929776,
"acc_norm": 0.29411764705882354,
"acc_norm_stderr": 0.04533838195929776
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.74,
"acc_stderr": 0.0440844002276808,
"acc_norm": 0.74,
"acc_norm_stderr": 0.0440844002276808
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5191489361702127,
"acc_stderr": 0.03266204299064678,
"acc_norm": 0.5191489361702127,
"acc_norm_stderr": 0.03266204299064678
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.41228070175438597,
"acc_stderr": 0.04630653203366595,
"acc_norm": 0.41228070175438597,
"acc_norm_stderr": 0.04630653203366595
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5310344827586206,
"acc_stderr": 0.04158632762097828,
"acc_norm": 0.5310344827586206,
"acc_norm_stderr": 0.04158632762097828
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4365079365079365,
"acc_stderr": 0.0255428468174005,
"acc_norm": 0.4365079365079365,
"acc_norm_stderr": 0.0255428468174005
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.373015873015873,
"acc_stderr": 0.04325506042017086,
"acc_norm": 0.373015873015873,
"acc_norm_stderr": 0.04325506042017086
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695235,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695235
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7032258064516129,
"acc_stderr": 0.025988500792411887,
"acc_norm": 0.7032258064516129,
"acc_norm_stderr": 0.025988500792411887
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.41379310344827586,
"acc_stderr": 0.03465304488406795,
"acc_norm": 0.41379310344827586,
"acc_norm_stderr": 0.03465304488406795
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.64,
"acc_stderr": 0.048241815132442176,
"acc_norm": 0.64,
"acc_norm_stderr": 0.048241815132442176
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.4909090909090909,
"acc_stderr": 0.0390369864774844,
"acc_norm": 0.4909090909090909,
"acc_norm_stderr": 0.0390369864774844
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.6919191919191919,
"acc_stderr": 0.03289477330098616,
"acc_norm": 0.6919191919191919,
"acc_norm_stderr": 0.03289477330098616
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8756476683937824,
"acc_stderr": 0.023814477086593556,
"acc_norm": 0.8756476683937824,
"acc_norm_stderr": 0.023814477086593556
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.5871794871794872,
"acc_stderr": 0.024962683564331796,
"acc_norm": 0.5871794871794872,
"acc_norm_stderr": 0.024962683564331796
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.2740740740740741,
"acc_stderr": 0.027195934804085626,
"acc_norm": 0.2740740740740741,
"acc_norm_stderr": 0.027195934804085626
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.5630252100840336,
"acc_stderr": 0.03221943636566196,
"acc_norm": 0.5630252100840336,
"acc_norm_stderr": 0.03221943636566196
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3708609271523179,
"acc_stderr": 0.03943966699183629,
"acc_norm": 0.3708609271523179,
"acc_norm_stderr": 0.03943966699183629
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.7761467889908257,
"acc_stderr": 0.017871217767790236,
"acc_norm": 0.7761467889908257,
"acc_norm_stderr": 0.017871217767790236
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.4027777777777778,
"acc_stderr": 0.033448873829978666,
"acc_norm": 0.4027777777777778,
"acc_norm_stderr": 0.033448873829978666
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.6176470588235294,
"acc_stderr": 0.0341078533890472,
"acc_norm": 0.6176470588235294,
"acc_norm_stderr": 0.0341078533890472
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7088607594936709,
"acc_stderr": 0.029571601065753378,
"acc_norm": 0.7088607594936709,
"acc_norm_stderr": 0.029571601065753378
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6591928251121076,
"acc_stderr": 0.0318114974705536,
"acc_norm": 0.6591928251121076,
"acc_norm_stderr": 0.0318114974705536
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.6946564885496184,
"acc_stderr": 0.040393149787245605,
"acc_norm": 0.6946564885496184,
"acc_norm_stderr": 0.040393149787245605
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7768595041322314,
"acc_stderr": 0.03800754475228732,
"acc_norm": 0.7768595041322314,
"acc_norm_stderr": 0.03800754475228732
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.6944444444444444,
"acc_stderr": 0.044531975073749834,
"acc_norm": 0.6944444444444444,
"acc_norm_stderr": 0.044531975073749834
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.656441717791411,
"acc_stderr": 0.03731133519673893,
"acc_norm": 0.656441717791411,
"acc_norm_stderr": 0.03731133519673893
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.48214285714285715,
"acc_stderr": 0.047427623612430116,
"acc_norm": 0.48214285714285715,
"acc_norm_stderr": 0.047427623612430116
},
"harness|hendrycksTest-management|5": {
"acc": 0.6990291262135923,
"acc_stderr": 0.04541609446503948,
"acc_norm": 0.6990291262135923,
"acc_norm_stderr": 0.04541609446503948
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8547008547008547,
"acc_stderr": 0.023086635086841407,
"acc_norm": 0.8547008547008547,
"acc_norm_stderr": 0.023086635086841407
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.67,
"acc_stderr": 0.04725815626252609,
"acc_norm": 0.67,
"acc_norm_stderr": 0.04725815626252609
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.776500638569604,
"acc_stderr": 0.014897235229450708,
"acc_norm": 0.776500638569604,
"acc_norm_stderr": 0.014897235229450708
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.6502890173410405,
"acc_stderr": 0.025674281456531015,
"acc_norm": 0.6502890173410405,
"acc_norm_stderr": 0.025674281456531015
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.2536312849162011,
"acc_stderr": 0.014551553659369922,
"acc_norm": 0.2536312849162011,
"acc_norm_stderr": 0.014551553659369922
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.6339869281045751,
"acc_stderr": 0.02758281141515962,
"acc_norm": 0.6339869281045751,
"acc_norm_stderr": 0.02758281141515962
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.639871382636656,
"acc_stderr": 0.027264297599804015,
"acc_norm": 0.639871382636656,
"acc_norm_stderr": 0.027264297599804015
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.6574074074074074,
"acc_stderr": 0.026406145973625676,
"acc_norm": 0.6574074074074074,
"acc_norm_stderr": 0.026406145973625676
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4574468085106383,
"acc_stderr": 0.029719281272236837,
"acc_norm": 0.4574468085106383,
"acc_norm_stderr": 0.029719281272236837
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.3970013037809648,
"acc_stderr": 0.012496346982909556,
"acc_norm": 0.3970013037809648,
"acc_norm_stderr": 0.012496346982909556
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.5294117647058824,
"acc_stderr": 0.03032024326500413,
"acc_norm": 0.5294117647058824,
"acc_norm_stderr": 0.03032024326500413
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.5866013071895425,
"acc_stderr": 0.01992211568278668,
"acc_norm": 0.5866013071895425,
"acc_norm_stderr": 0.01992211568278668
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6818181818181818,
"acc_stderr": 0.04461272175910509,
"acc_norm": 0.6818181818181818,
"acc_norm_stderr": 0.04461272175910509
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.6571428571428571,
"acc_stderr": 0.03038726291954773,
"acc_norm": 0.6571428571428571,
"acc_norm_stderr": 0.03038726291954773
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.7611940298507462,
"acc_stderr": 0.030147775935409217,
"acc_norm": 0.7611940298507462,
"acc_norm_stderr": 0.030147775935409217
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.77,
"acc_stderr": 0.04229525846816508,
"acc_norm": 0.77,
"acc_norm_stderr": 0.04229525846816508
},
"harness|hendrycksTest-virology|5": {
"acc": 0.4759036144578313,
"acc_stderr": 0.03887971849597264,
"acc_norm": 0.4759036144578313,
"acc_norm_stderr": 0.03887971849597264
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.7719298245614035,
"acc_stderr": 0.032180937956023566,
"acc_norm": 0.7719298245614035,
"acc_norm_stderr": 0.032180937956023566
},
"harness|truthfulqa:mc|0": {
"mc1": 0.31334149326805383,
"mc1_stderr": 0.016238065069059605,
"mc2": 0.46500466840014487,
"mc2_stderr": 0.014848695472788285
},
"harness|winogrande|5": {
"acc": 0.77663772691397,
"acc_stderr": 0.0117056975652052
},
"harness|gsm8k|5": {
"acc": 0.3297952994692949,
"acc_stderr": 0.012949955030571147
}
}
```
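The same aggregated numbers can be pulled programmatically from the "results" configuration; a minimal sketch (the "latest" split name is taken from this card's own metadata):

```python
from datasets import load_dataset

# aggregated metrics of the most recent evaluation run
results = load_dataset(
    "open-llm-leaderboard/details_CallComply__openchat-3.5-0106-128k",
    "results",
    split="latest")
print(results[0])
```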
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | open-llm-leaderboard/details_CallComply__openchat-3.5-0106-128k | [
"region:us"
] | 2024-01-14T19:30:22+00:00 | {"pretty_name": "Evaluation run of CallComply/openchat-3.5-0106-128k", "dataset_summary": "Dataset automatically created during the evaluation run of model [CallComply/openchat-3.5-0106-128k](https://huggingface.co/CallComply/openchat-3.5-0106-128k) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_CallComply__openchat-3.5-0106-128k\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-14T19:33:38.391321](https://huggingface.co/datasets/open-llm-leaderboard/details_CallComply__openchat-3.5-0106-128k/blob/main/results_2024-01-14T19-33-38.391321.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5749023148549777,\n \"acc_stderr\": 0.03362057109614855,\n \"acc_norm\": 0.5803055801198537,\n \"acc_norm_stderr\": 0.034322339538364395,\n \"mc1\": 0.31334149326805383,\n \"mc1_stderr\": 0.016238065069059605,\n \"mc2\": 0.46500466840014487,\n \"mc2_stderr\": 0.014848695472788285\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.5827645051194539,\n \"acc_stderr\": 0.014409825518403079,\n \"acc_norm\": 0.6424914675767918,\n \"acc_norm_stderr\": 0.014005494275916573\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.5573590918143796,\n \"acc_stderr\": 0.004956839256162732,\n \"acc_norm\": 0.7730531766580363,\n \"acc_norm_stderr\": 0.004180018992862959\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.29,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5037037037037037,\n \"acc_stderr\": 0.04319223625811331,\n \"acc_norm\": 0.5037037037037037,\n \"acc_norm_stderr\": 0.04319223625811331\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.5855263157894737,\n \"acc_stderr\": 0.04008973785779206,\n \"acc_norm\": 0.5855263157894737,\n \"acc_norm_stderr\": 0.04008973785779206\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.58,\n \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\": 0.58,\n \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.660377358490566,\n \"acc_stderr\": 0.029146904747798328,\n \"acc_norm\": 0.660377358490566,\n \"acc_norm_stderr\": 0.029146904747798328\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6319444444444444,\n \"acc_stderr\": 0.04032999053960719,\n \"acc_norm\": 0.6319444444444444,\n \"acc_norm_stderr\": 0.04032999053960719\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n 
\"acc\": 0.41,\n \"acc_stderr\": 0.04943110704237102,\n \"acc_norm\": 0.41,\n \"acc_norm_stderr\": 0.04943110704237102\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.050251890762960605,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.050251890762960605\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.41,\n \"acc_stderr\": 0.04943110704237102,\n \"acc_norm\": 0.41,\n \"acc_norm_stderr\": 0.04943110704237102\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6011560693641619,\n \"acc_stderr\": 0.0373362665538351,\n \"acc_norm\": 0.6011560693641619,\n \"acc_norm_stderr\": 0.0373362665538351\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.29411764705882354,\n \"acc_stderr\": 0.04533838195929776,\n \"acc_norm\": 0.29411764705882354,\n \"acc_norm_stderr\": 0.04533838195929776\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.74,\n \"acc_stderr\": 0.0440844002276808,\n \"acc_norm\": 0.74,\n \"acc_norm_stderr\": 0.0440844002276808\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5191489361702127,\n \"acc_stderr\": 0.03266204299064678,\n \"acc_norm\": 0.5191489361702127,\n \"acc_norm_stderr\": 0.03266204299064678\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.41228070175438597,\n \"acc_stderr\": 0.04630653203366595,\n \"acc_norm\": 0.41228070175438597,\n \"acc_norm_stderr\": 0.04630653203366595\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5310344827586206,\n \"acc_stderr\": 0.04158632762097828,\n \"acc_norm\": 0.5310344827586206,\n \"acc_norm_stderr\": 0.04158632762097828\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.4365079365079365,\n \"acc_stderr\": 0.0255428468174005,\n \"acc_norm\": 0.4365079365079365,\n \"acc_norm_stderr\": 0.0255428468174005\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.373015873015873,\n \"acc_stderr\": 0.04325506042017086,\n \"acc_norm\": 0.373015873015873,\n \"acc_norm_stderr\": 0.04325506042017086\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7032258064516129,\n \"acc_stderr\": 0.025988500792411887,\n \"acc_norm\": 0.7032258064516129,\n \"acc_norm_stderr\": 0.025988500792411887\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.41379310344827586,\n \"acc_stderr\": 0.03465304488406795,\n \"acc_norm\": 0.41379310344827586,\n \"acc_norm_stderr\": 0.03465304488406795\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.64,\n \"acc_stderr\": 0.048241815132442176,\n \"acc_norm\": 0.64,\n \"acc_norm_stderr\": 0.048241815132442176\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.4909090909090909,\n \"acc_stderr\": 0.0390369864774844,\n \"acc_norm\": 0.4909090909090909,\n \"acc_norm_stderr\": 0.0390369864774844\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.6919191919191919,\n \"acc_stderr\": 0.03289477330098616,\n \"acc_norm\": 0.6919191919191919,\n \"acc_norm_stderr\": 0.03289477330098616\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8756476683937824,\n \"acc_stderr\": 0.023814477086593556,\n \"acc_norm\": 0.8756476683937824,\n \"acc_norm_stderr\": 0.023814477086593556\n },\n 
\"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.5871794871794872,\n \"acc_stderr\": 0.024962683564331796,\n \"acc_norm\": 0.5871794871794872,\n \"acc_norm_stderr\": 0.024962683564331796\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.2740740740740741,\n \"acc_stderr\": 0.027195934804085626,\n \"acc_norm\": 0.2740740740740741,\n \"acc_norm_stderr\": 0.027195934804085626\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.5630252100840336,\n \"acc_stderr\": 0.03221943636566196,\n \"acc_norm\": 0.5630252100840336,\n \"acc_norm_stderr\": 0.03221943636566196\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3708609271523179,\n \"acc_stderr\": 0.03943966699183629,\n \"acc_norm\": 0.3708609271523179,\n \"acc_norm_stderr\": 0.03943966699183629\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.7761467889908257,\n \"acc_stderr\": 0.017871217767790236,\n \"acc_norm\": 0.7761467889908257,\n \"acc_norm_stderr\": 0.017871217767790236\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.4027777777777778,\n \"acc_stderr\": 0.033448873829978666,\n \"acc_norm\": 0.4027777777777778,\n \"acc_norm_stderr\": 0.033448873829978666\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.6176470588235294,\n \"acc_stderr\": 0.0341078533890472,\n \"acc_norm\": 0.6176470588235294,\n \"acc_norm_stderr\": 0.0341078533890472\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7088607594936709,\n \"acc_stderr\": 0.029571601065753378,\n \"acc_norm\": 0.7088607594936709,\n \"acc_norm_stderr\": 0.029571601065753378\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6591928251121076,\n \"acc_stderr\": 0.0318114974705536,\n \"acc_norm\": 0.6591928251121076,\n \"acc_norm_stderr\": 0.0318114974705536\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.6946564885496184,\n \"acc_stderr\": 0.040393149787245605,\n \"acc_norm\": 0.6946564885496184,\n \"acc_norm_stderr\": 0.040393149787245605\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7768595041322314,\n \"acc_stderr\": 0.03800754475228732,\n \"acc_norm\": 0.7768595041322314,\n \"acc_norm_stderr\": 0.03800754475228732\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.6944444444444444,\n \"acc_stderr\": 0.044531975073749834,\n \"acc_norm\": 0.6944444444444444,\n \"acc_norm_stderr\": 0.044531975073749834\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.656441717791411,\n \"acc_stderr\": 0.03731133519673893,\n \"acc_norm\": 0.656441717791411,\n \"acc_norm_stderr\": 0.03731133519673893\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.48214285714285715,\n \"acc_stderr\": 0.047427623612430116,\n \"acc_norm\": 0.48214285714285715,\n \"acc_norm_stderr\": 0.047427623612430116\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.6990291262135923,\n \"acc_stderr\": 0.04541609446503948,\n \"acc_norm\": 0.6990291262135923,\n \"acc_norm_stderr\": 0.04541609446503948\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8547008547008547,\n \"acc_stderr\": 0.023086635086841407,\n \"acc_norm\": 0.8547008547008547,\n \"acc_norm_stderr\": 0.023086635086841407\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.67,\n \"acc_stderr\": 0.04725815626252609,\n \"acc_norm\": 0.67,\n \"acc_norm_stderr\": 0.04725815626252609\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 
0.776500638569604,\n \"acc_stderr\": 0.014897235229450708,\n \"acc_norm\": 0.776500638569604,\n \"acc_norm_stderr\": 0.014897235229450708\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.6502890173410405,\n \"acc_stderr\": 0.025674281456531015,\n \"acc_norm\": 0.6502890173410405,\n \"acc_norm_stderr\": 0.025674281456531015\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.2536312849162011,\n \"acc_stderr\": 0.014551553659369922,\n \"acc_norm\": 0.2536312849162011,\n \"acc_norm_stderr\": 0.014551553659369922\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.6339869281045751,\n \"acc_stderr\": 0.02758281141515962,\n \"acc_norm\": 0.6339869281045751,\n \"acc_norm_stderr\": 0.02758281141515962\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.639871382636656,\n \"acc_stderr\": 0.027264297599804015,\n \"acc_norm\": 0.639871382636656,\n \"acc_norm_stderr\": 0.027264297599804015\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.6574074074074074,\n \"acc_stderr\": 0.026406145973625676,\n \"acc_norm\": 0.6574074074074074,\n \"acc_norm_stderr\": 0.026406145973625676\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.4574468085106383,\n \"acc_stderr\": 0.029719281272236837,\n \"acc_norm\": 0.4574468085106383,\n \"acc_norm_stderr\": 0.029719281272236837\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.3970013037809648,\n \"acc_stderr\": 0.012496346982909556,\n \"acc_norm\": 0.3970013037809648,\n \"acc_norm_stderr\": 0.012496346982909556\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.5294117647058824,\n \"acc_stderr\": 0.03032024326500413,\n \"acc_norm\": 0.5294117647058824,\n \"acc_norm_stderr\": 0.03032024326500413\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.5866013071895425,\n \"acc_stderr\": 0.01992211568278668,\n \"acc_norm\": 0.5866013071895425,\n \"acc_norm_stderr\": 0.01992211568278668\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6818181818181818,\n \"acc_stderr\": 0.04461272175910509,\n \"acc_norm\": 0.6818181818181818,\n \"acc_norm_stderr\": 0.04461272175910509\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.6571428571428571,\n \"acc_stderr\": 0.03038726291954773,\n \"acc_norm\": 0.6571428571428571,\n \"acc_norm_stderr\": 0.03038726291954773\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.7611940298507462,\n \"acc_stderr\": 0.030147775935409217,\n \"acc_norm\": 0.7611940298507462,\n \"acc_norm_stderr\": 0.030147775935409217\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.77,\n \"acc_stderr\": 0.04229525846816508,\n \"acc_norm\": 0.77,\n \"acc_norm_stderr\": 0.04229525846816508\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.4759036144578313,\n \"acc_stderr\": 0.03887971849597264,\n \"acc_norm\": 0.4759036144578313,\n \"acc_norm_stderr\": 0.03887971849597264\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.7719298245614035,\n \"acc_stderr\": 0.032180937956023566,\n \"acc_norm\": 0.7719298245614035,\n \"acc_norm_stderr\": 0.032180937956023566\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.31334149326805383,\n \"mc1_stderr\": 0.016238065069059605,\n \"mc2\": 0.46500466840014487,\n \"mc2_stderr\": 0.014848695472788285\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.77663772691397,\n \"acc_stderr\": 0.0117056975652052\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.3297952994692949,\n \"acc_stderr\": 0.012949955030571147\n 
}\n}\n```", "repo_url": "https://huggingface.co/CallComply/openchat-3.5-0106-128k", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_14T19_28_00.282158", "path": ["**/details_harness|arc:challenge|25_2024-01-14T19-28-00.282158.parquet"]}, {"split": "2024_01_14T19_33_38.391321", "path": ["**/details_harness|arc:challenge|25_2024-01-14T19-33-38.391321.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-14T19-33-38.391321.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_14T19_28_00.282158", "path": ["**/details_harness|gsm8k|5_2024-01-14T19-28-00.282158.parquet"]}, {"split": "2024_01_14T19_33_38.391321", "path": ["**/details_harness|gsm8k|5_2024-01-14T19-33-38.391321.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-14T19-33-38.391321.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_14T19_28_00.282158", "path": ["**/details_harness|hellaswag|10_2024-01-14T19-28-00.282158.parquet"]}, {"split": "2024_01_14T19_33_38.391321", "path": ["**/details_harness|hellaswag|10_2024-01-14T19-33-38.391321.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-14T19-33-38.391321.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_14T19_28_00.282158", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T19-28-00.282158.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-14T19-28-00.282158.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-14T19-28-00.282158.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T19-28-00.282158.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T19-28-00.282158.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-14T19-28-00.282158.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T19-28-00.282158.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T19-28-00.282158.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T19-28-00.282158.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T19-28-00.282158.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-14T19-28-00.282158.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-14T19-28-00.282158.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T19-28-00.282158.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-14T19-28-00.282158.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T19-28-00.282158.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T19-28-00.282158.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T19-28-00.282158.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-14T19-28-00.282158.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T19-28-00.282158.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T19-28-00.282158.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T19-28-00.282158.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T19-28-00.282158.parquet", 
"**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T19-28-00.282158.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T19-28-00.282158.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T19-28-00.282158.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T19-28-00.282158.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T19-28-00.282158.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T19-28-00.282158.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T19-28-00.282158.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T19-28-00.282158.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T19-28-00.282158.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T19-28-00.282158.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-14T19-28-00.282158.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T19-28-00.282158.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-14T19-28-00.282158.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T19-28-00.282158.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T19-28-00.282158.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T19-28-00.282158.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-14T19-28-00.282158.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-14T19-28-00.282158.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T19-28-00.282158.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T19-28-00.282158.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T19-28-00.282158.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T19-28-00.282158.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-14T19-28-00.282158.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-14T19-28-00.282158.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-14T19-28-00.282158.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T19-28-00.282158.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-14T19-28-00.282158.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T19-28-00.282158.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T19-28-00.282158.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-14T19-28-00.282158.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-14T19-28-00.282158.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-14T19-28-00.282158.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T19-28-00.282158.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-14T19-28-00.282158.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-14T19-28-00.282158.parquet"]}, {"split": "2024_01_14T19_33_38.391321", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-14T19-33-38.391321.parquet", 
"**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T19-33-38.391321.parquet", 
"**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-14T19-33-38.391321.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T19-33-38.391321.parquet", 
"**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-14T19-33-38.391321.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-14T19-33-38.391321.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_14T19_28_00.282158", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T19-28-00.282158.parquet"]}, {"split": "2024_01_14T19_33_38.391321", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T19-33-38.391321.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T19-33-38.391321.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_14T19_28_00.282158", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-14T19-28-00.282158.parquet"]}, {"split": "2024_01_14T19_33_38.391321", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-14T19-33-38.391321.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-14T19-33-38.391321.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_14T19_28_00.282158", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-14T19-28-00.282158.parquet"]}, {"split": "2024_01_14T19_33_38.391321", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-14T19-33-38.391321.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-14T19-33-38.391321.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_14T19_28_00.282158", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T19-28-00.282158.parquet"]}, {"split": "2024_01_14T19_33_38.391321", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T19-33-38.391321.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T19-33-38.391321.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_14T19_28_00.282158", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T19-28-00.282158.parquet"]}, {"split": "2024_01_14T19_33_38.391321", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T19-33-38.391321.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T19-33-38.391321.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_14T19_28_00.282158", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-14T19-28-00.282158.parquet"]}, {"split": "2024_01_14T19_33_38.391321", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-14T19-33-38.391321.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-14T19-33-38.391321.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_14T19_28_00.282158", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T19-28-00.282158.parquet"]}, {"split": "2024_01_14T19_33_38.391321", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T19-33-38.391321.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T19-33-38.391321.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_14T19_28_00.282158", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T19-28-00.282158.parquet"]}, {"split": "2024_01_14T19_33_38.391321", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T19-33-38.391321.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T19-33-38.391321.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_14T19_28_00.282158", "path": 
["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T19-28-00.282158.parquet"]}, {"split": "2024_01_14T19_33_38.391321", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T19-33-38.391321.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T19-33-38.391321.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_14T19_28_00.282158", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T19-28-00.282158.parquet"]}, {"split": "2024_01_14T19_33_38.391321", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T19-33-38.391321.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T19-33-38.391321.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_14T19_28_00.282158", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-14T19-28-00.282158.parquet"]}, {"split": "2024_01_14T19_33_38.391321", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-14T19-33-38.391321.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-14T19-33-38.391321.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_14T19_28_00.282158", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-14T19-28-00.282158.parquet"]}, {"split": "2024_01_14T19_33_38.391321", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-14T19-33-38.391321.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-14T19-33-38.391321.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_14T19_28_00.282158", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T19-28-00.282158.parquet"]}, {"split": "2024_01_14T19_33_38.391321", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T19-33-38.391321.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T19-33-38.391321.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_14T19_28_00.282158", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-14T19-28-00.282158.parquet"]}, {"split": "2024_01_14T19_33_38.391321", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-14T19-33-38.391321.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-14T19-33-38.391321.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_14T19_28_00.282158", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T19-28-00.282158.parquet"]}, {"split": "2024_01_14T19_33_38.391321", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T19-33-38.391321.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T19-33-38.391321.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_14T19_28_00.282158", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T19-28-00.282158.parquet"]}, {"split": "2024_01_14T19_33_38.391321", "path": 
["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T19-33-38.391321.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T19-33-38.391321.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_01_14T19_28_00.282158", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T19-28-00.282158.parquet"]}, {"split": "2024_01_14T19_33_38.391321", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T19-33-38.391321.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T19-33-38.391321.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_14T19_28_00.282158", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-14T19-28-00.282158.parquet"]}, {"split": "2024_01_14T19_33_38.391321", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-14T19-33-38.391321.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-14T19-33-38.391321.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_14T19_28_00.282158", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T19-28-00.282158.parquet"]}, {"split": "2024_01_14T19_33_38.391321", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T19-33-38.391321.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T19-33-38.391321.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_14T19_28_00.282158", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T19-28-00.282158.parquet"]}, {"split": "2024_01_14T19_33_38.391321", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T19-33-38.391321.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T19-33-38.391321.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_14T19_28_00.282158", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T19-28-00.282158.parquet"]}, {"split": "2024_01_14T19_33_38.391321", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T19-33-38.391321.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T19-33-38.391321.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_14T19_28_00.282158", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T19-28-00.282158.parquet"]}, {"split": "2024_01_14T19_33_38.391321", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T19-33-38.391321.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T19-33-38.391321.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_14T19_28_00.282158", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T19-28-00.282158.parquet"]}, {"split": "2024_01_14T19_33_38.391321", "path": 
["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T19-33-38.391321.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T19-33-38.391321.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_14T19_28_00.282158", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T19-28-00.282158.parquet"]}, {"split": "2024_01_14T19_33_38.391321", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T19-33-38.391321.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T19-33-38.391321.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_14T19_28_00.282158", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T19-28-00.282158.parquet"]}, {"split": "2024_01_14T19_33_38.391321", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T19-33-38.391321.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T19-33-38.391321.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_14T19_28_00.282158", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T19-28-00.282158.parquet"]}, {"split": "2024_01_14T19_33_38.391321", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T19-33-38.391321.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T19-33-38.391321.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_14T19_28_00.282158", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T19-28-00.282158.parquet"]}, {"split": "2024_01_14T19_33_38.391321", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T19-33-38.391321.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T19-33-38.391321.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_14T19_28_00.282158", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T19-28-00.282158.parquet"]}, {"split": "2024_01_14T19_33_38.391321", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T19-33-38.391321.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T19-33-38.391321.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_14T19_28_00.282158", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T19-28-00.282158.parquet"]}, {"split": "2024_01_14T19_33_38.391321", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T19-33-38.391321.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T19-33-38.391321.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_14T19_28_00.282158", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T19-28-00.282158.parquet"]}, 
{"split": "2024_01_14T19_33_38.391321", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T19-33-38.391321.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T19-33-38.391321.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_14T19_28_00.282158", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T19-28-00.282158.parquet"]}, {"split": "2024_01_14T19_33_38.391321", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T19-33-38.391321.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T19-33-38.391321.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_14T19_28_00.282158", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T19-28-00.282158.parquet"]}, {"split": "2024_01_14T19_33_38.391321", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T19-33-38.391321.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T19-33-38.391321.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_14T19_28_00.282158", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-14T19-28-00.282158.parquet"]}, {"split": "2024_01_14T19_33_38.391321", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-14T19-33-38.391321.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-14T19-33-38.391321.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_14T19_28_00.282158", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T19-28-00.282158.parquet"]}, {"split": "2024_01_14T19_33_38.391321", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T19-33-38.391321.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T19-33-38.391321.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_14T19_28_00.282158", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-14T19-28-00.282158.parquet"]}, {"split": "2024_01_14T19_33_38.391321", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-14T19-33-38.391321.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-14T19-33-38.391321.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_14T19_28_00.282158", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T19-28-00.282158.parquet"]}, {"split": "2024_01_14T19_33_38.391321", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T19-33-38.391321.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T19-33-38.391321.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_14T19_28_00.282158", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T19-28-00.282158.parquet"]}, {"split": "2024_01_14T19_33_38.391321", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T19-33-38.391321.parquet"]}, {"split": 
"latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T19-33-38.391321.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_14T19_28_00.282158", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T19-28-00.282158.parquet"]}, {"split": "2024_01_14T19_33_38.391321", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T19-33-38.391321.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T19-33-38.391321.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_01_14T19_28_00.282158", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-14T19-28-00.282158.parquet"]}, {"split": "2024_01_14T19_33_38.391321", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-14T19-33-38.391321.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-14T19-33-38.391321.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_14T19_28_00.282158", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-14T19-28-00.282158.parquet"]}, {"split": "2024_01_14T19_33_38.391321", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-14T19-33-38.391321.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-14T19-33-38.391321.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_14T19_28_00.282158", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T19-28-00.282158.parquet"]}, {"split": "2024_01_14T19_33_38.391321", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T19-33-38.391321.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T19-33-38.391321.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_14T19_28_00.282158", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T19-28-00.282158.parquet"]}, {"split": "2024_01_14T19_33_38.391321", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T19-33-38.391321.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T19-33-38.391321.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_14T19_28_00.282158", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T19-28-00.282158.parquet"]}, {"split": "2024_01_14T19_33_38.391321", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T19-33-38.391321.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T19-33-38.391321.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_14T19_28_00.282158", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T19-28-00.282158.parquet"]}, {"split": "2024_01_14T19_33_38.391321", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T19-33-38.391321.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T19-33-38.391321.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_01_14T19_28_00.282158", "path": 
["**/details_harness|hendrycksTest-nutrition|5_2024-01-14T19-28-00.282158.parquet"]}, {"split": "2024_01_14T19_33_38.391321", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-14T19-33-38.391321.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-14T19-33-38.391321.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_14T19_28_00.282158", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-14T19-28-00.282158.parquet"]}, {"split": "2024_01_14T19_33_38.391321", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-14T19-33-38.391321.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-14T19-33-38.391321.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_14T19_28_00.282158", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-14T19-28-00.282158.parquet"]}, {"split": "2024_01_14T19_33_38.391321", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-14T19-33-38.391321.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-14T19-33-38.391321.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_14T19_28_00.282158", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T19-28-00.282158.parquet"]}, {"split": "2024_01_14T19_33_38.391321", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T19-33-38.391321.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T19-33-38.391321.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_14T19_28_00.282158", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-14T19-28-00.282158.parquet"]}, {"split": "2024_01_14T19_33_38.391321", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-14T19-33-38.391321.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-14T19-33-38.391321.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_14T19_28_00.282158", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T19-28-00.282158.parquet"]}, {"split": "2024_01_14T19_33_38.391321", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T19-33-38.391321.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T19-33-38.391321.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_14T19_28_00.282158", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T19-28-00.282158.parquet"]}, {"split": "2024_01_14T19_33_38.391321", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T19-33-38.391321.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T19-33-38.391321.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_14T19_28_00.282158", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-14T19-28-00.282158.parquet"]}, {"split": "2024_01_14T19_33_38.391321", "path": 
["**/details_harness|hendrycksTest-public_relations|5_2024-01-14T19-33-38.391321.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-14T19-33-38.391321.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_14T19_28_00.282158", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-14T19-28-00.282158.parquet"]}, {"split": "2024_01_14T19_33_38.391321", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-14T19-33-38.391321.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-14T19-33-38.391321.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_14T19_28_00.282158", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-14T19-28-00.282158.parquet"]}, {"split": "2024_01_14T19_33_38.391321", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-14T19-33-38.391321.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-14T19-33-38.391321.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_14T19_28_00.282158", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T19-28-00.282158.parquet"]}, {"split": "2024_01_14T19_33_38.391321", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T19-33-38.391321.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T19-33-38.391321.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_14T19_28_00.282158", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-14T19-28-00.282158.parquet"]}, {"split": "2024_01_14T19_33_38.391321", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-14T19-33-38.391321.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-14T19-33-38.391321.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_14T19_28_00.282158", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-14T19-28-00.282158.parquet"]}, {"split": "2024_01_14T19_33_38.391321", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-14T19-33-38.391321.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-14T19-33-38.391321.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_14T19_28_00.282158", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-14T19-28-00.282158.parquet"]}, {"split": "2024_01_14T19_33_38.391321", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-14T19-33-38.391321.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-14T19-33-38.391321.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_14T19_28_00.282158", "path": ["**/details_harness|winogrande|5_2024-01-14T19-28-00.282158.parquet"]}, {"split": "2024_01_14T19_33_38.391321", "path": ["**/details_harness|winogrande|5_2024-01-14T19-33-38.391321.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-14T19-33-38.391321.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_01_14T19_28_00.282158", "path": ["results_2024-01-14T19-28-00.282158.parquet"]}, {"split": "2024_01_14T19_33_38.391321", "path": 
["results_2024-01-14T19-33-38.391321.parquet"]}, {"split": "latest", "path": ["results_2024-01-14T19-33-38.391321.parquet"]}]}]} | 2024-01-14T19:35:58+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of CallComply/openchat-3.5-0106-128k
Dataset automatically created during the evaluation run of model CallComply/openchat-3.5-0106-128k on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2024-01-14T19:33:38.391321 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Evaluation run of CallComply/openchat-3.5-0106-128k\n\n\n\nDataset automatically created during the evaluation run of model CallComply/openchat-3.5-0106-128k on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-01-14T19:33:38.391321(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of CallComply/openchat-3.5-0106-128k\n\n\n\nDataset automatically created during the evaluation run of model CallComply/openchat-3.5-0106-128k on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-01-14T19:33:38.391321(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] |
e2aa3f30d138a7891a55bc16fb25bf12ea0d2f7b |
# Dataset Card for Evaluation run of CallComply/zephyr-7b-beta-128k
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [CallComply/zephyr-7b-beta-128k](https://huggingface.co/CallComply/zephyr-7b-beta-128k) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_CallComply__zephyr-7b-beta-128k",
"harness_winogrande_5",
split="train")
```
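
The same pattern works for any of the 63 configurations. As a minimal sketch (the configuration and split names below follow the conventions described above; the timestamped split name is illustrative and should match this card's run timestamp), the aggregated scores can be loaded from the "results" configuration, and a single run can be selected through its timestamped split:

```python
from datasets import load_dataset

# Aggregated metrics for each run live in the "results" configuration;
# the "latest" split always points to the most recent evaluation.
results = load_dataset(
    "open-llm-leaderboard/details_CallComply__zephyr-7b-beta-128k",
    "results",
    split="latest",
)

# A specific run can also be addressed by its timestamped split, e.g.:
run = load_dataset(
    "open-llm-leaderboard/details_CallComply__zephyr-7b-beta-128k",
    "harness_winogrande_5",
    split="2024_01_14T19_45_35.717294",  # illustrative; derived from this card's run timestamp
)
```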
## Latest results
These are the [latest results from run 2024-01-14T19:45:35.717294](https://huggingface.co/datasets/open-llm-leaderboard/details_CallComply__zephyr-7b-beta-128k/blob/main/results_2024-01-14T19-45-35.717294.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.5337384150834084,
"acc_stderr": 0.034377622578911936,
"acc_norm": 0.5411488270607204,
"acc_norm_stderr": 0.03515985681109475,
"mc1": 0.30966952264381886,
"mc1_stderr": 0.016185744355144915,
"mc2": 0.4609603387456776,
"mc2_stderr": 0.01568400425776764
},
"harness|arc:challenge|25": {
"acc": 0.5435153583617748,
"acc_stderr": 0.01455594976049644,
"acc_norm": 0.5827645051194539,
"acc_norm_stderr": 0.014409825518403084
},
"harness|hellaswag|10": {
"acc": 0.6016729735112527,
"acc_stderr": 0.004885529674958333,
"acc_norm": 0.8099980083648676,
"acc_norm_stderr": 0.003915007231962104
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.22,
"acc_stderr": 0.0416333199893227,
"acc_norm": 0.22,
"acc_norm_stderr": 0.0416333199893227
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.4962962962962963,
"acc_stderr": 0.04319223625811331,
"acc_norm": 0.4962962962962963,
"acc_norm_stderr": 0.04319223625811331
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.5592105263157895,
"acc_stderr": 0.04040311062490436,
"acc_norm": 0.5592105263157895,
"acc_norm_stderr": 0.04040311062490436
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.53,
"acc_stderr": 0.05016135580465919,
"acc_norm": 0.53,
"acc_norm_stderr": 0.05016135580465919
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.5735849056603773,
"acc_stderr": 0.03043779434298305,
"acc_norm": 0.5735849056603773,
"acc_norm_stderr": 0.03043779434298305
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.6458333333333334,
"acc_stderr": 0.039994111357535424,
"acc_norm": 0.6458333333333334,
"acc_norm_stderr": 0.039994111357535424
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.46,
"acc_stderr": 0.05009082659620333,
"acc_norm": 0.46,
"acc_norm_stderr": 0.05009082659620333
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.44,
"acc_stderr": 0.0498887651569859,
"acc_norm": 0.44,
"acc_norm_stderr": 0.0498887651569859
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695235,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695235
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6069364161849711,
"acc_stderr": 0.0372424959581773,
"acc_norm": 0.6069364161849711,
"acc_norm_stderr": 0.0372424959581773
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.38235294117647056,
"acc_stderr": 0.04835503696107224,
"acc_norm": 0.38235294117647056,
"acc_norm_stderr": 0.04835503696107224
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.68,
"acc_stderr": 0.046882617226215034,
"acc_norm": 0.68,
"acc_norm_stderr": 0.046882617226215034
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.4340425531914894,
"acc_stderr": 0.032400380867927465,
"acc_norm": 0.4340425531914894,
"acc_norm_stderr": 0.032400380867927465
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.37719298245614036,
"acc_stderr": 0.04559522141958217,
"acc_norm": 0.37719298245614036,
"acc_norm_stderr": 0.04559522141958217
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.43448275862068964,
"acc_stderr": 0.041307408795554966,
"acc_norm": 0.43448275862068964,
"acc_norm_stderr": 0.041307408795554966
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.30687830687830686,
"acc_stderr": 0.02375292871211213,
"acc_norm": 0.30687830687830686,
"acc_norm_stderr": 0.02375292871211213
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.36507936507936506,
"acc_stderr": 0.04306241259127153,
"acc_norm": 0.36507936507936506,
"acc_norm_stderr": 0.04306241259127153
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.37,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.37,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.6548387096774193,
"acc_stderr": 0.02704574657353433,
"acc_norm": 0.6548387096774193,
"acc_norm_stderr": 0.02704574657353433
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.43349753694581283,
"acc_stderr": 0.03486731727419872,
"acc_norm": 0.43349753694581283,
"acc_norm_stderr": 0.03486731727419872
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.5,
"acc_stderr": 0.050251890762960605,
"acc_norm": 0.5,
"acc_norm_stderr": 0.050251890762960605
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.5757575757575758,
"acc_stderr": 0.03859268142070264,
"acc_norm": 0.5757575757575758,
"acc_norm_stderr": 0.03859268142070264
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.6717171717171717,
"acc_stderr": 0.03345678422756776,
"acc_norm": 0.6717171717171717,
"acc_norm_stderr": 0.03345678422756776
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.772020725388601,
"acc_stderr": 0.030276909945178263,
"acc_norm": 0.772020725388601,
"acc_norm_stderr": 0.030276909945178263
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.5743589743589743,
"acc_stderr": 0.02506909438729653,
"acc_norm": 0.5743589743589743,
"acc_norm_stderr": 0.02506909438729653
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.31851851851851853,
"acc_stderr": 0.028406533090608456,
"acc_norm": 0.31851851851851853,
"acc_norm_stderr": 0.028406533090608456
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.542016806722689,
"acc_stderr": 0.03236361111951941,
"acc_norm": 0.542016806722689,
"acc_norm_stderr": 0.03236361111951941
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3576158940397351,
"acc_stderr": 0.03913453431177258,
"acc_norm": 0.3576158940397351,
"acc_norm_stderr": 0.03913453431177258
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.7486238532110092,
"acc_stderr": 0.018599206360287415,
"acc_norm": 0.7486238532110092,
"acc_norm_stderr": 0.018599206360287415
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.49074074074074076,
"acc_stderr": 0.034093869469927006,
"acc_norm": 0.49074074074074076,
"acc_norm_stderr": 0.034093869469927006
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.5980392156862745,
"acc_stderr": 0.034411900234824655,
"acc_norm": 0.5980392156862745,
"acc_norm_stderr": 0.034411900234824655
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.6286919831223629,
"acc_stderr": 0.0314506860074486,
"acc_norm": 0.6286919831223629,
"acc_norm_stderr": 0.0314506860074486
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.57847533632287,
"acc_stderr": 0.03314190222110657,
"acc_norm": 0.57847533632287,
"acc_norm_stderr": 0.03314190222110657
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.549618320610687,
"acc_stderr": 0.04363643698524779,
"acc_norm": 0.549618320610687,
"acc_norm_stderr": 0.04363643698524779
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.6694214876033058,
"acc_stderr": 0.04294340845212094,
"acc_norm": 0.6694214876033058,
"acc_norm_stderr": 0.04294340845212094
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.6203703703703703,
"acc_stderr": 0.04691521224077742,
"acc_norm": 0.6203703703703703,
"acc_norm_stderr": 0.04691521224077742
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.6441717791411042,
"acc_stderr": 0.03761521380046734,
"acc_norm": 0.6441717791411042,
"acc_norm_stderr": 0.03761521380046734
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.4107142857142857,
"acc_stderr": 0.04669510663875191,
"acc_norm": 0.4107142857142857,
"acc_norm_stderr": 0.04669510663875191
},
"harness|hendrycksTest-management|5": {
"acc": 0.6796116504854369,
"acc_stderr": 0.04620284082280041,
"acc_norm": 0.6796116504854369,
"acc_norm_stderr": 0.04620284082280041
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.7863247863247863,
"acc_stderr": 0.026853450377009154,
"acc_norm": 0.7863247863247863,
"acc_norm_stderr": 0.026853450377009154
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.56,
"acc_stderr": 0.04988876515698589,
"acc_norm": 0.56,
"acc_norm_stderr": 0.04988876515698589
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.7266922094508301,
"acc_stderr": 0.01593668106262856,
"acc_norm": 0.7266922094508301,
"acc_norm_stderr": 0.01593668106262856
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.5924855491329479,
"acc_stderr": 0.026454578146931494,
"acc_norm": 0.5924855491329479,
"acc_norm_stderr": 0.026454578146931494
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.2636871508379888,
"acc_stderr": 0.01473692638376197,
"acc_norm": 0.2636871508379888,
"acc_norm_stderr": 0.01473692638376197
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.5588235294117647,
"acc_stderr": 0.028431095444176643,
"acc_norm": 0.5588235294117647,
"acc_norm_stderr": 0.028431095444176643
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.5884244372990354,
"acc_stderr": 0.02795048149440127,
"acc_norm": 0.5884244372990354,
"acc_norm_stderr": 0.02795048149440127
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.5617283950617284,
"acc_stderr": 0.02760791408740047,
"acc_norm": 0.5617283950617284,
"acc_norm_stderr": 0.02760791408740047
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4219858156028369,
"acc_stderr": 0.029462189233370593,
"acc_norm": 0.4219858156028369,
"acc_norm_stderr": 0.029462189233370593
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.3820078226857888,
"acc_stderr": 0.012409564470235562,
"acc_norm": 0.3820078226857888,
"acc_norm_stderr": 0.012409564470235562
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.5551470588235294,
"acc_stderr": 0.03018753206032938,
"acc_norm": 0.5551470588235294,
"acc_norm_stderr": 0.03018753206032938
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.5506535947712419,
"acc_stderr": 0.02012376652802727,
"acc_norm": 0.5506535947712419,
"acc_norm_stderr": 0.02012376652802727
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6454545454545455,
"acc_stderr": 0.04582004841505417,
"acc_norm": 0.6454545454545455,
"acc_norm_stderr": 0.04582004841505417
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.6163265306122448,
"acc_stderr": 0.031130880396235926,
"acc_norm": 0.6163265306122448,
"acc_norm_stderr": 0.031130880396235926
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.6119402985074627,
"acc_stderr": 0.0344578996436275,
"acc_norm": 0.6119402985074627,
"acc_norm_stderr": 0.0344578996436275
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-virology|5": {
"acc": 0.4457831325301205,
"acc_stderr": 0.03869543323472101,
"acc_norm": 0.4457831325301205,
"acc_norm_stderr": 0.03869543323472101
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.7368421052631579,
"acc_stderr": 0.03377310252209205,
"acc_norm": 0.7368421052631579,
"acc_norm_stderr": 0.03377310252209205
},
"harness|truthfulqa:mc|0": {
"mc1": 0.30966952264381886,
"mc1_stderr": 0.016185744355144915,
"mc2": 0.4609603387456776,
"mc2_stderr": 0.01568400425776764
},
"harness|winogrande|5": {
"acc": 0.7474348855564326,
"acc_stderr": 0.012211148449394105
},
"harness|gsm8k|5": {
"acc": 0.13040181956027294,
"acc_stderr": 0.009275630324554092
}
}
```
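
For quick downstream analysis, the per-task entries above can be aggregated directly. A rough sketch, assuming the JSON block has been read into a string named `raw` (an assumption for illustration; any JSON source works), reproduces an MMLU-style average over the "hendrycksTest" subtasks:

```python
import json

# `raw` is assumed to hold the JSON text shown above.
results = json.loads(raw)

# Average normalized accuracy over the MMLU ("hendrycksTest") subtasks.
mmlu = {name: scores for name, scores in results.items() if "hendrycksTest" in name}
mmlu_avg = sum(scores["acc_norm"] for scores in mmlu.values()) / len(mmlu)
print(f"MMLU acc_norm average over {len(mmlu)} subtasks: {mmlu_avg:.4f}")
```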
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | open-llm-leaderboard/details_CallComply__zephyr-7b-beta-128k | [
"region:us"
] | 2024-01-14T19:47:57+00:00 | {"pretty_name": "Evaluation run of CallComply/zephyr-7b-beta-128k", "dataset_summary": "Dataset automatically created during the evaluation run of model [CallComply/zephyr-7b-beta-128k](https://huggingface.co/CallComply/zephyr-7b-beta-128k) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_CallComply__zephyr-7b-beta-128k\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-14T19:45:35.717294](https://huggingface.co/datasets/open-llm-leaderboard/details_CallComply__zephyr-7b-beta-128k/blob/main/results_2024-01-14T19-45-35.717294.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5337384150834084,\n \"acc_stderr\": 0.034377622578911936,\n \"acc_norm\": 0.5411488270607204,\n \"acc_norm_stderr\": 0.03515985681109475,\n \"mc1\": 0.30966952264381886,\n \"mc1_stderr\": 0.016185744355144915,\n \"mc2\": 0.4609603387456776,\n \"mc2_stderr\": 0.01568400425776764\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.5435153583617748,\n \"acc_stderr\": 0.01455594976049644,\n \"acc_norm\": 0.5827645051194539,\n \"acc_norm_stderr\": 0.014409825518403084\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6016729735112527,\n \"acc_stderr\": 0.004885529674958333,\n \"acc_norm\": 0.8099980083648676,\n \"acc_norm_stderr\": 0.003915007231962104\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.22,\n \"acc_stderr\": 0.0416333199893227,\n \"acc_norm\": 0.22,\n \"acc_norm_stderr\": 0.0416333199893227\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.4962962962962963,\n \"acc_stderr\": 0.04319223625811331,\n \"acc_norm\": 0.4962962962962963,\n \"acc_norm_stderr\": 0.04319223625811331\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.5592105263157895,\n \"acc_stderr\": 0.04040311062490436,\n \"acc_norm\": 0.5592105263157895,\n \"acc_norm_stderr\": 0.04040311062490436\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.53,\n \"acc_stderr\": 0.05016135580465919,\n \"acc_norm\": 0.53,\n \"acc_norm_stderr\": 0.05016135580465919\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.5735849056603773,\n \"acc_stderr\": 0.03043779434298305,\n \"acc_norm\": 0.5735849056603773,\n \"acc_norm_stderr\": 0.03043779434298305\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6458333333333334,\n \"acc_stderr\": 0.039994111357535424,\n \"acc_norm\": 0.6458333333333334,\n \"acc_norm_stderr\": 0.039994111357535424\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.46,\n 
\"acc_stderr\": 0.05009082659620333,\n \"acc_norm\": 0.46,\n \"acc_norm_stderr\": 0.05009082659620333\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.44,\n \"acc_stderr\": 0.0498887651569859,\n \"acc_norm\": 0.44,\n \"acc_norm_stderr\": 0.0498887651569859\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6069364161849711,\n \"acc_stderr\": 0.0372424959581773,\n \"acc_norm\": 0.6069364161849711,\n \"acc_norm_stderr\": 0.0372424959581773\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.38235294117647056,\n \"acc_stderr\": 0.04835503696107224,\n \"acc_norm\": 0.38235294117647056,\n \"acc_norm_stderr\": 0.04835503696107224\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.68,\n \"acc_stderr\": 0.046882617226215034,\n \"acc_norm\": 0.68,\n \"acc_norm_stderr\": 0.046882617226215034\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.4340425531914894,\n \"acc_stderr\": 0.032400380867927465,\n \"acc_norm\": 0.4340425531914894,\n \"acc_norm_stderr\": 0.032400380867927465\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.37719298245614036,\n \"acc_stderr\": 0.04559522141958217,\n \"acc_norm\": 0.37719298245614036,\n \"acc_norm_stderr\": 0.04559522141958217\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.43448275862068964,\n \"acc_stderr\": 0.041307408795554966,\n \"acc_norm\": 0.43448275862068964,\n \"acc_norm_stderr\": 0.041307408795554966\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.30687830687830686,\n \"acc_stderr\": 0.02375292871211213,\n \"acc_norm\": 0.30687830687830686,\n \"acc_norm_stderr\": 0.02375292871211213\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.36507936507936506,\n \"acc_stderr\": 0.04306241259127153,\n \"acc_norm\": 0.36507936507936506,\n \"acc_norm_stderr\": 0.04306241259127153\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.37,\n \"acc_stderr\": 0.04852365870939099,\n \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.04852365870939099\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.6548387096774193,\n \"acc_stderr\": 0.02704574657353433,\n \"acc_norm\": 0.6548387096774193,\n \"acc_norm_stderr\": 0.02704574657353433\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.43349753694581283,\n \"acc_stderr\": 0.03486731727419872,\n \"acc_norm\": 0.43349753694581283,\n \"acc_norm_stderr\": 0.03486731727419872\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.050251890762960605,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.050251890762960605\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.5757575757575758,\n \"acc_stderr\": 0.03859268142070264,\n \"acc_norm\": 0.5757575757575758,\n \"acc_norm_stderr\": 0.03859268142070264\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.6717171717171717,\n \"acc_stderr\": 0.03345678422756776,\n \"acc_norm\": 0.6717171717171717,\n \"acc_norm_stderr\": 0.03345678422756776\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.772020725388601,\n \"acc_stderr\": 0.030276909945178263,\n \"acc_norm\": 0.772020725388601,\n \"acc_norm_stderr\": 0.030276909945178263\n },\n 
\"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.5743589743589743,\n \"acc_stderr\": 0.02506909438729653,\n \"acc_norm\": 0.5743589743589743,\n \"acc_norm_stderr\": 0.02506909438729653\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.31851851851851853,\n \"acc_stderr\": 0.028406533090608456,\n \"acc_norm\": 0.31851851851851853,\n \"acc_norm_stderr\": 0.028406533090608456\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.542016806722689,\n \"acc_stderr\": 0.03236361111951941,\n \"acc_norm\": 0.542016806722689,\n \"acc_norm_stderr\": 0.03236361111951941\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3576158940397351,\n \"acc_stderr\": 0.03913453431177258,\n \"acc_norm\": 0.3576158940397351,\n \"acc_norm_stderr\": 0.03913453431177258\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.7486238532110092,\n \"acc_stderr\": 0.018599206360287415,\n \"acc_norm\": 0.7486238532110092,\n \"acc_norm_stderr\": 0.018599206360287415\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.49074074074074076,\n \"acc_stderr\": 0.034093869469927006,\n \"acc_norm\": 0.49074074074074076,\n \"acc_norm_stderr\": 0.034093869469927006\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.5980392156862745,\n \"acc_stderr\": 0.034411900234824655,\n \"acc_norm\": 0.5980392156862745,\n \"acc_norm_stderr\": 0.034411900234824655\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.6286919831223629,\n \"acc_stderr\": 0.0314506860074486,\n \"acc_norm\": 0.6286919831223629,\n \"acc_norm_stderr\": 0.0314506860074486\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.57847533632287,\n \"acc_stderr\": 0.03314190222110657,\n \"acc_norm\": 0.57847533632287,\n \"acc_norm_stderr\": 0.03314190222110657\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.549618320610687,\n \"acc_stderr\": 0.04363643698524779,\n \"acc_norm\": 0.549618320610687,\n \"acc_norm_stderr\": 0.04363643698524779\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.6694214876033058,\n \"acc_stderr\": 0.04294340845212094,\n \"acc_norm\": 0.6694214876033058,\n \"acc_norm_stderr\": 0.04294340845212094\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.6203703703703703,\n \"acc_stderr\": 0.04691521224077742,\n \"acc_norm\": 0.6203703703703703,\n \"acc_norm_stderr\": 0.04691521224077742\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.6441717791411042,\n \"acc_stderr\": 0.03761521380046734,\n \"acc_norm\": 0.6441717791411042,\n \"acc_norm_stderr\": 0.03761521380046734\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4107142857142857,\n \"acc_stderr\": 0.04669510663875191,\n \"acc_norm\": 0.4107142857142857,\n \"acc_norm_stderr\": 0.04669510663875191\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.6796116504854369,\n \"acc_stderr\": 0.04620284082280041,\n \"acc_norm\": 0.6796116504854369,\n \"acc_norm_stderr\": 0.04620284082280041\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.7863247863247863,\n \"acc_stderr\": 0.026853450377009154,\n \"acc_norm\": 0.7863247863247863,\n \"acc_norm_stderr\": 0.026853450377009154\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.56,\n \"acc_stderr\": 0.04988876515698589,\n \"acc_norm\": 0.56,\n \"acc_norm_stderr\": 0.04988876515698589\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 
0.7266922094508301,\n \"acc_stderr\": 0.01593668106262856,\n \"acc_norm\": 0.7266922094508301,\n \"acc_norm_stderr\": 0.01593668106262856\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.5924855491329479,\n \"acc_stderr\": 0.026454578146931494,\n \"acc_norm\": 0.5924855491329479,\n \"acc_norm_stderr\": 0.026454578146931494\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.2636871508379888,\n \"acc_stderr\": 0.01473692638376197,\n \"acc_norm\": 0.2636871508379888,\n \"acc_norm_stderr\": 0.01473692638376197\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.5588235294117647,\n \"acc_stderr\": 0.028431095444176643,\n \"acc_norm\": 0.5588235294117647,\n \"acc_norm_stderr\": 0.028431095444176643\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.5884244372990354,\n \"acc_stderr\": 0.02795048149440127,\n \"acc_norm\": 0.5884244372990354,\n \"acc_norm_stderr\": 0.02795048149440127\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.5617283950617284,\n \"acc_stderr\": 0.02760791408740047,\n \"acc_norm\": 0.5617283950617284,\n \"acc_norm_stderr\": 0.02760791408740047\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.4219858156028369,\n \"acc_stderr\": 0.029462189233370593,\n \"acc_norm\": 0.4219858156028369,\n \"acc_norm_stderr\": 0.029462189233370593\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.3820078226857888,\n \"acc_stderr\": 0.012409564470235562,\n \"acc_norm\": 0.3820078226857888,\n \"acc_norm_stderr\": 0.012409564470235562\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.5551470588235294,\n \"acc_stderr\": 0.03018753206032938,\n \"acc_norm\": 0.5551470588235294,\n \"acc_norm_stderr\": 0.03018753206032938\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.5506535947712419,\n \"acc_stderr\": 0.02012376652802727,\n \"acc_norm\": 0.5506535947712419,\n \"acc_norm_stderr\": 0.02012376652802727\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6454545454545455,\n \"acc_stderr\": 0.04582004841505417,\n \"acc_norm\": 0.6454545454545455,\n \"acc_norm_stderr\": 0.04582004841505417\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.6163265306122448,\n \"acc_stderr\": 0.031130880396235926,\n \"acc_norm\": 0.6163265306122448,\n \"acc_norm_stderr\": 0.031130880396235926\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.6119402985074627,\n \"acc_stderr\": 0.0344578996436275,\n \"acc_norm\": 0.6119402985074627,\n \"acc_norm_stderr\": 0.0344578996436275\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.4457831325301205,\n \"acc_stderr\": 0.03869543323472101,\n \"acc_norm\": 0.4457831325301205,\n \"acc_norm_stderr\": 0.03869543323472101\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.7368421052631579,\n \"acc_stderr\": 0.03377310252209205,\n \"acc_norm\": 0.7368421052631579,\n \"acc_norm_stderr\": 0.03377310252209205\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.30966952264381886,\n \"mc1_stderr\": 0.016185744355144915,\n \"mc2\": 0.4609603387456776,\n \"mc2_stderr\": 0.01568400425776764\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7474348855564326,\n \"acc_stderr\": 0.012211148449394105\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.13040181956027294,\n \"acc_stderr\": 0.009275630324554092\n 
}\n}\n```", "repo_url": "https://huggingface.co/CallComply/zephyr-7b-beta-128k", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_14T19_45_35.717294", "path": ["**/details_harness|arc:challenge|25_2024-01-14T19-45-35.717294.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-14T19-45-35.717294.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_14T19_45_35.717294", "path": ["**/details_harness|gsm8k|5_2024-01-14T19-45-35.717294.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-14T19-45-35.717294.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_14T19_45_35.717294", "path": ["**/details_harness|hellaswag|10_2024-01-14T19-45-35.717294.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-14T19-45-35.717294.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_14T19_45_35.717294", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T19-45-35.717294.parquet", 
"**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-14T19-45-35.717294.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T19-45-35.717294.parquet", 
"**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-14T19-45-35.717294.parquet", 
"**/details_harness|hendrycksTest-philosophy|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-14T19-45-35.717294.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-14T19-45-35.717294.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_14T19_45_35.717294", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T19-45-35.717294.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T19-45-35.717294.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_14T19_45_35.717294", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-14T19-45-35.717294.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-14T19-45-35.717294.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_14T19_45_35.717294", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-14T19-45-35.717294.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-14T19-45-35.717294.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_14T19_45_35.717294", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T19-45-35.717294.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T19-45-35.717294.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_14T19_45_35.717294", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T19-45-35.717294.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T19-45-35.717294.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_14T19_45_35.717294", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-14T19-45-35.717294.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-14T19-45-35.717294.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_14T19_45_35.717294", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T19-45-35.717294.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T19-45-35.717294.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_14T19_45_35.717294", "path": 
["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T19-45-35.717294.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T19-45-35.717294.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_14T19_45_35.717294", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T19-45-35.717294.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T19-45-35.717294.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_14T19_45_35.717294", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T19-45-35.717294.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T19-45-35.717294.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_14T19_45_35.717294", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-14T19-45-35.717294.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-14T19-45-35.717294.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_14T19_45_35.717294", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-14T19-45-35.717294.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-14T19-45-35.717294.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_14T19_45_35.717294", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T19-45-35.717294.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T19-45-35.717294.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_14T19_45_35.717294", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-14T19-45-35.717294.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-14T19-45-35.717294.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_14T19_45_35.717294", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T19-45-35.717294.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T19-45-35.717294.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_14T19_45_35.717294", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T19-45-35.717294.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T19-45-35.717294.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_01_14T19_45_35.717294", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T19-45-35.717294.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T19-45-35.717294.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_14T19_45_35.717294", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-14T19-45-35.717294.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-global_facts|5_2024-01-14T19-45-35.717294.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_14T19_45_35.717294", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T19-45-35.717294.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T19-45-35.717294.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_14T19_45_35.717294", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T19-45-35.717294.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T19-45-35.717294.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_14T19_45_35.717294", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T19-45-35.717294.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T19-45-35.717294.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_14T19_45_35.717294", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T19-45-35.717294.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T19-45-35.717294.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_14T19_45_35.717294", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T19-45-35.717294.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T19-45-35.717294.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_14T19_45_35.717294", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T19-45-35.717294.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T19-45-35.717294.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_14T19_45_35.717294", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T19-45-35.717294.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T19-45-35.717294.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_14T19_45_35.717294", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T19-45-35.717294.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T19-45-35.717294.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_14T19_45_35.717294", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T19-45-35.717294.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T19-45-35.717294.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_14T19_45_35.717294", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T19-45-35.717294.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T19-45-35.717294.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_14T19_45_35.717294", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T19-45-35.717294.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T19-45-35.717294.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_14T19_45_35.717294", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T19-45-35.717294.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T19-45-35.717294.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_14T19_45_35.717294", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T19-45-35.717294.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T19-45-35.717294.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_14T19_45_35.717294", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T19-45-35.717294.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T19-45-35.717294.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_14T19_45_35.717294", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-14T19-45-35.717294.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-14T19-45-35.717294.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_14T19_45_35.717294", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T19-45-35.717294.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T19-45-35.717294.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_14T19_45_35.717294", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-14T19-45-35.717294.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-14T19-45-35.717294.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_14T19_45_35.717294", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T19-45-35.717294.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T19-45-35.717294.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_14T19_45_35.717294", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T19-45-35.717294.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T19-45-35.717294.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_14T19_45_35.717294", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T19-45-35.717294.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T19-45-35.717294.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_01_14T19_45_35.717294", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-14T19-45-35.717294.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-14T19-45-35.717294.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_14T19_45_35.717294", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-14T19-45-35.717294.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-14T19-45-35.717294.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_14T19_45_35.717294", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T19-45-35.717294.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T19-45-35.717294.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_14T19_45_35.717294", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T19-45-35.717294.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T19-45-35.717294.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_14T19_45_35.717294", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T19-45-35.717294.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T19-45-35.717294.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_14T19_45_35.717294", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T19-45-35.717294.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T19-45-35.717294.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_01_14T19_45_35.717294", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-14T19-45-35.717294.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-14T19-45-35.717294.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_14T19_45_35.717294", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-14T19-45-35.717294.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-14T19-45-35.717294.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_14T19_45_35.717294", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-14T19-45-35.717294.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-14T19-45-35.717294.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_14T19_45_35.717294", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T19-45-35.717294.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T19-45-35.717294.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_14T19_45_35.717294", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2024-01-14T19-45-35.717294.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-14T19-45-35.717294.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_14T19_45_35.717294", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T19-45-35.717294.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T19-45-35.717294.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_14T19_45_35.717294", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T19-45-35.717294.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T19-45-35.717294.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_14T19_45_35.717294", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-14T19-45-35.717294.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-14T19-45-35.717294.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_14T19_45_35.717294", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-14T19-45-35.717294.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-14T19-45-35.717294.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_14T19_45_35.717294", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-14T19-45-35.717294.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-14T19-45-35.717294.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_14T19_45_35.717294", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T19-45-35.717294.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T19-45-35.717294.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_14T19_45_35.717294", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-14T19-45-35.717294.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-14T19-45-35.717294.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_14T19_45_35.717294", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-14T19-45-35.717294.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-14T19-45-35.717294.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_14T19_45_35.717294", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-14T19-45-35.717294.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-14T19-45-35.717294.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_14T19_45_35.717294", "path": ["**/details_harness|winogrande|5_2024-01-14T19-45-35.717294.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-14T19-45-35.717294.parquet"]}]}, {"config_name": "results", "data_files": [{"split": 
"2024_01_14T19_45_35.717294", "path": ["results_2024-01-14T19-45-35.717294.parquet"]}, {"split": "latest", "path": ["results_2024-01-14T19-45-35.717294.parquet"]}]}]} | 2024-01-14T19:48:19+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of CallComply/zephyr-7b-beta-128k
Dataset automatically created during the evaluation run of model CallComply/zephyr-7b-beta-128k on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
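```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_CallComply__zephyr-7b-beta-128k",
	"harness_winogrande_5",
	split="train")
```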
## Latest results
These are the latest results from run 2024-01-14T19:45:35.717294 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
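The aggregate scores from this run are shown below; the 57 per-subject MMLU ("hendrycksTest") entries are omitted here and can be read in full from the run metadata:

```python
{
    "all": {
        "acc": 0.5337384150834084,
        "acc_stderr": 0.034377622578911936,
        "acc_norm": 0.5411488270607204,
        "acc_norm_stderr": 0.03515985681109475,
        "mc1": 0.30966952264381886,
        "mc1_stderr": 0.016185744355144915,
        "mc2": 0.4609603387456776,
        "mc2_stderr": 0.01568400425776764
    },
    "harness|arc:challenge|25": {
        "acc": 0.5435153583617748,
        "acc_stderr": 0.01455594976049644,
        "acc_norm": 0.5827645051194539,
        "acc_norm_stderr": 0.014409825518403084
    },
    "harness|hellaswag|10": {
        "acc": 0.6016729735112527,
        "acc_stderr": 0.004885529674958333,
        "acc_norm": 0.8099980083648676,
        "acc_norm_stderr": 0.003915007231962104
    },
    "harness|truthfulqa:mc|0": {
        "mc1": 0.30966952264381886,
        "mc1_stderr": 0.016185744355144915,
        "mc2": 0.4609603387456776,
        "mc2_stderr": 0.01568400425776764
    },
    "harness|winogrande|5": {
        "acc": 0.7474348855564326,
        "acc_stderr": 0.012211148449394105
    },
    "harness|gsm8k|5": {
        "acc": 0.13040181956027294,
        "acc_stderr": 0.009275630324554092
    }
}
```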
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Evaluation run of CallComply/zephyr-7b-beta-128k\n\n\n\nDataset automatically created during the evaluation run of model CallComply/zephyr-7b-beta-128k on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-01-14T19:45:35.717294(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of CallComply/zephyr-7b-beta-128k\n\n\n\nDataset automatically created during the evaluation run of model CallComply/zephyr-7b-beta-128k on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-01-14T19:45:35.717294(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] |
b25ad59ffbaac297db33a4029c79c9c33177291a | Distributed under the Creative Commons BY-SA 4.0 license, respecting the ShareAlike terms of the [Spider Dataset](https://yale-lily.github.io/spider).
Code explanations and links for the model's checkpoints and datasets are on Github [mRAT-SQL](https://github.com/C4AI/gap-text2sql).
Here is the [Hugging Face collection](https://huggingface.co/collections/Marchanjo/mrat-sql-65a671743bb0e70b416561f6), where you can download the model's checkpoints and datasets; for a full understanding, it is better to go to Github [mRAT-SQL](https://github.com/C4AI/gap-text2sql).
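For example, this card's dataset can be fetched with `huggingface_hub` (the local directory name below is just an illustration):

```python
from huggingface_hub import snapshot_download

# Downloads this card's dataset; other checkpoints/datasets in the
# collection can be fetched the same way with their own repo ids.
snapshot_download(repo_id="Marchanjo/spider-FIT-en-enr-enb",
                  repo_type="dataset",
                  local_dir="spider-FIT-en-enr-enb")  # illustrative path
```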
# mRAT-SQL-FIT
## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention
Marcelo Archanjo Jose, Fabio Gagliardi Cozman
Long sequences of text are challenging in the context of transformers, due to quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (since techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of table and column names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy results from 0.718 to 0.736 in a validation dataset (Dev). Source code, evaluations, and checkpoints are available at: [mRAT-SQL](https://github.com/C4AI/gap-text2sql).
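As an illustration, here is a minimal sketch of the pruning step, assuming a simple lexical-overlap heuristic between question tokens and schema identifiers (the function name and heuristic are ours, not the paper's; the repository linked above holds the actual implementation):

```python
def prune_schema(question: str, schema: dict) -> dict:
    """Drop tables/columns whose names share no token with the question.

    A minimal sketch of the schema-pruning idea; the paper prunes names
    that are useless for the query of interest, so treat this overlap
    heuristic as an illustration, not the authors' implementation.
    """
    q_tokens = {tok.strip("?,.").lower() for tok in question.split()}
    pruned = {}
    for table, columns in schema.items():
        kept = [col for col in columns
                if set(col.lower().split("_")) & q_tokens]
        # Keep the table if its own name matches or any column survived;
        # a matching table with no matching columns keeps all of them.
        if set(table.lower().split("_")) & q_tokens or kept:
            pruned[table] = kept or columns
    return pruned

schema = {"singer": ["singer_id", "name", "age"],
          "concert": ["concert_id", "venue", "year"]}
# The pruned schema is what gets concatenated with the question into
# the (up to 512-token) transformer input.
print(prune_schema("What is the name and age of each singer?", schema))
# -> {'singer': ['singer_id', 'name', 'age']}
```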
[Paper published in Springer-Nature - International Journal of Information Technology](https://doi.org/10.1007/s41870-023-01342-3); [here is the SharedIt link](https://rdcu.be/dff19), and [here is the pre-print on arXiv](https://arxiv.org/abs/2306.14256).
# mRAT-SQL+GAP
## mRAT-SQL+GAP: A Portuguese Text-to-SQL Transformer
Marcelo Archanjo José, Fabio Gagliardi Cozman
The translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with original and translated training datasets together, even if a single target language is desired. This multilingual BART model fine-tuned with a double-size training dataset (English and Portuguese) achieved 83% of the baseline, making inferences for the Portuguese test dataset. This investigation can help other researchers to produce results in Machine Learning in a language different from English. Our multilingual-ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: [mRAT-SQL](https://github.com/C4AI/gap-text2sql).
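The mixed-language training setup can be pictured with a small data-preparation sketch, assuming hypothetical file names for the original and translated Spider training sets:

```python
import json

# Hypothetical file names: the original (English) Spider training set and
# its Portuguese translation are merged into one double-size training set,
# even when only Portuguese inference is the goal.
with open("train_spider_en.json", encoding="utf-8") as f_en, \
     open("train_spider_pt.json", encoding="utf-8") as f_pt:
    train = json.load(f_en) + json.load(f_pt)

with open("train_spider_en_pt.json", "w", encoding="utf-8") as f_out:
    json.dump(train, f_out, ensure_ascii=False, indent=2)
```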
BRACIS 2021: [paper published in Springer Lecture Notes in Computer Science](https://link.springer.com/chapter/10.1007%2F978-3-030-91699-2_35); [here is the pre-print on arXiv](https://arxiv.org/abs/2110.03546).
Based on: RAT-SQL+GAP: [Github](https://github.com/awslabs/gap-text2sql). Paper: [AAAI 2021 paper](https://arxiv.org/abs/2012.10309) | Marchanjo/spider-FIT-en-enr-enb | [
"license:cc-by-sa-4.0",
"arxiv:2306.14256",
"arxiv:2110.03546",
"arxiv:2012.10309",
"region:us"
] | 2024-01-14T19:52:47+00:00 | {"license": "cc-by-sa-4.0"} | 2024-01-16T12:36:55+00:00 | [
"2306.14256",
"2110.03546",
"2012.10309"
] | [] | TAGS
#license-cc-by-sa-4.0 #arxiv-2306.14256 #arxiv-2110.03546 #arxiv-2012.10309 #region-us
| Distributed under the Creative Commons BY-SA 4.0 license, respecting the ShareAlike terms of the Spider Dataset.
Code explanations and links for the model's checkpoints and datasets are on Github mRAT-SQL
Here is the Hugging Face collection, where you can download the model's checkpoints and datasets; for a full understanding, it is better to go to Github mRAT-SQL.
# mRAT-SQL-FIT
## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention
Marcelo Archanjo Jose, Fabio Gagliardi Cozman
Long sequences of text are challenging in the context of transformers, due to quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (since techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of table and column names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy results from 0.718 to 0.736 in a validation dataset (Dev). Source code, evaluations, and checkpoints are available at: mRAT-SQL.
Paper published in Springer-Nature - International Journal of Information Technology; here is the SharedIt link, and here is the pre-print on arXiv.
# mRAT-SQL+GAP
## mRAT-SQL+GAP: A Portuguese Text-to-SQL Transformer
Marcelo Archanjo José, Fabio Gagliardi Cozman
The translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with original and translated training datasets together, even if a single target language is desired. This multilingual BART model fine-tuned with a double-size training dataset (English and Portuguese) achieved 83% of the baseline, making inferences for the Portuguese test dataset. This investigation can help other researchers to produce results in Machine Learning in a language different from English. Our multilingual-ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: mRAT-SQL.
BRACIS 2021: paper published in Springer Lecture Notes in Computer Science; here is the pre-print on arXiv.
Based on: RAT-SQL+GAP: Github. Paper: AAAI 2021 paper | [
"# mRAT-SQL-FIT",
"## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention\nMarcelo Archanjo Jose, Fabio Gagliardi Cozman\n\nLong sequences of text are challenging in the context of transformers, due to quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (as techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of tables and columns names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy results from 0.718 to 0.736 in a validation dataset (Dev). Source code, evaluations, and checkpoints are available at: mRAT-SQL.\n\npaper published in Springer-Nature - International Journal of Information Technology, here the SharedIt link. here the pre-print in arXiv.",
"# mRAT-SQL+GAP",
"## mRAT-SQL+GAP:A Portuguese Text-to-SQL Transformer\nMarcelo Archanjo José, Fabio Gagliardi Cozman\n\nThe translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with original and translated training datasets together, even if a single target language is desired. This multilingual BART model fine-tuned with a double-size training dataset (English and Portuguese) achieved 83% of the baseline, making inferences for the Portuguese test dataset. This investigation can help other researchers to produce results in Machine Learning in a language different from English. Our multilingual ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: mRAT-SQL.\n\nBRACIS 2021: paper published in Springer Lecture Notes in Computer Science, here the pre-print in arXiv.\n\nBased on: RAT-SQL+GAP: Github. Paper: AAAI 2021 paper"
] | [
"TAGS\n#license-cc-by-sa-4.0 #arxiv-2306.14256 #arxiv-2110.03546 #arxiv-2012.10309 #region-us \n",
"# mRAT-SQL-FIT",
"## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention\nMarcelo Archanjo Jose, Fabio Gagliardi Cozman\n\nLong sequences of text are challenging in the context of transformers, due to quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (as techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of tables and columns names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy results from 0.718 to 0.736 in a validation dataset (Dev). Source code, evaluations, and checkpoints are available at: mRAT-SQL.\n\npaper published in Springer-Nature - International Journal of Information Technology, here the SharedIt link. here the pre-print in arXiv.",
"# mRAT-SQL+GAP",
"## mRAT-SQL+GAP:A Portuguese Text-to-SQL Transformer\nMarcelo Archanjo José, Fabio Gagliardi Cozman\n\nThe translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with original and translated training datasets together, even if a single target language is desired. This multilingual BART model fine-tuned with a double-size training dataset (English and Portuguese) achieved 83% of the baseline, making inferences for the Portuguese test dataset. This investigation can help other researchers to produce results in Machine Learning in a language different from English. Our multilingual ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: mRAT-SQL.\n\nBRACIS 2021: paper published in Springer Lecture Notes in Computer Science, here the pre-print in arXiv.\n\nBased on: RAT-SQL+GAP: Github. Paper: AAAI 2021 paper"
] |
db20b69f991431d206c37a391464900406b8805f | Distributed under the Creative Commons BY-SA 4.0 license, respecting the ShareAlike terms of the [Spider Dataset](https://yale-lily.github.io/spider).
Code explanations and links for the model checkpoints and datasets are on GitHub: [mRAT-SQL](https://github.com/C4AI/gap-text2sql).
Here is the [Hugging Face collection](https://huggingface.co/collections/Marchanjo/mrat-sql-65a671743bb0e70b416561f6); you can download the model checkpoints and datasets from it, but for a full understanding it is better to go to the GitHub repository [mRAT-SQL](https://github.com/C4AI/gap-text2sql).
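For readers who prefer scripting the download, here is a minimal sketch using `huggingface_hub` (fetching the whole repository snapshot, rather than individual files, is an illustrative choice of ours):

```python
# Minimal sketch: fetch this dataset repository locally with huggingface_hub.
# Downloading the full snapshot (rather than single files) is an illustrative choice.
from huggingface_hub import snapshot_download

local_path = snapshot_download(repo_id="Marchanjo/spider-en-enr-enb", repo_type="dataset")
print(local_path)  # local cache directory containing the repository files
```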
# mRAT-SQL-FIT
## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention
Marcelo Archanjo Jose, Fabio Gagliardi Cozman
Long sequences of text are challenging in the context of transformers, due to the quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (techniques usually take as input the question concatenated with the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of table and column names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy results from 0.718 to 0.736 in a validation dataset (Dev). Source code, evaluations, and checkpoints are available at: [mRAT-SQL](https://github.com/C4AI/gap-text2sql).
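As a rough, self-contained sketch of the schema-pruning idea (this is not the repository's actual implementation; the toy schema, the token-overlap heuristic, and the function name are our own assumptions):

```python
# Toy sketch of database schema pruning: drop tables/columns whose names do not
# overlap with the question, so the serialized input fits the 512-token budget.
# The heuristic and data layout are illustrative, not the paper's exact method.

def prune_schema(question: str, schema: dict) -> dict:
    tokens = {t.strip("?.,").lower() for t in question.split()}
    pruned = {}
    for table, columns in schema.items():
        kept = [c for c in columns if c.lower() in tokens]
        if kept or table.lower() in tokens:
            # keep the whole table if its own name matched the question
            pruned[table] = kept or columns
    return pruned

schema = {"singer": ["name", "age", "country"], "concert": ["year", "stadium_id"]}
print(prune_schema("What is the average age of all singers?", schema))
# -> {'singer': ['age']}  (the 'concert' table is pruned away)
```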
[Paper published in Springer-Nature - International Journal of Information Technology](https://doi.org/10.1007/s41870-023-01342-3); [here is the SharedIt link](https://rdcu.be/dff19) and [here is the pre-print in arXiv](https://arxiv.org/abs/2306.14256).
# mRAT-SQL+GAP
## mRAT-SQL+GAP: A Portuguese Text-to-SQL Transformer
Marcelo Archanjo José, Fabio Gagliardi Cozman
The translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with original and translated training datasets together, even if a single target language is desired. This multilingual BART model, fine-tuned with a double-size training dataset (English and Portuguese), achieved 83% of the baseline when making inferences on the Portuguese test dataset. This investigation can help other researchers produce Machine Learning results in a language other than English. Our multilingual-ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: [mRAT-SQL](https://github.com/C4AI/gap-text2sql).
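A minimal sketch of the double-size training setup described above, assuming the English and translated Spider training splits are available locally as JSON (the file names are placeholders, not the repository's actual layout):

```python
# Sketch: build the combined English + Portuguese training set before fine-tuning.
# The file names are placeholders for the original and translated Spider splits.
from datasets import load_dataset, concatenate_datasets

spider_en = load_dataset("json", data_files="spider_train_en.json", split="train")
spider_pt = load_dataset("json", data_files="spider_train_pt.json", split="train")

combined = concatenate_datasets([spider_en, spider_pt]).shuffle(seed=42)
print(len(combined))  # roughly twice the size of the original training set
```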
BRACIS 2021: [paper published in Springer Lecture Notes in Computer Science](https://link.springer.com/chapter/10.1007%2F978-3-030-91699-2_35); [here is the pre-print in arXiv](https://arxiv.org/abs/2110.03546).
Based on: RAT-SQL+GAP: [Github](https://github.com/awslabs/gap-text2sql). Paper: [AAAI 2021 paper](https://arxiv.org/abs/2012.10309) | Marchanjo/spider-en-enr-enb | [
"license:cc-by-sa-4.0",
"arxiv:2306.14256",
"arxiv:2110.03546",
"arxiv:2012.10309",
"region:us"
] | 2024-01-14T19:58:28+00:00 | {"license": "cc-by-sa-4.0"} | 2024-01-16T12:37:21+00:00 | [
"2306.14256",
"2110.03546",
"2012.10309"
] | [] | TAGS
#license-cc-by-sa-4.0 #arxiv-2306.14256 #arxiv-2110.03546 #arxiv-2012.10309 #region-us
c782d04050ba6885a9d1806192057e1cdecc9f80 |
# Dataset Card for Evaluation run of moreh/MoMo-70B-lora-1.8.5-DPO
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [moreh/MoMo-70B-lora-1.8.5-DPO](https://huggingface.co/moreh/MoMo-70B-lora-1.8.5-DPO) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_moreh__MoMo-70B-lora-1.8.5-DPO",
"harness_winogrande_5",
split="train")
```
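Beyond that single configuration, here is a short sketch (not part of the generated card) for discovering the available configurations and loading the most recent split of another task; the config and split names follow the pattern documented in this card's metadata:

```python
# Sketch: list the configuration names in this repo and load one per-task config.
from datasets import get_dataset_config_names, load_dataset

repo = "open-llm-leaderboard/details_moreh__MoMo-70B-lora-1.8.5-DPO"
print(get_dataset_config_names(repo))  # prints the available configuration names

gsm8k = load_dataset(repo, "harness_gsm8k_5", split="latest")
print(gsm8k[0])  # per-example prompt, model output, and metrics for GSM8K
```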
## Latest results
These are the [latest results from run 2024-01-14T20:00:36.558108](https://huggingface.co/datasets/open-llm-leaderboard/details_moreh__MoMo-70B-lora-1.8.5-DPO/blob/main/results_2024-01-14T20-00-36.558108.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each of them in the results and in the "latest" split for each eval):
```python
{
"all": {
"acc": 0.7718244861304054,
"acc_stderr": 0.02796487785418919,
"acc_norm": 0.7749239423331258,
"acc_norm_stderr": 0.0285082622909065,
"mc1": 0.48959608323133413,
"mc1_stderr": 0.017499711430249264,
"mc2": 0.6579360053724295,
"mc2_stderr": 0.014740925357615238
},
"harness|arc:challenge|25": {
"acc": 0.6638225255972696,
"acc_stderr": 0.013804855026205761,
"acc_norm": 0.6953924914675768,
"acc_norm_stderr": 0.013449522109932487
},
"harness|hellaswag|10": {
"acc": 0.6640111531567416,
"acc_stderr": 0.0047136966941316765,
"acc_norm": 0.8560047799243179,
"acc_norm_stderr": 0.00350367366880503
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.41,
"acc_stderr": 0.049431107042371025,
"acc_norm": 0.41,
"acc_norm_stderr": 0.049431107042371025
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.7111111111111111,
"acc_stderr": 0.03915450630414251,
"acc_norm": 0.7111111111111111,
"acc_norm_stderr": 0.03915450630414251
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.881578947368421,
"acc_stderr": 0.026293995855474928,
"acc_norm": 0.881578947368421,
"acc_norm_stderr": 0.026293995855474928
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.8,
"acc_stderr": 0.04020151261036844,
"acc_norm": 0.8,
"acc_norm_stderr": 0.04020151261036844
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.8301886792452831,
"acc_stderr": 0.02310839379984133,
"acc_norm": 0.8301886792452831,
"acc_norm_stderr": 0.02310839379984133
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.9236111111111112,
"acc_stderr": 0.02221220393834591,
"acc_norm": 0.9236111111111112,
"acc_norm_stderr": 0.02221220393834591
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.59,
"acc_stderr": 0.049431107042371025,
"acc_norm": 0.59,
"acc_norm_stderr": 0.049431107042371025
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.61,
"acc_stderr": 0.04902071300001975,
"acc_norm": 0.61,
"acc_norm_stderr": 0.04902071300001975
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.54,
"acc_stderr": 0.05009082659620332,
"acc_norm": 0.54,
"acc_norm_stderr": 0.05009082659620332
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.7745664739884393,
"acc_stderr": 0.031862098516411454,
"acc_norm": 0.7745664739884393,
"acc_norm_stderr": 0.031862098516411454
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.5980392156862745,
"acc_stderr": 0.04878608714466996,
"acc_norm": 0.5980392156862745,
"acc_norm_stderr": 0.04878608714466996
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.81,
"acc_stderr": 0.039427724440366234,
"acc_norm": 0.81,
"acc_norm_stderr": 0.039427724440366234
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.774468085106383,
"acc_stderr": 0.027321078417387536,
"acc_norm": 0.774468085106383,
"acc_norm_stderr": 0.027321078417387536
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.6228070175438597,
"acc_stderr": 0.04559522141958216,
"acc_norm": 0.6228070175438597,
"acc_norm_stderr": 0.04559522141958216
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.8068965517241379,
"acc_stderr": 0.032894455221273995,
"acc_norm": 0.8068965517241379,
"acc_norm_stderr": 0.032894455221273995
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.6825396825396826,
"acc_stderr": 0.023973861998992086,
"acc_norm": 0.6825396825396826,
"acc_norm_stderr": 0.023973861998992086
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.5555555555555556,
"acc_stderr": 0.04444444444444449,
"acc_norm": 0.5555555555555556,
"acc_norm_stderr": 0.04444444444444449
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.53,
"acc_stderr": 0.05016135580465919,
"acc_norm": 0.53,
"acc_norm_stderr": 0.05016135580465919
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.8903225806451613,
"acc_stderr": 0.017776778700485173,
"acc_norm": 0.8903225806451613,
"acc_norm_stderr": 0.017776778700485173
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.645320197044335,
"acc_stderr": 0.0336612448905145,
"acc_norm": 0.645320197044335,
"acc_norm_stderr": 0.0336612448905145
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.83,
"acc_stderr": 0.0377525168068637,
"acc_norm": 0.83,
"acc_norm_stderr": 0.0377525168068637
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.8545454545454545,
"acc_stderr": 0.027530196355066584,
"acc_norm": 0.8545454545454545,
"acc_norm_stderr": 0.027530196355066584
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.9343434343434344,
"acc_stderr": 0.017646526677233335,
"acc_norm": 0.9343434343434344,
"acc_norm_stderr": 0.017646526677233335
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9792746113989638,
"acc_stderr": 0.010281417011909046,
"acc_norm": 0.9792746113989638,
"acc_norm_stderr": 0.010281417011909046
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.8153846153846154,
"acc_stderr": 0.019671632413100288,
"acc_norm": 0.8153846153846154,
"acc_norm_stderr": 0.019671632413100288
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.48148148148148145,
"acc_stderr": 0.03046462171889533,
"acc_norm": 0.48148148148148145,
"acc_norm_stderr": 0.03046462171889533
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.8487394957983193,
"acc_stderr": 0.02327425589870794,
"acc_norm": 0.8487394957983193,
"acc_norm_stderr": 0.02327425589870794
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.5827814569536424,
"acc_stderr": 0.040261414976346104,
"acc_norm": 0.5827814569536424,
"acc_norm_stderr": 0.040261414976346104
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.9229357798165138,
"acc_stderr": 0.011434381698911096,
"acc_norm": 0.9229357798165138,
"acc_norm_stderr": 0.011434381698911096
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.7037037037037037,
"acc_stderr": 0.03114144782353605,
"acc_norm": 0.7037037037037037,
"acc_norm_stderr": 0.03114144782353605
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.9264705882352942,
"acc_stderr": 0.018318855850089678,
"acc_norm": 0.9264705882352942,
"acc_norm_stderr": 0.018318855850089678
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.9071729957805907,
"acc_stderr": 0.018889750550956715,
"acc_norm": 0.9071729957805907,
"acc_norm_stderr": 0.018889750550956715
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.8116591928251121,
"acc_stderr": 0.026241132996407256,
"acc_norm": 0.8116591928251121,
"acc_norm_stderr": 0.026241132996407256
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.8931297709923665,
"acc_stderr": 0.027096548624883733,
"acc_norm": 0.8931297709923665,
"acc_norm_stderr": 0.027096548624883733
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8842975206611571,
"acc_stderr": 0.029199802455622804,
"acc_norm": 0.8842975206611571,
"acc_norm_stderr": 0.029199802455622804
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.8611111111111112,
"acc_stderr": 0.0334327006286962,
"acc_norm": 0.8611111111111112,
"acc_norm_stderr": 0.0334327006286962
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.8343558282208589,
"acc_stderr": 0.029208296231259104,
"acc_norm": 0.8343558282208589,
"acc_norm_stderr": 0.029208296231259104
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.6160714285714286,
"acc_stderr": 0.04616143075028546,
"acc_norm": 0.6160714285714286,
"acc_norm_stderr": 0.04616143075028546
},
"harness|hendrycksTest-management|5": {
"acc": 0.8932038834951457,
"acc_stderr": 0.030581088928331362,
"acc_norm": 0.8932038834951457,
"acc_norm_stderr": 0.030581088928331362
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.9358974358974359,
"acc_stderr": 0.01604626163167314,
"acc_norm": 0.9358974358974359,
"acc_norm_stderr": 0.01604626163167314
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.85,
"acc_stderr": 0.035887028128263734,
"acc_norm": 0.85,
"acc_norm_stderr": 0.035887028128263734
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.9182630906768838,
"acc_stderr": 0.009796913952313168,
"acc_norm": 0.9182630906768838,
"acc_norm_stderr": 0.009796913952313168
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.838150289017341,
"acc_stderr": 0.019829299214925416,
"acc_norm": 0.838150289017341,
"acc_norm_stderr": 0.019829299214925416
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.7016759776536313,
"acc_stderr": 0.01530184004512928,
"acc_norm": 0.7016759776536313,
"acc_norm_stderr": 0.01530184004512928
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.8366013071895425,
"acc_stderr": 0.0211706230112135,
"acc_norm": 0.8366013071895425,
"acc_norm_stderr": 0.0211706230112135
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.8488745980707395,
"acc_stderr": 0.020342749744428634,
"acc_norm": 0.8488745980707395,
"acc_norm_stderr": 0.020342749744428634
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.8765432098765432,
"acc_stderr": 0.018303868806891787,
"acc_norm": 0.8765432098765432,
"acc_norm_stderr": 0.018303868806891787
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.6524822695035462,
"acc_stderr": 0.02840662780959095,
"acc_norm": 0.6524822695035462,
"acc_norm_stderr": 0.02840662780959095
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.6166883963494133,
"acc_stderr": 0.012417603662901188,
"acc_norm": 0.6166883963494133,
"acc_norm_stderr": 0.012417603662901188
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.8382352941176471,
"acc_stderr": 0.022368672562886747,
"acc_norm": 0.8382352941176471,
"acc_norm_stderr": 0.022368672562886747
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.8169934640522876,
"acc_stderr": 0.015643069911273337,
"acc_norm": 0.8169934640522876,
"acc_norm_stderr": 0.015643069911273337
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.7545454545454545,
"acc_stderr": 0.041220665028782855,
"acc_norm": 0.7545454545454545,
"acc_norm_stderr": 0.041220665028782855
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.8285714285714286,
"acc_stderr": 0.024127463462650153,
"acc_norm": 0.8285714285714286,
"acc_norm_stderr": 0.024127463462650153
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8905472636815921,
"acc_stderr": 0.022076326101824667,
"acc_norm": 0.8905472636815921,
"acc_norm_stderr": 0.022076326101824667
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.93,
"acc_stderr": 0.025643239997624294,
"acc_norm": 0.93,
"acc_norm_stderr": 0.025643239997624294
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5843373493975904,
"acc_stderr": 0.03836722176598053,
"acc_norm": 0.5843373493975904,
"acc_norm_stderr": 0.03836722176598053
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8830409356725146,
"acc_stderr": 0.024648068961366152,
"acc_norm": 0.8830409356725146,
"acc_norm_stderr": 0.024648068961366152
},
"harness|truthfulqa:mc|0": {
"mc1": 0.48959608323133413,
"mc1_stderr": 0.017499711430249264,
"mc2": 0.6579360053724295,
"mc2_stderr": 0.014740925357615238
},
"harness|winogrande|5": {
"acc": 0.8413575374901342,
"acc_stderr": 0.010267936243028228
},
"harness|gsm8k|5": {
"acc": 0.7429871114480667,
"acc_stderr": 0.01203678175742868
}
}
```
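As an illustration only (not part of the generated card), the MMLU aggregate can be re-derived from the per-task entries above; the sketch below assumes the JSON block has been saved verbatim to a local file named `results.json`:

```python
# Sketch: recompute a simple MMLU average from the results JSON shown above.
# Assumes the block has been saved verbatim to results.json.
import json

with open("results.json") as f:
    results = json.load(f)

mmlu = [v["acc"] for k, v in results.items() if k.startswith("harness|hendrycksTest-")]
print(f"{len(mmlu)} MMLU tasks, mean acc = {sum(mmlu) / len(mmlu):.4f}")
```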
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | open-llm-leaderboard/details_moreh__MoMo-70B-lora-1.8.5-DPO | [
"region:us"
] | 2024-01-14T20:02:44+00:00 | {"pretty_name": "Evaluation run of moreh/MoMo-70B-lora-1.8.5-DPO", "dataset_summary": "Dataset automatically created during the evaluation run of model [moreh/MoMo-70B-lora-1.8.5-DPO](https://huggingface.co/moreh/MoMo-70B-lora-1.8.5-DPO) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_moreh__MoMo-70B-lora-1.8.5-DPO\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-14T20:00:36.558108](https://huggingface.co/datasets/open-llm-leaderboard/details_moreh__MoMo-70B-lora-1.8.5-DPO/blob/main/results_2024-01-14T20-00-36.558108.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.7718244861304054,\n \"acc_stderr\": 0.02796487785418919,\n \"acc_norm\": 0.7749239423331258,\n \"acc_norm_stderr\": 0.0285082622909065,\n \"mc1\": 0.48959608323133413,\n \"mc1_stderr\": 0.017499711430249264,\n \"mc2\": 0.6579360053724295,\n \"mc2_stderr\": 0.014740925357615238\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6638225255972696,\n \"acc_stderr\": 0.013804855026205761,\n \"acc_norm\": 0.6953924914675768,\n \"acc_norm_stderr\": 0.013449522109932487\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6640111531567416,\n \"acc_stderr\": 0.0047136966941316765,\n \"acc_norm\": 0.8560047799243179,\n \"acc_norm_stderr\": 0.00350367366880503\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.41,\n \"acc_stderr\": 0.049431107042371025,\n \"acc_norm\": 0.41,\n \"acc_norm_stderr\": 0.049431107042371025\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.7111111111111111,\n \"acc_stderr\": 0.03915450630414251,\n \"acc_norm\": 0.7111111111111111,\n \"acc_norm_stderr\": 0.03915450630414251\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.881578947368421,\n \"acc_stderr\": 0.026293995855474928,\n \"acc_norm\": 0.881578947368421,\n \"acc_norm_stderr\": 0.026293995855474928\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.8,\n \"acc_stderr\": 0.04020151261036844,\n \"acc_norm\": 0.8,\n \"acc_norm_stderr\": 0.04020151261036844\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.8301886792452831,\n \"acc_stderr\": 0.02310839379984133,\n \"acc_norm\": 0.8301886792452831,\n \"acc_norm_stderr\": 0.02310839379984133\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.9236111111111112,\n \"acc_stderr\": 0.02221220393834591,\n \"acc_norm\": 0.9236111111111112,\n \"acc_norm_stderr\": 0.02221220393834591\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.59,\n 
\"acc_stderr\": 0.049431107042371025,\n \"acc_norm\": 0.59,\n \"acc_norm_stderr\": 0.049431107042371025\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.61,\n \"acc_stderr\": 0.04902071300001975,\n \"acc_norm\": 0.61,\n \"acc_norm_stderr\": 0.04902071300001975\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.54,\n \"acc_stderr\": 0.05009082659620332,\n \"acc_norm\": 0.54,\n \"acc_norm_stderr\": 0.05009082659620332\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.7745664739884393,\n \"acc_stderr\": 0.031862098516411454,\n \"acc_norm\": 0.7745664739884393,\n \"acc_norm_stderr\": 0.031862098516411454\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.5980392156862745,\n \"acc_stderr\": 0.04878608714466996,\n \"acc_norm\": 0.5980392156862745,\n \"acc_norm_stderr\": 0.04878608714466996\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.81,\n \"acc_stderr\": 0.039427724440366234,\n \"acc_norm\": 0.81,\n \"acc_norm_stderr\": 0.039427724440366234\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.774468085106383,\n \"acc_stderr\": 0.027321078417387536,\n \"acc_norm\": 0.774468085106383,\n \"acc_norm_stderr\": 0.027321078417387536\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.6228070175438597,\n \"acc_stderr\": 0.04559522141958216,\n \"acc_norm\": 0.6228070175438597,\n \"acc_norm_stderr\": 0.04559522141958216\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.8068965517241379,\n \"acc_stderr\": 0.032894455221273995,\n \"acc_norm\": 0.8068965517241379,\n \"acc_norm_stderr\": 0.032894455221273995\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.6825396825396826,\n \"acc_stderr\": 0.023973861998992086,\n \"acc_norm\": 0.6825396825396826,\n \"acc_norm_stderr\": 0.023973861998992086\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.5555555555555556,\n \"acc_stderr\": 0.04444444444444449,\n \"acc_norm\": 0.5555555555555556,\n \"acc_norm_stderr\": 0.04444444444444449\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.53,\n \"acc_stderr\": 0.05016135580465919,\n \"acc_norm\": 0.53,\n \"acc_norm_stderr\": 0.05016135580465919\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.8903225806451613,\n \"acc_stderr\": 0.017776778700485173,\n \"acc_norm\": 0.8903225806451613,\n \"acc_norm_stderr\": 0.017776778700485173\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.645320197044335,\n \"acc_stderr\": 0.0336612448905145,\n \"acc_norm\": 0.645320197044335,\n \"acc_norm_stderr\": 0.0336612448905145\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.83,\n \"acc_stderr\": 0.0377525168068637,\n \"acc_norm\": 0.83,\n \"acc_norm_stderr\": 0.0377525168068637\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.8545454545454545,\n \"acc_stderr\": 0.027530196355066584,\n \"acc_norm\": 0.8545454545454545,\n \"acc_norm_stderr\": 0.027530196355066584\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.9343434343434344,\n \"acc_stderr\": 0.017646526677233335,\n \"acc_norm\": 0.9343434343434344,\n \"acc_norm_stderr\": 0.017646526677233335\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.9792746113989638,\n \"acc_stderr\": 0.010281417011909046,\n \"acc_norm\": 0.9792746113989638,\n \"acc_norm_stderr\": 0.010281417011909046\n },\n 
\"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.8153846153846154,\n \"acc_stderr\": 0.019671632413100288,\n \"acc_norm\": 0.8153846153846154,\n \"acc_norm_stderr\": 0.019671632413100288\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.48148148148148145,\n \"acc_stderr\": 0.03046462171889533,\n \"acc_norm\": 0.48148148148148145,\n \"acc_norm_stderr\": 0.03046462171889533\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.8487394957983193,\n \"acc_stderr\": 0.02327425589870794,\n \"acc_norm\": 0.8487394957983193,\n \"acc_norm_stderr\": 0.02327425589870794\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.5827814569536424,\n \"acc_stderr\": 0.040261414976346104,\n \"acc_norm\": 0.5827814569536424,\n \"acc_norm_stderr\": 0.040261414976346104\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.9229357798165138,\n \"acc_stderr\": 0.011434381698911096,\n \"acc_norm\": 0.9229357798165138,\n \"acc_norm_stderr\": 0.011434381698911096\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.7037037037037037,\n \"acc_stderr\": 0.03114144782353605,\n \"acc_norm\": 0.7037037037037037,\n \"acc_norm_stderr\": 0.03114144782353605\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.9264705882352942,\n \"acc_stderr\": 0.018318855850089678,\n \"acc_norm\": 0.9264705882352942,\n \"acc_norm_stderr\": 0.018318855850089678\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.9071729957805907,\n \"acc_stderr\": 0.018889750550956715,\n \"acc_norm\": 0.9071729957805907,\n \"acc_norm_stderr\": 0.018889750550956715\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.8116591928251121,\n \"acc_stderr\": 0.026241132996407256,\n \"acc_norm\": 0.8116591928251121,\n \"acc_norm_stderr\": 0.026241132996407256\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.8931297709923665,\n \"acc_stderr\": 0.027096548624883733,\n \"acc_norm\": 0.8931297709923665,\n \"acc_norm_stderr\": 0.027096548624883733\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.8842975206611571,\n \"acc_stderr\": 0.029199802455622804,\n \"acc_norm\": 0.8842975206611571,\n \"acc_norm_stderr\": 0.029199802455622804\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8611111111111112,\n \"acc_stderr\": 0.0334327006286962,\n \"acc_norm\": 0.8611111111111112,\n \"acc_norm_stderr\": 0.0334327006286962\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.8343558282208589,\n \"acc_stderr\": 0.029208296231259104,\n \"acc_norm\": 0.8343558282208589,\n \"acc_norm_stderr\": 0.029208296231259104\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.6160714285714286,\n \"acc_stderr\": 0.04616143075028546,\n \"acc_norm\": 0.6160714285714286,\n \"acc_norm_stderr\": 0.04616143075028546\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.8932038834951457,\n \"acc_stderr\": 0.030581088928331362,\n \"acc_norm\": 0.8932038834951457,\n \"acc_norm_stderr\": 0.030581088928331362\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.9358974358974359,\n \"acc_stderr\": 0.01604626163167314,\n \"acc_norm\": 0.9358974358974359,\n \"acc_norm_stderr\": 0.01604626163167314\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.85,\n \"acc_stderr\": 0.035887028128263734,\n \"acc_norm\": 0.85,\n \"acc_norm_stderr\": 0.035887028128263734\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 
0.9182630906768838,\n \"acc_stderr\": 0.009796913952313168,\n \"acc_norm\": 0.9182630906768838,\n \"acc_norm_stderr\": 0.009796913952313168\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.838150289017341,\n \"acc_stderr\": 0.019829299214925416,\n \"acc_norm\": 0.838150289017341,\n \"acc_norm_stderr\": 0.019829299214925416\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.7016759776536313,\n \"acc_stderr\": 0.01530184004512928,\n \"acc_norm\": 0.7016759776536313,\n \"acc_norm_stderr\": 0.01530184004512928\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.8366013071895425,\n \"acc_stderr\": 0.0211706230112135,\n \"acc_norm\": 0.8366013071895425,\n \"acc_norm_stderr\": 0.0211706230112135\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.8488745980707395,\n \"acc_stderr\": 0.020342749744428634,\n \"acc_norm\": 0.8488745980707395,\n \"acc_norm_stderr\": 0.020342749744428634\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.8765432098765432,\n \"acc_stderr\": 0.018303868806891787,\n \"acc_norm\": 0.8765432098765432,\n \"acc_norm_stderr\": 0.018303868806891787\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.6524822695035462,\n \"acc_stderr\": 0.02840662780959095,\n \"acc_norm\": 0.6524822695035462,\n \"acc_norm_stderr\": 0.02840662780959095\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.6166883963494133,\n \"acc_stderr\": 0.012417603662901188,\n \"acc_norm\": 0.6166883963494133,\n \"acc_norm_stderr\": 0.012417603662901188\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.8382352941176471,\n \"acc_stderr\": 0.022368672562886747,\n \"acc_norm\": 0.8382352941176471,\n \"acc_norm_stderr\": 0.022368672562886747\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.8169934640522876,\n \"acc_stderr\": 0.015643069911273337,\n \"acc_norm\": 0.8169934640522876,\n \"acc_norm_stderr\": 0.015643069911273337\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7545454545454545,\n \"acc_stderr\": 0.041220665028782855,\n \"acc_norm\": 0.7545454545454545,\n \"acc_norm_stderr\": 0.041220665028782855\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.8285714285714286,\n \"acc_stderr\": 0.024127463462650153,\n \"acc_norm\": 0.8285714285714286,\n \"acc_norm_stderr\": 0.024127463462650153\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8905472636815921,\n \"acc_stderr\": 0.022076326101824667,\n \"acc_norm\": 0.8905472636815921,\n \"acc_norm_stderr\": 0.022076326101824667\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.93,\n \"acc_stderr\": 0.025643239997624294,\n \"acc_norm\": 0.93,\n \"acc_norm_stderr\": 0.025643239997624294\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5843373493975904,\n \"acc_stderr\": 0.03836722176598053,\n \"acc_norm\": 0.5843373493975904,\n \"acc_norm_stderr\": 0.03836722176598053\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8830409356725146,\n \"acc_stderr\": 0.024648068961366152,\n \"acc_norm\": 0.8830409356725146,\n \"acc_norm_stderr\": 0.024648068961366152\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.48959608323133413,\n \"mc1_stderr\": 0.017499711430249264,\n \"mc2\": 0.6579360053724295,\n \"mc2_stderr\": 0.014740925357615238\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8413575374901342,\n \"acc_stderr\": 0.010267936243028228\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.7429871114480667,\n \"acc_stderr\": 
0.01203678175742868\n }\n}\n```", "repo_url": "https://huggingface.co/moreh/MoMo-70B-lora-1.8.5-DPO", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_14T20_00_36.558108", "path": ["**/details_harness|arc:challenge|25_2024-01-14T20-00-36.558108.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-14T20-00-36.558108.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_14T20_00_36.558108", "path": ["**/details_harness|gsm8k|5_2024-01-14T20-00-36.558108.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-14T20-00-36.558108.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_14T20_00_36.558108", "path": ["**/details_harness|hellaswag|10_2024-01-14T20-00-36.558108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-14T20-00-36.558108.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_14T20_00_36.558108", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T20-00-36.558108.parquet", 
"**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-14T20-00-36.558108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T20-00-36.558108.parquet", 
"**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-14T20-00-36.558108.parquet", 
"**/details_harness|hendrycksTest-philosophy|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-14T20-00-36.558108.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-14T20-00-36.558108.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_14T20_00_36.558108", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T20-00-36.558108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T20-00-36.558108.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_14T20_00_36.558108", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-14T20-00-36.558108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-14T20-00-36.558108.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_14T20_00_36.558108", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-14T20-00-36.558108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-14T20-00-36.558108.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_14T20_00_36.558108", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T20-00-36.558108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T20-00-36.558108.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_14T20_00_36.558108", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T20-00-36.558108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T20-00-36.558108.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_14T20_00_36.558108", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-14T20-00-36.558108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-14T20-00-36.558108.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_14T20_00_36.558108", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T20-00-36.558108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T20-00-36.558108.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_14T20_00_36.558108", "path": 
["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T20-00-36.558108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T20-00-36.558108.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_14T20_00_36.558108", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T20-00-36.558108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T20-00-36.558108.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_14T20_00_36.558108", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T20-00-36.558108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T20-00-36.558108.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_14T20_00_36.558108", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-14T20-00-36.558108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-14T20-00-36.558108.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_14T20_00_36.558108", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-14T20-00-36.558108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-14T20-00-36.558108.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_14T20_00_36.558108", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T20-00-36.558108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T20-00-36.558108.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_14T20_00_36.558108", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-14T20-00-36.558108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-14T20-00-36.558108.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_14T20_00_36.558108", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T20-00-36.558108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T20-00-36.558108.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_14T20_00_36.558108", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T20-00-36.558108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T20-00-36.558108.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_01_14T20_00_36.558108", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T20-00-36.558108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T20-00-36.558108.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_14T20_00_36.558108", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-14T20-00-36.558108.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-global_facts|5_2024-01-14T20-00-36.558108.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_14T20_00_36.558108", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T20-00-36.558108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T20-00-36.558108.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_14T20_00_36.558108", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T20-00-36.558108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T20-00-36.558108.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_14T20_00_36.558108", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T20-00-36.558108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T20-00-36.558108.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_14T20_00_36.558108", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T20-00-36.558108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T20-00-36.558108.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_14T20_00_36.558108", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T20-00-36.558108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T20-00-36.558108.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_14T20_00_36.558108", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T20-00-36.558108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T20-00-36.558108.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_14T20_00_36.558108", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T20-00-36.558108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T20-00-36.558108.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_14T20_00_36.558108", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T20-00-36.558108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T20-00-36.558108.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_14T20_00_36.558108", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T20-00-36.558108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T20-00-36.558108.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_14T20_00_36.558108", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T20-00-36.558108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T20-00-36.558108.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_14T20_00_36.558108", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T20-00-36.558108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T20-00-36.558108.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_14T20_00_36.558108", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T20-00-36.558108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T20-00-36.558108.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_14T20_00_36.558108", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T20-00-36.558108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T20-00-36.558108.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_14T20_00_36.558108", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T20-00-36.558108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T20-00-36.558108.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_14T20_00_36.558108", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-14T20-00-36.558108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-14T20-00-36.558108.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_14T20_00_36.558108", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T20-00-36.558108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T20-00-36.558108.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_14T20_00_36.558108", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-14T20-00-36.558108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-14T20-00-36.558108.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_14T20_00_36.558108", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T20-00-36.558108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T20-00-36.558108.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_14T20_00_36.558108", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T20-00-36.558108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T20-00-36.558108.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_14T20_00_36.558108", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T20-00-36.558108.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T20-00-36.558108.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_01_14T20_00_36.558108", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-14T20-00-36.558108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-14T20-00-36.558108.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_14T20_00_36.558108", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-14T20-00-36.558108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-14T20-00-36.558108.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_14T20_00_36.558108", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T20-00-36.558108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T20-00-36.558108.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_14T20_00_36.558108", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T20-00-36.558108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T20-00-36.558108.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_14T20_00_36.558108", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T20-00-36.558108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T20-00-36.558108.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_14T20_00_36.558108", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T20-00-36.558108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T20-00-36.558108.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_01_14T20_00_36.558108", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-14T20-00-36.558108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-14T20-00-36.558108.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_14T20_00_36.558108", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-14T20-00-36.558108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-14T20-00-36.558108.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_14T20_00_36.558108", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-14T20-00-36.558108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-14T20-00-36.558108.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_14T20_00_36.558108", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T20-00-36.558108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T20-00-36.558108.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_14T20_00_36.558108", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2024-01-14T20-00-36.558108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-14T20-00-36.558108.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_14T20_00_36.558108", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T20-00-36.558108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T20-00-36.558108.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_14T20_00_36.558108", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T20-00-36.558108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T20-00-36.558108.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_14T20_00_36.558108", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-14T20-00-36.558108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-14T20-00-36.558108.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_14T20_00_36.558108", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-14T20-00-36.558108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-14T20-00-36.558108.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_14T20_00_36.558108", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-14T20-00-36.558108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-14T20-00-36.558108.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_14T20_00_36.558108", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T20-00-36.558108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T20-00-36.558108.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_14T20_00_36.558108", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-14T20-00-36.558108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-14T20-00-36.558108.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_14T20_00_36.558108", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-14T20-00-36.558108.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-14T20-00-36.558108.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_14T20_00_36.558108", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-14T20-00-36.558108.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-14T20-00-36.558108.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_14T20_00_36.558108", "path": ["**/details_harness|winogrande|5_2024-01-14T20-00-36.558108.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-14T20-00-36.558108.parquet"]}]}, {"config_name": "results", "data_files": [{"split": 
"2024_01_14T20_00_36.558108", "path": ["results_2024-01-14T20-00-36.558108.parquet"]}, {"split": "latest", "path": ["results_2024-01-14T20-00-36.558108.parquet"]}]}]} | 2024-01-14T20:03:05+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of moreh/MoMo-70B-lora-1.8.5-DPO
Dataset automatically created during the evaluation run of model moreh/MoMo-70B-lora-1.8.5-DPO on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
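A minimal sketch of that call, assuming the repository follows the `open-llm-leaderboard/details_<org>__<model>` naming convention used by the other evaluation-run datasets in this collection (the repository id below is inferred, not quoted from the source):

```python
from datasets import load_dataset

# The repository id is an assumption based on the leaderboard naming
# convention; "harness_winogrande_5" is one of the 63 task configurations.
data = load_dataset("open-llm-leaderboard/details_moreh__MoMo-70B-lora-1.8.5-DPO",
	"harness_winogrande_5",
	split="train")
```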
## Latest results
These are the latest results from run 2024-01-14T20:00:36.558108 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Evaluation run of moreh/MoMo-70B-lora-1.8.5-DPO\n\n\n\nDataset automatically created during the evaluation run of model moreh/MoMo-70B-lora-1.8.5-DPO on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-01-14T20:00:36.558108(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of moreh/MoMo-70B-lora-1.8.5-DPO\n\n\n\nDataset automatically created during the evaluation run of model moreh/MoMo-70B-lora-1.8.5-DPO on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-01-14T20:00:36.558108(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] |
d61a05cd1ad7c1f14078dd4e7bcc93257747f4c4 | # Dataset Card for "DoctorKelp"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | KeynesYouDigIt/DoctorKelp | [
"region:us"
] | 2024-01-14T20:18:05+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "test_satellite", "1": "train_kelp", "2": "train_satellite"}}}}], "splits": [{"name": "train", "num_bytes": 28827196275.44, "num_examples": 22540}, {"name": "test", "num_bytes": 3643649767.064, "num_examples": 2852}], "download_size": 18049706797, "dataset_size": 32470846042.503998}} | 2024-01-14T20:44:29+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "DoctorKelp"
More Information needed | [
"# Dataset Card for \"DoctorKelp\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"DoctorKelp\"\n\nMore Information needed"
] |
c147d179b4796e0f78993d80160b5195b6bcb035 |
# Dataset of mai/マイ/마이 (Touhou)
This is the dataset of mai/マイ/마이 (Touhou), containing 159 images and their tags.
The core tags of this character are `blue_hair, bow, blue_eyes, hair_bow, short_hair, wings, ribbon, angel_wings`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 159 | 143.99 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mai_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 159 | 95.22 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mai_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 291 | 177.63 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mai_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 159 | 131.44 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mai_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 291 | 231.82 MiB | [Download](https://huggingface.co/datasets/CyberHarem/mai_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/mai_touhou',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
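The preprocessed IMG+TXT packages from the table above can be fetched the same way. A minimal sketch for the 800px package (the local directory name is arbitrary):

```python
import os
import zipfile

from huggingface_hub import hf_hub_download

# download one of the preprocessed packages listed in the table above
zip_file = hf_hub_download(
    repo_id='CyberHarem/mai_touhou',
    repo_type='dataset',
    filename='dataset-800.zip',
)

# extract the image/tag-text pairs to a local directory
dataset_dir = 'dataset_800'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)
```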
## List of Clusters
List of tag clustering results; maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 5 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, dress, smile, solo, purple_eyes |
| 1 | 6 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, dress, solo |
| 2 | 22 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, puffy_short_sleeves, white_wings, feathered_wings, solo, white_dress, white_bow, bangs, buttons, looking_at_viewer, closed_mouth, breasts, black_ribbon, smile, frilled_sleeves, black_sash, blush |
| 3 | 12 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 2girls, blonde_hair, dress, blush, hat, smile |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | dress | smile | solo | purple_eyes | puffy_short_sleeves | white_wings | feathered_wings | white_dress | white_bow | bangs | buttons | looking_at_viewer | closed_mouth | breasts | black_ribbon | frilled_sleeves | black_sash | blush | 2girls | blonde_hair | hat |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:--------|:--------|:-------|:--------------|:----------------------|:--------------|:------------------|:--------------|:------------|:--------|:----------|:--------------------|:---------------|:----------|:---------------|:------------------|:-------------|:--------|:---------|:--------------|:------|
| 0 | 5 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | | | | | | | | | | | | | | | | | |
| 1 | 6 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | | X | | | | | | | | | | | | | | | | | | |
| 2 | 22 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | | X | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | |
| 3 | 12 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | | X | X | | | | | | | | | | | | | | | | X | X | X | X |
| CyberHarem/mai_touhou | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2024-01-14T20:19:33+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2024-01-14T21:00:36+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of mai/マイ/마이 (Touhou)
=============================
This is the dataset of mai/マイ/마이 (Touhou), containing 159 images and their tags.
The core tags of this character are 'blue\_hair, bow, blue\_eyes, hair\_bow, short\_hair, wings, ribbon, angel\_wings', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by DeepGHS Team (huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering results; maybe some outfits can be mined here.
### Raw Text Version
### Table Version
| [
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] | [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
bc95b15e7fbcc7b74bae067aa6cc3638e94207f0 | # Dataset Card for "distilabel-intel-orca-dpo-pairs-binarized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | floleuerer/distilabel-intel-orca-dpo-pairs-binarized | [
"region:us"
] | 2024-01-14T20:28:42+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "chosen", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "rejected", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 24252252.089665655, "num_examples": 5625}, {"name": "test", "num_bytes": 1280518.9103343466, "num_examples": 297}], "download_size": 13698335, "dataset_size": 25532771.0}} | 2024-01-14T20:31:36+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "distilabel-intel-orca-dpo-pairs-binarized"
More Information needed | [
"# Dataset Card for \"distilabel-intel-orca-dpo-pairs-binarized\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"distilabel-intel-orca-dpo-pairs-binarized\"\n\nMore Information needed"
] |
1cf34e2e4f7e56d0fb8279d7feb6ba958e063b79 |
This dataset contains nearly 18,000 European Member of Parliament (MEP) speeches between 2019 and 2023.
The speeches are from Italian, German, French and Belgian MEPs.
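As a quick-start sketch, the speeches should be loadable with the standard `datasets` API (the split name and column layout below are assumptions, since the card does not document a schema):

```python
from datasets import load_dataset

# "train" is an assumed split name; inspect ds.column_names for the
# actual schema, which is not documented on this card.
ds = load_dataset("misclassified/meps_speeches", split="train")
print(ds.column_names)
print(ds[0])
```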
All the speeches were gently scraped from the European Parliament website using this code: https://github.com/misclassified/meps-text-mining | misclassified/meps_speeches | [
"license:apache-2.0",
"region:us"
] | 2024-01-14T20:32:39+00:00 | {"license": "apache-2.0"} | 2024-01-14T20:45:53+00:00 | [] | [] | TAGS
#license-apache-2.0 #region-us
|
This dataset contains nearly 18,000 European Member of Parliament (MEP) speeches between 2019 and 2023.
The speeches are from Italian, German, French and Belgian MEPs.
All the speeches were gently scraped from the European Parliament website using this code: URL | [] | [
"TAGS\n#license-apache-2.0 #region-us \n"
] |
881b64d8ff972c225ee6c1dcd8897f15b82db21e |
# Dataset Card for Evaluation run of Jaume/openchat-3.5-0106-mod-gpt5
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [Jaume/openchat-3.5-0106-mod-gpt5](https://huggingface.co/Jaume/openchat-3.5-0106-mod-gpt5) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Jaume__openchat-3.5-0106-mod-gpt5",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2024-01-14T21:01:35.974498](https://huggingface.co/datasets/open-llm-leaderboard/details_Jaume__openchat-3.5-0106-mod-gpt5/blob/main/results_2024-01-14T21-01-35.974498.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6528578653707416,
"acc_stderr": 0.031849870154313474,
"acc_norm": 0.6535559561419437,
"acc_norm_stderr": 0.03250454817189663,
"mc1": 0.35862913096695226,
"mc1_stderr": 0.016789289499502022,
"mc2": 0.5189602568049447,
"mc2_stderr": 0.015303685990455876
},
"harness|arc:challenge|25": {
"acc": 0.621160409556314,
"acc_stderr": 0.014175915490000324,
"acc_norm": 0.6604095563139932,
"acc_norm_stderr": 0.01383903976282017
},
"harness|hellaswag|10": {
"acc": 0.6338378809002191,
"acc_stderr": 0.0048076995399734075,
"acc_norm": 0.8293168691495718,
"acc_norm_stderr": 0.0037546293132751625
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.35,
"acc_stderr": 0.0479372485441102,
"acc_norm": 0.35,
"acc_norm_stderr": 0.0479372485441102
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6444444444444445,
"acc_stderr": 0.04135176749720385,
"acc_norm": 0.6444444444444445,
"acc_norm_stderr": 0.04135176749720385
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6907894736842105,
"acc_stderr": 0.037610708698674805,
"acc_norm": 0.6907894736842105,
"acc_norm_stderr": 0.037610708698674805
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.65,
"acc_stderr": 0.0479372485441102,
"acc_norm": 0.65,
"acc_norm_stderr": 0.0479372485441102
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7094339622641509,
"acc_stderr": 0.02794321998933714,
"acc_norm": 0.7094339622641509,
"acc_norm_stderr": 0.02794321998933714
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7708333333333334,
"acc_stderr": 0.03514697467862388,
"acc_norm": 0.7708333333333334,
"acc_norm_stderr": 0.03514697467862388
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.48,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.48,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.57,
"acc_stderr": 0.049756985195624284,
"acc_norm": 0.57,
"acc_norm_stderr": 0.049756985195624284
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.38,
"acc_stderr": 0.04878317312145634,
"acc_norm": 0.38,
"acc_norm_stderr": 0.04878317312145634
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6878612716763006,
"acc_stderr": 0.03533133389323657,
"acc_norm": 0.6878612716763006,
"acc_norm_stderr": 0.03533133389323657
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.35294117647058826,
"acc_stderr": 0.04755129616062947,
"acc_norm": 0.35294117647058826,
"acc_norm_stderr": 0.04755129616062947
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.74,
"acc_stderr": 0.04408440022768079,
"acc_norm": 0.74,
"acc_norm_stderr": 0.04408440022768079
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5829787234042553,
"acc_stderr": 0.03223276266711712,
"acc_norm": 0.5829787234042553,
"acc_norm_stderr": 0.03223276266711712
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.4649122807017544,
"acc_stderr": 0.04692008381368909,
"acc_norm": 0.4649122807017544,
"acc_norm_stderr": 0.04692008381368909
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.593103448275862,
"acc_stderr": 0.04093793981266236,
"acc_norm": 0.593103448275862,
"acc_norm_stderr": 0.04093793981266236
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.42592592592592593,
"acc_stderr": 0.02546714904546955,
"acc_norm": 0.42592592592592593,
"acc_norm_stderr": 0.02546714904546955
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.5079365079365079,
"acc_stderr": 0.044715725362943486,
"acc_norm": 0.5079365079365079,
"acc_norm_stderr": 0.044715725362943486
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.28,
"acc_stderr": 0.04512608598542127,
"acc_norm": 0.28,
"acc_norm_stderr": 0.04512608598542127
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7903225806451613,
"acc_stderr": 0.02315787934908353,
"acc_norm": 0.7903225806451613,
"acc_norm_stderr": 0.02315787934908353
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.4975369458128079,
"acc_stderr": 0.03517945038691063,
"acc_norm": 0.4975369458128079,
"acc_norm_stderr": 0.03517945038691063
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7818181818181819,
"acc_stderr": 0.03225078108306289,
"acc_norm": 0.7818181818181819,
"acc_norm_stderr": 0.03225078108306289
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7828282828282829,
"acc_stderr": 0.02937661648494562,
"acc_norm": 0.7828282828282829,
"acc_norm_stderr": 0.02937661648494562
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8911917098445595,
"acc_stderr": 0.022473253332768763,
"acc_norm": 0.8911917098445595,
"acc_norm_stderr": 0.022473253332768763
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6641025641025641,
"acc_stderr": 0.023946724741563973,
"acc_norm": 0.6641025641025641,
"acc_norm_stderr": 0.023946724741563973
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3592592592592593,
"acc_stderr": 0.029252905927251972,
"acc_norm": 0.3592592592592593,
"acc_norm_stderr": 0.029252905927251972
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6848739495798319,
"acc_stderr": 0.030176808288974337,
"acc_norm": 0.6848739495798319,
"acc_norm_stderr": 0.030176808288974337
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3509933774834437,
"acc_stderr": 0.03896981964257375,
"acc_norm": 0.3509933774834437,
"acc_norm_stderr": 0.03896981964257375
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8495412844036697,
"acc_stderr": 0.015328563932669237,
"acc_norm": 0.8495412844036697,
"acc_norm_stderr": 0.015328563932669237
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.49537037037037035,
"acc_stderr": 0.03409825519163572,
"acc_norm": 0.49537037037037035,
"acc_norm_stderr": 0.03409825519163572
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8284313725490197,
"acc_stderr": 0.026460569561240644,
"acc_norm": 0.8284313725490197,
"acc_norm_stderr": 0.026460569561240644
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8185654008438819,
"acc_stderr": 0.025085961144579647,
"acc_norm": 0.8185654008438819,
"acc_norm_stderr": 0.025085961144579647
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.7130044843049327,
"acc_stderr": 0.030360379710291943,
"acc_norm": 0.7130044843049327,
"acc_norm_stderr": 0.030360379710291943
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7709923664122137,
"acc_stderr": 0.036853466317118506,
"acc_norm": 0.7709923664122137,
"acc_norm_stderr": 0.036853466317118506
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7933884297520661,
"acc_stderr": 0.03695980128098824,
"acc_norm": 0.7933884297520661,
"acc_norm_stderr": 0.03695980128098824
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7592592592592593,
"acc_stderr": 0.04133119440243839,
"acc_norm": 0.7592592592592593,
"acc_norm_stderr": 0.04133119440243839
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7730061349693251,
"acc_stderr": 0.03291099578615769,
"acc_norm": 0.7730061349693251,
"acc_norm_stderr": 0.03291099578615769
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.4642857142857143,
"acc_stderr": 0.04733667890053756,
"acc_norm": 0.4642857142857143,
"acc_norm_stderr": 0.04733667890053756
},
"harness|hendrycksTest-management|5": {
"acc": 0.8252427184466019,
"acc_stderr": 0.03760178006026621,
"acc_norm": 0.8252427184466019,
"acc_norm_stderr": 0.03760178006026621
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8931623931623932,
"acc_stderr": 0.02023714900899093,
"acc_norm": 0.8931623931623932,
"acc_norm_stderr": 0.02023714900899093
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.79,
"acc_stderr": 0.04093601807403326,
"acc_norm": 0.79,
"acc_norm_stderr": 0.04093601807403326
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8365261813537676,
"acc_stderr": 0.013223928616741626,
"acc_norm": 0.8365261813537676,
"acc_norm_stderr": 0.013223928616741626
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7601156069364162,
"acc_stderr": 0.022989592543123563,
"acc_norm": 0.7601156069364162,
"acc_norm_stderr": 0.022989592543123563
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.2424581005586592,
"acc_stderr": 0.014333522059217889,
"acc_norm": 0.2424581005586592,
"acc_norm_stderr": 0.014333522059217889
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.761437908496732,
"acc_stderr": 0.02440439492808787,
"acc_norm": 0.761437908496732,
"acc_norm_stderr": 0.02440439492808787
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.707395498392283,
"acc_stderr": 0.02583989833487798,
"acc_norm": 0.707395498392283,
"acc_norm_stderr": 0.02583989833487798
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7623456790123457,
"acc_stderr": 0.023683591837008557,
"acc_norm": 0.7623456790123457,
"acc_norm_stderr": 0.023683591837008557
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.48936170212765956,
"acc_stderr": 0.029820747191422473,
"acc_norm": 0.48936170212765956,
"acc_norm_stderr": 0.029820747191422473
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4869621903520209,
"acc_stderr": 0.012765893883835332,
"acc_norm": 0.4869621903520209,
"acc_norm_stderr": 0.012765893883835332
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.7352941176470589,
"acc_stderr": 0.02679956202488766,
"acc_norm": 0.7352941176470589,
"acc_norm_stderr": 0.02679956202488766
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6748366013071896,
"acc_stderr": 0.01895088677080631,
"acc_norm": 0.6748366013071896,
"acc_norm_stderr": 0.01895088677080631
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6636363636363637,
"acc_stderr": 0.04525393596302506,
"acc_norm": 0.6636363636363637,
"acc_norm_stderr": 0.04525393596302506
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.746938775510204,
"acc_stderr": 0.027833023871399673,
"acc_norm": 0.746938775510204,
"acc_norm_stderr": 0.027833023871399673
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.845771144278607,
"acc_stderr": 0.025538433368578334,
"acc_norm": 0.845771144278607,
"acc_norm_stderr": 0.025538433368578334
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.86,
"acc_stderr": 0.03487350880197768,
"acc_norm": 0.86,
"acc_norm_stderr": 0.03487350880197768
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5240963855421686,
"acc_stderr": 0.03887971849597264,
"acc_norm": 0.5240963855421686,
"acc_norm_stderr": 0.03887971849597264
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8245614035087719,
"acc_stderr": 0.02917088550072767,
"acc_norm": 0.8245614035087719,
"acc_norm_stderr": 0.02917088550072767
},
"harness|truthfulqa:mc|0": {
"mc1": 0.35862913096695226,
"mc1_stderr": 0.016789289499502022,
"mc2": 0.5189602568049447,
"mc2_stderr": 0.015303685990455876
},
"harness|winogrande|5": {
"acc": 0.8176795580110497,
"acc_stderr": 0.010851565594267195
},
"harness|gsm8k|5": {
"acc": 0.6815769522365428,
"acc_stderr": 0.01283222572307541
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | open-llm-leaderboard/details_Jaume__openchat-3.5-0106-mod-gpt5 | [
"region:us"
] | 2024-01-14T20:43:38+00:00 | {"pretty_name": "Evaluation run of Jaume/openchat-3.5-0106-mod-gpt5", "dataset_summary": "Dataset automatically created during the evaluation run of model [Jaume/openchat-3.5-0106-mod-gpt5](https://huggingface.co/Jaume/openchat-3.5-0106-mod-gpt5) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Jaume__openchat-3.5-0106-mod-gpt5\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-14T21:01:35.974498](https://huggingface.co/datasets/open-llm-leaderboard/details_Jaume__openchat-3.5-0106-mod-gpt5/blob/main/results_2024-01-14T21-01-35.974498.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6528578653707416,\n \"acc_stderr\": 0.031849870154313474,\n \"acc_norm\": 0.6535559561419437,\n \"acc_norm_stderr\": 0.03250454817189663,\n \"mc1\": 0.35862913096695226,\n \"mc1_stderr\": 0.016789289499502022,\n \"mc2\": 0.5189602568049447,\n \"mc2_stderr\": 0.015303685990455876\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.621160409556314,\n \"acc_stderr\": 0.014175915490000324,\n \"acc_norm\": 0.6604095563139932,\n \"acc_norm_stderr\": 0.01383903976282017\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6338378809002191,\n \"acc_stderr\": 0.0048076995399734075,\n \"acc_norm\": 0.8293168691495718,\n \"acc_norm_stderr\": 0.0037546293132751625\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.35,\n \"acc_stderr\": 0.0479372485441102,\n \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6444444444444445,\n \"acc_stderr\": 0.04135176749720385,\n \"acc_norm\": 0.6444444444444445,\n \"acc_norm_stderr\": 0.04135176749720385\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6907894736842105,\n \"acc_stderr\": 0.037610708698674805,\n \"acc_norm\": 0.6907894736842105,\n \"acc_norm_stderr\": 0.037610708698674805\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.65,\n \"acc_stderr\": 0.0479372485441102,\n \"acc_norm\": 0.65,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.7094339622641509,\n \"acc_stderr\": 0.02794321998933714,\n \"acc_norm\": 0.7094339622641509,\n \"acc_norm_stderr\": 0.02794321998933714\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7708333333333334,\n \"acc_stderr\": 0.03514697467862388,\n \"acc_norm\": 0.7708333333333334,\n \"acc_norm_stderr\": 0.03514697467862388\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 
0.48,\n \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.48,\n \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.57,\n \"acc_stderr\": 0.049756985195624284,\n \"acc_norm\": 0.57,\n \"acc_norm_stderr\": 0.049756985195624284\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.38,\n \"acc_stderr\": 0.04878317312145634,\n \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.04878317312145634\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6878612716763006,\n \"acc_stderr\": 0.03533133389323657,\n \"acc_norm\": 0.6878612716763006,\n \"acc_norm_stderr\": 0.03533133389323657\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.35294117647058826,\n \"acc_stderr\": 0.04755129616062947,\n \"acc_norm\": 0.35294117647058826,\n \"acc_norm_stderr\": 0.04755129616062947\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.74,\n \"acc_stderr\": 0.04408440022768079,\n \"acc_norm\": 0.74,\n \"acc_norm_stderr\": 0.04408440022768079\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5829787234042553,\n \"acc_stderr\": 0.03223276266711712,\n \"acc_norm\": 0.5829787234042553,\n \"acc_norm_stderr\": 0.03223276266711712\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4649122807017544,\n \"acc_stderr\": 0.04692008381368909,\n \"acc_norm\": 0.4649122807017544,\n \"acc_norm_stderr\": 0.04692008381368909\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.593103448275862,\n \"acc_stderr\": 0.04093793981266236,\n \"acc_norm\": 0.593103448275862,\n \"acc_norm_stderr\": 0.04093793981266236\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.42592592592592593,\n \"acc_stderr\": 0.02546714904546955,\n \"acc_norm\": 0.42592592592592593,\n \"acc_norm_stderr\": 0.02546714904546955\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.5079365079365079,\n \"acc_stderr\": 0.044715725362943486,\n \"acc_norm\": 0.5079365079365079,\n \"acc_norm_stderr\": 0.044715725362943486\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.28,\n \"acc_stderr\": 0.04512608598542127,\n \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.04512608598542127\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7903225806451613,\n \"acc_stderr\": 0.02315787934908353,\n \"acc_norm\": 0.7903225806451613,\n \"acc_norm_stderr\": 0.02315787934908353\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.4975369458128079,\n \"acc_stderr\": 0.03517945038691063,\n \"acc_norm\": 0.4975369458128079,\n \"acc_norm_stderr\": 0.03517945038691063\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7818181818181819,\n \"acc_stderr\": 0.03225078108306289,\n \"acc_norm\": 0.7818181818181819,\n \"acc_norm_stderr\": 0.03225078108306289\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7828282828282829,\n \"acc_stderr\": 0.02937661648494562,\n \"acc_norm\": 0.7828282828282829,\n \"acc_norm_stderr\": 0.02937661648494562\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8911917098445595,\n \"acc_stderr\": 0.022473253332768763,\n \"acc_norm\": 0.8911917098445595,\n \"acc_norm_stderr\": 0.022473253332768763\n },\n 
\"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6641025641025641,\n \"acc_stderr\": 0.023946724741563973,\n \"acc_norm\": 0.6641025641025641,\n \"acc_norm_stderr\": 0.023946724741563973\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.3592592592592593,\n \"acc_stderr\": 0.029252905927251972,\n \"acc_norm\": 0.3592592592592593,\n \"acc_norm_stderr\": 0.029252905927251972\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6848739495798319,\n \"acc_stderr\": 0.030176808288974337,\n \"acc_norm\": 0.6848739495798319,\n \"acc_norm_stderr\": 0.030176808288974337\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3509933774834437,\n \"acc_stderr\": 0.03896981964257375,\n \"acc_norm\": 0.3509933774834437,\n \"acc_norm_stderr\": 0.03896981964257375\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8495412844036697,\n \"acc_stderr\": 0.015328563932669237,\n \"acc_norm\": 0.8495412844036697,\n \"acc_norm_stderr\": 0.015328563932669237\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.49537037037037035,\n \"acc_stderr\": 0.03409825519163572,\n \"acc_norm\": 0.49537037037037035,\n \"acc_norm_stderr\": 0.03409825519163572\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8284313725490197,\n \"acc_stderr\": 0.026460569561240644,\n \"acc_norm\": 0.8284313725490197,\n \"acc_norm_stderr\": 0.026460569561240644\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.8185654008438819,\n \"acc_stderr\": 0.025085961144579647,\n \"acc_norm\": 0.8185654008438819,\n \"acc_norm_stderr\": 0.025085961144579647\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.7130044843049327,\n \"acc_stderr\": 0.030360379710291943,\n \"acc_norm\": 0.7130044843049327,\n \"acc_norm_stderr\": 0.030360379710291943\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7709923664122137,\n \"acc_stderr\": 0.036853466317118506,\n \"acc_norm\": 0.7709923664122137,\n \"acc_norm_stderr\": 0.036853466317118506\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7933884297520661,\n \"acc_stderr\": 0.03695980128098824,\n \"acc_norm\": 0.7933884297520661,\n \"acc_norm_stderr\": 0.03695980128098824\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7592592592592593,\n \"acc_stderr\": 0.04133119440243839,\n \"acc_norm\": 0.7592592592592593,\n \"acc_norm_stderr\": 0.04133119440243839\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7730061349693251,\n \"acc_stderr\": 0.03291099578615769,\n \"acc_norm\": 0.7730061349693251,\n \"acc_norm_stderr\": 0.03291099578615769\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4642857142857143,\n \"acc_stderr\": 0.04733667890053756,\n \"acc_norm\": 0.4642857142857143,\n \"acc_norm_stderr\": 0.04733667890053756\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.8252427184466019,\n \"acc_stderr\": 0.03760178006026621,\n \"acc_norm\": 0.8252427184466019,\n \"acc_norm_stderr\": 0.03760178006026621\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8931623931623932,\n \"acc_stderr\": 0.02023714900899093,\n \"acc_norm\": 0.8931623931623932,\n \"acc_norm_stderr\": 0.02023714900899093\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.79,\n \"acc_stderr\": 0.04093601807403326,\n \"acc_norm\": 0.79,\n \"acc_norm_stderr\": 0.04093601807403326\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 
0.8365261813537676,\n \"acc_stderr\": 0.013223928616741626,\n \"acc_norm\": 0.8365261813537676,\n \"acc_norm_stderr\": 0.013223928616741626\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7601156069364162,\n \"acc_stderr\": 0.022989592543123563,\n \"acc_norm\": 0.7601156069364162,\n \"acc_norm_stderr\": 0.022989592543123563\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.2424581005586592,\n \"acc_stderr\": 0.014333522059217889,\n \"acc_norm\": 0.2424581005586592,\n \"acc_norm_stderr\": 0.014333522059217889\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.761437908496732,\n \"acc_stderr\": 0.02440439492808787,\n \"acc_norm\": 0.761437908496732,\n \"acc_norm_stderr\": 0.02440439492808787\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.707395498392283,\n \"acc_stderr\": 0.02583989833487798,\n \"acc_norm\": 0.707395498392283,\n \"acc_norm_stderr\": 0.02583989833487798\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7623456790123457,\n \"acc_stderr\": 0.023683591837008557,\n \"acc_norm\": 0.7623456790123457,\n \"acc_norm_stderr\": 0.023683591837008557\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.48936170212765956,\n \"acc_stderr\": 0.029820747191422473,\n \"acc_norm\": 0.48936170212765956,\n \"acc_norm_stderr\": 0.029820747191422473\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4869621903520209,\n \"acc_stderr\": 0.012765893883835332,\n \"acc_norm\": 0.4869621903520209,\n \"acc_norm_stderr\": 0.012765893883835332\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.7352941176470589,\n \"acc_stderr\": 0.02679956202488766,\n \"acc_norm\": 0.7352941176470589,\n \"acc_norm_stderr\": 0.02679956202488766\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6748366013071896,\n \"acc_stderr\": 0.01895088677080631,\n \"acc_norm\": 0.6748366013071896,\n \"acc_norm_stderr\": 0.01895088677080631\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6636363636363637,\n \"acc_stderr\": 0.04525393596302506,\n \"acc_norm\": 0.6636363636363637,\n \"acc_norm_stderr\": 0.04525393596302506\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.746938775510204,\n \"acc_stderr\": 0.027833023871399673,\n \"acc_norm\": 0.746938775510204,\n \"acc_norm_stderr\": 0.027833023871399673\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.845771144278607,\n \"acc_stderr\": 0.025538433368578334,\n \"acc_norm\": 0.845771144278607,\n \"acc_norm_stderr\": 0.025538433368578334\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.86,\n \"acc_stderr\": 0.03487350880197768,\n \"acc_norm\": 0.86,\n \"acc_norm_stderr\": 0.03487350880197768\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5240963855421686,\n \"acc_stderr\": 0.03887971849597264,\n \"acc_norm\": 0.5240963855421686,\n \"acc_norm_stderr\": 0.03887971849597264\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8245614035087719,\n \"acc_stderr\": 0.02917088550072767,\n \"acc_norm\": 0.8245614035087719,\n \"acc_norm_stderr\": 0.02917088550072767\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.35862913096695226,\n \"mc1_stderr\": 0.016789289499502022,\n \"mc2\": 0.5189602568049447,\n \"mc2_stderr\": 0.015303685990455876\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8176795580110497,\n \"acc_stderr\": 0.010851565594267195\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6815769522365428,\n \"acc_stderr\": 0.01283222572307541\n 
}\n}\n```", "repo_url": "https://huggingface.co/Jaume/openchat-3.5-0106-mod-gpt5", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_14T20_41_18.617914", "path": ["**/details_harness|arc:challenge|25_2024-01-14T20-41-18.617914.parquet"]}, {"split": "2024_01_14T21_01_35.974498", "path": ["**/details_harness|arc:challenge|25_2024-01-14T21-01-35.974498.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-14T21-01-35.974498.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_14T20_41_18.617914", "path": ["**/details_harness|gsm8k|5_2024-01-14T20-41-18.617914.parquet"]}, {"split": "2024_01_14T21_01_35.974498", "path": ["**/details_harness|gsm8k|5_2024-01-14T21-01-35.974498.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-14T21-01-35.974498.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_14T20_41_18.617914", "path": ["**/details_harness|hellaswag|10_2024-01-14T20-41-18.617914.parquet"]}, {"split": "2024_01_14T21_01_35.974498", "path": ["**/details_harness|hellaswag|10_2024-01-14T21-01-35.974498.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-14T21-01-35.974498.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_14T20_41_18.617914", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T20-41-18.617914.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-14T20-41-18.617914.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-14T20-41-18.617914.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T20-41-18.617914.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T20-41-18.617914.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-14T20-41-18.617914.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T20-41-18.617914.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T20-41-18.617914.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T20-41-18.617914.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T20-41-18.617914.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-14T20-41-18.617914.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-14T20-41-18.617914.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T20-41-18.617914.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-14T20-41-18.617914.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T20-41-18.617914.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T20-41-18.617914.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T20-41-18.617914.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-14T20-41-18.617914.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T20-41-18.617914.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T20-41-18.617914.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T20-41-18.617914.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T20-41-18.617914.parquet", 
"**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T20-41-18.617914.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T20-41-18.617914.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T20-41-18.617914.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T20-41-18.617914.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T20-41-18.617914.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T20-41-18.617914.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T20-41-18.617914.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T20-41-18.617914.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T20-41-18.617914.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T20-41-18.617914.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-14T20-41-18.617914.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T20-41-18.617914.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-14T20-41-18.617914.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T20-41-18.617914.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T20-41-18.617914.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T20-41-18.617914.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-14T20-41-18.617914.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-14T20-41-18.617914.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T20-41-18.617914.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T20-41-18.617914.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T20-41-18.617914.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T20-41-18.617914.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-14T20-41-18.617914.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-14T20-41-18.617914.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-14T20-41-18.617914.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T20-41-18.617914.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-14T20-41-18.617914.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T20-41-18.617914.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T20-41-18.617914.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-14T20-41-18.617914.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-14T20-41-18.617914.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-14T20-41-18.617914.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T20-41-18.617914.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-14T20-41-18.617914.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-14T20-41-18.617914.parquet"]}, {"split": "2024_01_14T21_01_35.974498", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-14T21-01-35.974498.parquet", 
"**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T21-01-35.974498.parquet", 
"**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-14T21-01-35.974498.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T21-01-35.974498.parquet", 
"**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-14T21-01-35.974498.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-14T21-01-35.974498.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_14T20_41_18.617914", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T20-41-18.617914.parquet"]}, {"split": "2024_01_14T21_01_35.974498", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T21-01-35.974498.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T21-01-35.974498.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_14T20_41_18.617914", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-14T20-41-18.617914.parquet"]}, {"split": "2024_01_14T21_01_35.974498", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-14T21-01-35.974498.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-14T21-01-35.974498.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_14T20_41_18.617914", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-14T20-41-18.617914.parquet"]}, {"split": "2024_01_14T21_01_35.974498", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-14T21-01-35.974498.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-14T21-01-35.974498.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_14T20_41_18.617914", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T20-41-18.617914.parquet"]}, {"split": "2024_01_14T21_01_35.974498", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T21-01-35.974498.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T21-01-35.974498.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_14T20_41_18.617914", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T20-41-18.617914.parquet"]}, {"split": "2024_01_14T21_01_35.974498", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T21-01-35.974498.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T21-01-35.974498.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_14T20_41_18.617914", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-14T20-41-18.617914.parquet"]}, {"split": "2024_01_14T21_01_35.974498", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-14T21-01-35.974498.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-14T21-01-35.974498.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_14T20_41_18.617914", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T20-41-18.617914.parquet"]}, {"split": "2024_01_14T21_01_35.974498", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T21-01-35.974498.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T21-01-35.974498.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_14T20_41_18.617914", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T20-41-18.617914.parquet"]}, {"split": "2024_01_14T21_01_35.974498", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T21-01-35.974498.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T21-01-35.974498.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_14T20_41_18.617914", "path": 
["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T20-41-18.617914.parquet"]}, {"split": "2024_01_14T21_01_35.974498", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T21-01-35.974498.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T21-01-35.974498.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_14T20_41_18.617914", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T20-41-18.617914.parquet"]}, {"split": "2024_01_14T21_01_35.974498", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T21-01-35.974498.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T21-01-35.974498.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_14T20_41_18.617914", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-14T20-41-18.617914.parquet"]}, {"split": "2024_01_14T21_01_35.974498", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-14T21-01-35.974498.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-14T21-01-35.974498.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_14T20_41_18.617914", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-14T20-41-18.617914.parquet"]}, {"split": "2024_01_14T21_01_35.974498", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-14T21-01-35.974498.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-14T21-01-35.974498.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_14T20_41_18.617914", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T20-41-18.617914.parquet"]}, {"split": "2024_01_14T21_01_35.974498", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T21-01-35.974498.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T21-01-35.974498.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_14T20_41_18.617914", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-14T20-41-18.617914.parquet"]}, {"split": "2024_01_14T21_01_35.974498", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-14T21-01-35.974498.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-14T21-01-35.974498.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_14T20_41_18.617914", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T20-41-18.617914.parquet"]}, {"split": "2024_01_14T21_01_35.974498", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T21-01-35.974498.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T21-01-35.974498.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_14T20_41_18.617914", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T20-41-18.617914.parquet"]}, {"split": "2024_01_14T21_01_35.974498", "path": 
["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T21-01-35.974498.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T21-01-35.974498.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_01_14T20_41_18.617914", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T20-41-18.617914.parquet"]}, {"split": "2024_01_14T21_01_35.974498", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T21-01-35.974498.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T21-01-35.974498.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_14T20_41_18.617914", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-14T20-41-18.617914.parquet"]}, {"split": "2024_01_14T21_01_35.974498", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-14T21-01-35.974498.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-14T21-01-35.974498.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_14T20_41_18.617914", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T20-41-18.617914.parquet"]}, {"split": "2024_01_14T21_01_35.974498", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T21-01-35.974498.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T21-01-35.974498.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_14T20_41_18.617914", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T20-41-18.617914.parquet"]}, {"split": "2024_01_14T21_01_35.974498", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T21-01-35.974498.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T21-01-35.974498.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_14T20_41_18.617914", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T20-41-18.617914.parquet"]}, {"split": "2024_01_14T21_01_35.974498", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T21-01-35.974498.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T21-01-35.974498.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_14T20_41_18.617914", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T20-41-18.617914.parquet"]}, {"split": "2024_01_14T21_01_35.974498", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T21-01-35.974498.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T21-01-35.974498.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_14T20_41_18.617914", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T20-41-18.617914.parquet"]}, {"split": "2024_01_14T21_01_35.974498", "path": 
["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T21-01-35.974498.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T21-01-35.974498.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_14T20_41_18.617914", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T20-41-18.617914.parquet"]}, {"split": "2024_01_14T21_01_35.974498", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T21-01-35.974498.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T21-01-35.974498.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_14T20_41_18.617914", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T20-41-18.617914.parquet"]}, {"split": "2024_01_14T21_01_35.974498", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T21-01-35.974498.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T21-01-35.974498.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_14T20_41_18.617914", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T20-41-18.617914.parquet"]}, {"split": "2024_01_14T21_01_35.974498", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T21-01-35.974498.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T21-01-35.974498.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_14T20_41_18.617914", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T20-41-18.617914.parquet"]}, {"split": "2024_01_14T21_01_35.974498", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T21-01-35.974498.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T21-01-35.974498.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_14T20_41_18.617914", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T20-41-18.617914.parquet"]}, {"split": "2024_01_14T21_01_35.974498", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T21-01-35.974498.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T21-01-35.974498.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_14T20_41_18.617914", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T20-41-18.617914.parquet"]}, {"split": "2024_01_14T21_01_35.974498", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T21-01-35.974498.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T21-01-35.974498.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_14T20_41_18.617914", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T20-41-18.617914.parquet"]}, 
{"split": "2024_01_14T21_01_35.974498", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T21-01-35.974498.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T21-01-35.974498.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_14T20_41_18.617914", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T20-41-18.617914.parquet"]}, {"split": "2024_01_14T21_01_35.974498", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T21-01-35.974498.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T21-01-35.974498.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_14T20_41_18.617914", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T20-41-18.617914.parquet"]}, {"split": "2024_01_14T21_01_35.974498", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T21-01-35.974498.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T21-01-35.974498.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_14T20_41_18.617914", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-14T20-41-18.617914.parquet"]}, {"split": "2024_01_14T21_01_35.974498", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-14T21-01-35.974498.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-14T21-01-35.974498.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_14T20_41_18.617914", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T20-41-18.617914.parquet"]}, {"split": "2024_01_14T21_01_35.974498", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T21-01-35.974498.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T21-01-35.974498.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_14T20_41_18.617914", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-14T20-41-18.617914.parquet"]}, {"split": "2024_01_14T21_01_35.974498", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-14T21-01-35.974498.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-14T21-01-35.974498.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_14T20_41_18.617914", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T20-41-18.617914.parquet"]}, {"split": "2024_01_14T21_01_35.974498", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T21-01-35.974498.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T21-01-35.974498.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_14T20_41_18.617914", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T20-41-18.617914.parquet"]}, {"split": "2024_01_14T21_01_35.974498", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T21-01-35.974498.parquet"]}, {"split": 
"latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T21-01-35.974498.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_14T20_41_18.617914", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T20-41-18.617914.parquet"]}, {"split": "2024_01_14T21_01_35.974498", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T21-01-35.974498.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T21-01-35.974498.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_01_14T20_41_18.617914", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-14T20-41-18.617914.parquet"]}, {"split": "2024_01_14T21_01_35.974498", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-14T21-01-35.974498.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-14T21-01-35.974498.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_14T20_41_18.617914", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-14T20-41-18.617914.parquet"]}, {"split": "2024_01_14T21_01_35.974498", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-14T21-01-35.974498.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-14T21-01-35.974498.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_14T20_41_18.617914", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T20-41-18.617914.parquet"]}, {"split": "2024_01_14T21_01_35.974498", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T21-01-35.974498.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T21-01-35.974498.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_14T20_41_18.617914", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T20-41-18.617914.parquet"]}, {"split": "2024_01_14T21_01_35.974498", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T21-01-35.974498.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T21-01-35.974498.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_14T20_41_18.617914", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T20-41-18.617914.parquet"]}, {"split": "2024_01_14T21_01_35.974498", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T21-01-35.974498.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T21-01-35.974498.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_14T20_41_18.617914", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T20-41-18.617914.parquet"]}, {"split": "2024_01_14T21_01_35.974498", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T21-01-35.974498.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T21-01-35.974498.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_01_14T20_41_18.617914", "path": 
["**/details_harness|hendrycksTest-nutrition|5_2024-01-14T20-41-18.617914.parquet"]}, {"split": "2024_01_14T21_01_35.974498", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-14T21-01-35.974498.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-14T21-01-35.974498.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_14T20_41_18.617914", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-14T20-41-18.617914.parquet"]}, {"split": "2024_01_14T21_01_35.974498", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-14T21-01-35.974498.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-14T21-01-35.974498.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_14T20_41_18.617914", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-14T20-41-18.617914.parquet"]}, {"split": "2024_01_14T21_01_35.974498", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-14T21-01-35.974498.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-14T21-01-35.974498.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_14T20_41_18.617914", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T20-41-18.617914.parquet"]}, {"split": "2024_01_14T21_01_35.974498", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T21-01-35.974498.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T21-01-35.974498.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_14T20_41_18.617914", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-14T20-41-18.617914.parquet"]}, {"split": "2024_01_14T21_01_35.974498", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-14T21-01-35.974498.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-14T21-01-35.974498.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_14T20_41_18.617914", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T20-41-18.617914.parquet"]}, {"split": "2024_01_14T21_01_35.974498", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T21-01-35.974498.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T21-01-35.974498.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_14T20_41_18.617914", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T20-41-18.617914.parquet"]}, {"split": "2024_01_14T21_01_35.974498", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T21-01-35.974498.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T21-01-35.974498.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_14T20_41_18.617914", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-14T20-41-18.617914.parquet"]}, {"split": "2024_01_14T21_01_35.974498", "path": 
["**/details_harness|hendrycksTest-public_relations|5_2024-01-14T21-01-35.974498.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-14T21-01-35.974498.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_14T20_41_18.617914", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-14T20-41-18.617914.parquet"]}, {"split": "2024_01_14T21_01_35.974498", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-14T21-01-35.974498.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-14T21-01-35.974498.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_14T20_41_18.617914", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-14T20-41-18.617914.parquet"]}, {"split": "2024_01_14T21_01_35.974498", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-14T21-01-35.974498.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-14T21-01-35.974498.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_14T20_41_18.617914", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T20-41-18.617914.parquet"]}, {"split": "2024_01_14T21_01_35.974498", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T21-01-35.974498.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T21-01-35.974498.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_14T20_41_18.617914", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-14T20-41-18.617914.parquet"]}, {"split": "2024_01_14T21_01_35.974498", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-14T21-01-35.974498.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-14T21-01-35.974498.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_14T20_41_18.617914", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-14T20-41-18.617914.parquet"]}, {"split": "2024_01_14T21_01_35.974498", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-14T21-01-35.974498.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-14T21-01-35.974498.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_14T20_41_18.617914", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-14T20-41-18.617914.parquet"]}, {"split": "2024_01_14T21_01_35.974498", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-14T21-01-35.974498.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-14T21-01-35.974498.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_14T20_41_18.617914", "path": ["**/details_harness|winogrande|5_2024-01-14T20-41-18.617914.parquet"]}, {"split": "2024_01_14T21_01_35.974498", "path": ["**/details_harness|winogrande|5_2024-01-14T21-01-35.974498.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-14T21-01-35.974498.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_01_14T20_41_18.617914", "path": ["results_2024-01-14T20-41-18.617914.parquet"]}, {"split": "2024_01_14T21_01_35.974498", "path": 
["results_2024-01-14T21-01-35.974498.parquet"]}, {"split": "latest", "path": ["results_2024-01-14T21-01-35.974498.parquet"]}]}]} | 2024-01-14T21:04:15+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of Jaume/openchat-3.5-0106-mod-gpt5
Dataset automatically created during the evaluation run of model Jaume/openchat-3.5-0106-mod-gpt5 on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
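```python
from datasets import load_dataset

# Load the per-sample details for one evaluated task (here: 5-shot Winogrande)
data = load_dataset("open-llm-leaderboard/details_Jaume__openchat-3.5-0106-mod-gpt5",
    "harness_winogrande_5",
    split="train")
```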
## Latest results
These are the latest results from run 2024-01-14T21:01:35.974498 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
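For quick reference, the aggregated "all" block of that run reads as follows (per-task entries follow the same structure):

```python
{
    "all": {
        "acc": 0.6528578653707416,
        "acc_stderr": 0.031849870154313474,
        "acc_norm": 0.6535559561419437,
        "acc_norm_stderr": 0.03250454817189663,
        "mc1": 0.35862913096695226,
        "mc1_stderr": 0.016789289499502022,
        "mc2": 0.5189602568049447,
        "mc2_stderr": 0.015303685990455876
    }
}
```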
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Evaluation run of Jaume/openchat-3.5-0106-mod-gpt5\n\n\n\nDataset automatically created during the evaluation run of model Jaume/openchat-3.5-0106-mod-gpt5 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-01-14T21:01:35.974498(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of Jaume/openchat-3.5-0106-mod-gpt5\n\n\n\nDataset automatically created during the evaluation run of model Jaume/openchat-3.5-0106-mod-gpt5 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-01-14T21:01:35.974498(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] |
8515cfcc43a1e5242d97d0b006fa9d0a5f61ddd2 | # Dataset Card for "autotrain-data-autotrain-jose-antorcha-22"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | pedromigurasdev/autotrain-data-autotrain-jose-antorcha-22 | [
"region:us"
] | 2024-01-14T21:14:16+00:00 | {"dataset_info": {"features": [{"name": "autotrain_text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 555000, "num_examples": 840}, {"name": "validation", "num_bytes": 555000, "num_examples": 840}], "download_size": 84992, "dataset_size": 1110000}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}]} | 2024-01-14T21:14:25+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "autotrain-data-autotrain-jose-antorcha-22"
More Information needed | [
"# Dataset Card for \"autotrain-data-autotrain-jose-antorcha-22\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"autotrain-data-autotrain-jose-antorcha-22\"\n\nMore Information needed"
] |
051d37a3a27c981bb3a18ae37fe0ebcc20e8c48d |
# Dataset Card for Evaluation run of NovoCode/Novocode7b
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [NovoCode/Novocode7b](https://huggingface.co/NovoCode/Novocode7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_NovoCode__Novocode7b",
"harness_winogrande_5",
split="train")
```
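Each of the 63 task-specific configurations can also be discovered programmatically, and the aggregated per-run metrics live in the `"results"` configuration described above. A minimal sketch, assuming only the standard `datasets` API (`get_dataset_config_names` and `load_dataset`; the `"results"` config name and the "train"-points-to-latest convention come from this card):

```python
from datasets import get_dataset_config_names, load_dataset

# List every available configuration (one per evaluated task, plus "results").
configs = get_dataset_config_names("open-llm-leaderboard/details_NovoCode__Novocode7b")
print(len(configs), configs[:5])

# The "results" configuration stores the aggregated metrics; per the card,
# the "train" split always points to the latest run.
results = load_dataset("open-llm-leaderboard/details_NovoCode__Novocode7b",
                       "results",
                       split="train")
```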
## Latest results
These are the [latest results from run 2024-01-23T01:09:59.087164](https://huggingface.co/datasets/open-llm-leaderboard/details_NovoCode__Novocode7b/blob/main/results_2024-01-23T01-09-59.087164.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.5637380070206868,
"acc_stderr": 0.03397699301826096,
"acc_norm": 0.5694898071045811,
"acc_norm_stderr": 0.03471749621521052,
"mc1": 0.4663402692778458,
"mc1_stderr": 0.017463793867168106,
"mc2": 0.6276801807189292,
"mc2_stderr": 0.015415755094430335
},
"harness|arc:challenge|25": {
"acc": 0.5477815699658704,
"acc_stderr": 0.01454451988063383,
"acc_norm": 0.5878839590443686,
"acc_norm_stderr": 0.014383915302225403
},
"harness|hellaswag|10": {
"acc": 0.6214897430790679,
"acc_stderr": 0.004840244782805302,
"acc_norm": 0.8051185022903804,
"acc_norm_stderr": 0.003952999181084448
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.4962962962962963,
"acc_stderr": 0.04319223625811331,
"acc_norm": 0.4962962962962963,
"acc_norm_stderr": 0.04319223625811331
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.5526315789473685,
"acc_stderr": 0.04046336883978251,
"acc_norm": 0.5526315789473685,
"acc_norm_stderr": 0.04046336883978251
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.53,
"acc_stderr": 0.05016135580465919,
"acc_norm": 0.53,
"acc_norm_stderr": 0.05016135580465919
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6339622641509434,
"acc_stderr": 0.02964781353936525,
"acc_norm": 0.6339622641509434,
"acc_norm_stderr": 0.02964781353936525
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.5972222222222222,
"acc_stderr": 0.04101405519842426,
"acc_norm": 0.5972222222222222,
"acc_norm_stderr": 0.04101405519842426
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.39,
"acc_stderr": 0.04902071300001975,
"acc_norm": 0.39,
"acc_norm_stderr": 0.04902071300001975
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.49,
"acc_stderr": 0.05024183937956913,
"acc_norm": 0.49,
"acc_norm_stderr": 0.05024183937956913
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.43,
"acc_stderr": 0.04975698519562428,
"acc_norm": 0.43,
"acc_norm_stderr": 0.04975698519562428
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.5433526011560693,
"acc_stderr": 0.03798106566014498,
"acc_norm": 0.5433526011560693,
"acc_norm_stderr": 0.03798106566014498
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.43137254901960786,
"acc_stderr": 0.04928099597287534,
"acc_norm": 0.43137254901960786,
"acc_norm_stderr": 0.04928099597287534
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.65,
"acc_stderr": 0.0479372485441102,
"acc_norm": 0.65,
"acc_norm_stderr": 0.0479372485441102
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5063829787234042,
"acc_stderr": 0.032683358999363366,
"acc_norm": 0.5063829787234042,
"acc_norm_stderr": 0.032683358999363366
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.41228070175438597,
"acc_stderr": 0.04630653203366595,
"acc_norm": 0.41228070175438597,
"acc_norm_stderr": 0.04630653203366595
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5379310344827586,
"acc_stderr": 0.04154659671707548,
"acc_norm": 0.5379310344827586,
"acc_norm_stderr": 0.04154659671707548
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.32275132275132273,
"acc_stderr": 0.024078943243597016,
"acc_norm": 0.32275132275132273,
"acc_norm_stderr": 0.024078943243597016
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.3888888888888889,
"acc_stderr": 0.04360314860077459,
"acc_norm": 0.3888888888888889,
"acc_norm_stderr": 0.04360314860077459
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.32,
"acc_stderr": 0.046882617226215034,
"acc_norm": 0.32,
"acc_norm_stderr": 0.046882617226215034
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.6451612903225806,
"acc_stderr": 0.027218889773308753,
"acc_norm": 0.6451612903225806,
"acc_norm_stderr": 0.027218889773308753
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.42857142857142855,
"acc_stderr": 0.03481904844438803,
"acc_norm": 0.42857142857142855,
"acc_norm_stderr": 0.03481904844438803
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.56,
"acc_stderr": 0.04988876515698589,
"acc_norm": 0.56,
"acc_norm_stderr": 0.04988876515698589
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.6303030303030303,
"acc_stderr": 0.03769430314512567,
"acc_norm": 0.6303030303030303,
"acc_norm_stderr": 0.03769430314512567
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7272727272727273,
"acc_stderr": 0.03173071239071724,
"acc_norm": 0.7272727272727273,
"acc_norm_stderr": 0.03173071239071724
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.7875647668393783,
"acc_stderr": 0.02951928261681723,
"acc_norm": 0.7875647668393783,
"acc_norm_stderr": 0.02951928261681723
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.541025641025641,
"acc_stderr": 0.025265525491284295,
"acc_norm": 0.541025641025641,
"acc_norm_stderr": 0.025265525491284295
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3296296296296296,
"acc_stderr": 0.028661201116524575,
"acc_norm": 0.3296296296296296,
"acc_norm_stderr": 0.028661201116524575
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.5462184873949579,
"acc_stderr": 0.03233943468182088,
"acc_norm": 0.5462184873949579,
"acc_norm_stderr": 0.03233943468182088
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3443708609271523,
"acc_stderr": 0.038796870240733264,
"acc_norm": 0.3443708609271523,
"acc_norm_stderr": 0.038796870240733264
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.7504587155963303,
"acc_stderr": 0.018553897629501628,
"acc_norm": 0.7504587155963303,
"acc_norm_stderr": 0.018553897629501628
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.4166666666666667,
"acc_stderr": 0.03362277436608044,
"acc_norm": 0.4166666666666667,
"acc_norm_stderr": 0.03362277436608044
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7156862745098039,
"acc_stderr": 0.03166009679399814,
"acc_norm": 0.7156862745098039,
"acc_norm_stderr": 0.03166009679399814
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.679324894514768,
"acc_stderr": 0.030381931949990403,
"acc_norm": 0.679324894514768,
"acc_norm_stderr": 0.030381931949990403
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6502242152466368,
"acc_stderr": 0.03200736719484503,
"acc_norm": 0.6502242152466368,
"acc_norm_stderr": 0.03200736719484503
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.6412213740458015,
"acc_stderr": 0.04206739313864908,
"acc_norm": 0.6412213740458015,
"acc_norm_stderr": 0.04206739313864908
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7024793388429752,
"acc_stderr": 0.04173349148083499,
"acc_norm": 0.7024793388429752,
"acc_norm_stderr": 0.04173349148083499
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.6574074074074074,
"acc_stderr": 0.0458790474130181,
"acc_norm": 0.6574074074074074,
"acc_norm_stderr": 0.0458790474130181
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7116564417177914,
"acc_stderr": 0.03559039531617342,
"acc_norm": 0.7116564417177914,
"acc_norm_stderr": 0.03559039531617342
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5,
"acc_stderr": 0.04745789978762494,
"acc_norm": 0.5,
"acc_norm_stderr": 0.04745789978762494
},
"harness|hendrycksTest-management|5": {
"acc": 0.7087378640776699,
"acc_stderr": 0.044986763205729224,
"acc_norm": 0.7087378640776699,
"acc_norm_stderr": 0.044986763205729224
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8376068376068376,
"acc_stderr": 0.02416161812798774,
"acc_norm": 0.8376068376068376,
"acc_norm_stderr": 0.02416161812798774
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.65,
"acc_stderr": 0.0479372485441102,
"acc_norm": 0.65,
"acc_norm_stderr": 0.0479372485441102
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.7509578544061303,
"acc_stderr": 0.015464676163395965,
"acc_norm": 0.7509578544061303,
"acc_norm_stderr": 0.015464676163395965
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.6098265895953757,
"acc_stderr": 0.026261677607806642,
"acc_norm": 0.6098265895953757,
"acc_norm_stderr": 0.026261677607806642
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.37988826815642457,
"acc_stderr": 0.016232826818678513,
"acc_norm": 0.37988826815642457,
"acc_norm_stderr": 0.016232826818678513
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.6045751633986928,
"acc_stderr": 0.027996723180631462,
"acc_norm": 0.6045751633986928,
"acc_norm_stderr": 0.027996723180631462
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6334405144694534,
"acc_stderr": 0.02736807824397165,
"acc_norm": 0.6334405144694534,
"acc_norm_stderr": 0.02736807824397165
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.5925925925925926,
"acc_stderr": 0.027339546640662737,
"acc_norm": 0.5925925925925926,
"acc_norm_stderr": 0.027339546640662737
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4078014184397163,
"acc_stderr": 0.02931601177634356,
"acc_norm": 0.4078014184397163,
"acc_norm_stderr": 0.02931601177634356
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.3859191655801825,
"acc_stderr": 0.012433398911476143,
"acc_norm": 0.3859191655801825,
"acc_norm_stderr": 0.012433398911476143
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.5367647058823529,
"acc_stderr": 0.03029061918048569,
"acc_norm": 0.5367647058823529,
"acc_norm_stderr": 0.03029061918048569
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.5245098039215687,
"acc_stderr": 0.02020351728026144,
"acc_norm": 0.5245098039215687,
"acc_norm_stderr": 0.02020351728026144
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6181818181818182,
"acc_stderr": 0.046534298079135075,
"acc_norm": 0.6181818181818182,
"acc_norm_stderr": 0.046534298079135075
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.5877551020408164,
"acc_stderr": 0.03151236044674268,
"acc_norm": 0.5877551020408164,
"acc_norm_stderr": 0.03151236044674268
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8308457711442786,
"acc_stderr": 0.02650859065623326,
"acc_norm": 0.8308457711442786,
"acc_norm_stderr": 0.02650859065623326
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.79,
"acc_stderr": 0.040936018074033256,
"acc_norm": 0.79,
"acc_norm_stderr": 0.040936018074033256
},
"harness|hendrycksTest-virology|5": {
"acc": 0.463855421686747,
"acc_stderr": 0.03882310850890593,
"acc_norm": 0.463855421686747,
"acc_norm_stderr": 0.03882310850890593
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.7953216374269005,
"acc_stderr": 0.030944459778533193,
"acc_norm": 0.7953216374269005,
"acc_norm_stderr": 0.030944459778533193
},
"harness|truthfulqa:mc|0": {
"mc1": 0.4663402692778458,
"mc1_stderr": 0.017463793867168106,
"mc2": 0.6276801807189292,
"mc2_stderr": 0.015415755094430335
},
"harness|winogrande|5": {
"acc": 0.7813733228097869,
"acc_stderr": 0.011616198215773218
},
"harness|gsm8k|5": {
"acc": 0.2304776345716452,
"acc_stderr": 0.011600249020595822
}
}
```
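For offline analysis, the same figures can be recomputed from the per-run JSON shown above. A minimal sketch, assuming the file has been downloaded locally as `results.json` (a hypothetical filename) and has the flat key structure displayed here, where the `harness|hendrycksTest-*` prefix marks the 57 MMLU subtasks:

```python
import json

# Load the per-run results file (the local filename is an assumption).
with open("results.json") as f:
    results = json.load(f)

# Average accuracy across the MMLU ("hendrycksTest") subtasks.
mmlu = [v["acc"] for k, v in results.items() if k.startswith("harness|hendrycksTest-")]
print(f"MMLU subtasks: {len(mmlu)}, mean acc: {sum(mmlu) / len(mmlu):.4f}")
```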
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | open-llm-leaderboard/details_NovoCode__Novocode7b | [
"region:us"
] | 2024-01-14T21:22:48+00:00 | {"pretty_name": "Evaluation run of NovoCode/Novocode7b", "dataset_summary": "Dataset automatically created during the evaluation run of model [NovoCode/Novocode7b](https://huggingface.co/NovoCode/Novocode7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_NovoCode__Novocode7b\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-23T01:09:59.087164](https://huggingface.co/datasets/open-llm-leaderboard/details_NovoCode__Novocode7b/blob/main/results_2024-01-23T01-09-59.087164.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5637380070206868,\n \"acc_stderr\": 0.03397699301826096,\n \"acc_norm\": 0.5694898071045811,\n \"acc_norm_stderr\": 0.03471749621521052,\n \"mc1\": 0.4663402692778458,\n \"mc1_stderr\": 0.017463793867168106,\n \"mc2\": 0.6276801807189292,\n \"mc2_stderr\": 0.015415755094430335\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.5477815699658704,\n \"acc_stderr\": 0.01454451988063383,\n \"acc_norm\": 0.5878839590443686,\n \"acc_norm_stderr\": 0.014383915302225403\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6214897430790679,\n \"acc_stderr\": 0.004840244782805302,\n \"acc_norm\": 0.8051185022903804,\n \"acc_norm_stderr\": 0.003952999181084448\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.4962962962962963,\n \"acc_stderr\": 0.04319223625811331,\n \"acc_norm\": 0.4962962962962963,\n \"acc_norm_stderr\": 0.04319223625811331\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.5526315789473685,\n \"acc_stderr\": 0.04046336883978251,\n \"acc_norm\": 0.5526315789473685,\n \"acc_norm_stderr\": 0.04046336883978251\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.53,\n \"acc_stderr\": 0.05016135580465919,\n \"acc_norm\": 0.53,\n \"acc_norm_stderr\": 0.05016135580465919\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6339622641509434,\n \"acc_stderr\": 0.02964781353936525,\n \"acc_norm\": 0.6339622641509434,\n \"acc_norm_stderr\": 0.02964781353936525\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.5972222222222222,\n \"acc_stderr\": 0.04101405519842426,\n \"acc_norm\": 0.5972222222222222,\n \"acc_norm_stderr\": 0.04101405519842426\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.39,\n \"acc_stderr\": 0.04902071300001975,\n \"acc_norm\": 0.39,\n 
\"acc_norm_stderr\": 0.04902071300001975\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.49,\n \"acc_stderr\": 0.05024183937956913,\n \"acc_norm\": 0.49,\n \"acc_norm_stderr\": 0.05024183937956913\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.43,\n \"acc_stderr\": 0.04975698519562428,\n \"acc_norm\": 0.43,\n \"acc_norm_stderr\": 0.04975698519562428\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.5433526011560693,\n \"acc_stderr\": 0.03798106566014498,\n \"acc_norm\": 0.5433526011560693,\n \"acc_norm_stderr\": 0.03798106566014498\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.43137254901960786,\n \"acc_stderr\": 0.04928099597287534,\n \"acc_norm\": 0.43137254901960786,\n \"acc_norm_stderr\": 0.04928099597287534\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.65,\n \"acc_stderr\": 0.0479372485441102,\n \"acc_norm\": 0.65,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5063829787234042,\n \"acc_stderr\": 0.032683358999363366,\n \"acc_norm\": 0.5063829787234042,\n \"acc_norm_stderr\": 0.032683358999363366\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.41228070175438597,\n \"acc_stderr\": 0.04630653203366595,\n \"acc_norm\": 0.41228070175438597,\n \"acc_norm_stderr\": 0.04630653203366595\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5379310344827586,\n \"acc_stderr\": 0.04154659671707548,\n \"acc_norm\": 0.5379310344827586,\n \"acc_norm_stderr\": 0.04154659671707548\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.32275132275132273,\n \"acc_stderr\": 0.024078943243597016,\n \"acc_norm\": 0.32275132275132273,\n \"acc_norm_stderr\": 0.024078943243597016\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.3888888888888889,\n \"acc_stderr\": 0.04360314860077459,\n \"acc_norm\": 0.3888888888888889,\n \"acc_norm_stderr\": 0.04360314860077459\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.32,\n \"acc_stderr\": 0.046882617226215034,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.046882617226215034\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.6451612903225806,\n \"acc_stderr\": 0.027218889773308753,\n \"acc_norm\": 0.6451612903225806,\n \"acc_norm_stderr\": 0.027218889773308753\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.42857142857142855,\n \"acc_stderr\": 0.03481904844438803,\n \"acc_norm\": 0.42857142857142855,\n \"acc_norm_stderr\": 0.03481904844438803\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.56,\n \"acc_stderr\": 0.04988876515698589,\n \"acc_norm\": 0.56,\n \"acc_norm_stderr\": 0.04988876515698589\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.6303030303030303,\n \"acc_stderr\": 0.03769430314512567,\n \"acc_norm\": 0.6303030303030303,\n \"acc_norm_stderr\": 0.03769430314512567\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7272727272727273,\n \"acc_stderr\": 0.03173071239071724,\n \"acc_norm\": 0.7272727272727273,\n \"acc_norm_stderr\": 0.03173071239071724\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.7875647668393783,\n \"acc_stderr\": 0.02951928261681723,\n \"acc_norm\": 0.7875647668393783,\n \"acc_norm_stderr\": 0.02951928261681723\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.541025641025641,\n 
\"acc_stderr\": 0.025265525491284295,\n \"acc_norm\": 0.541025641025641,\n \"acc_norm_stderr\": 0.025265525491284295\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.3296296296296296,\n \"acc_stderr\": 0.028661201116524575,\n \"acc_norm\": 0.3296296296296296,\n \"acc_norm_stderr\": 0.028661201116524575\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.5462184873949579,\n \"acc_stderr\": 0.03233943468182088,\n \"acc_norm\": 0.5462184873949579,\n \"acc_norm_stderr\": 0.03233943468182088\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3443708609271523,\n \"acc_stderr\": 0.038796870240733264,\n \"acc_norm\": 0.3443708609271523,\n \"acc_norm_stderr\": 0.038796870240733264\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.7504587155963303,\n \"acc_stderr\": 0.018553897629501628,\n \"acc_norm\": 0.7504587155963303,\n \"acc_norm_stderr\": 0.018553897629501628\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.4166666666666667,\n \"acc_stderr\": 0.03362277436608044,\n \"acc_norm\": 0.4166666666666667,\n \"acc_norm_stderr\": 0.03362277436608044\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7156862745098039,\n \"acc_stderr\": 0.03166009679399814,\n \"acc_norm\": 0.7156862745098039,\n \"acc_norm_stderr\": 0.03166009679399814\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.679324894514768,\n \"acc_stderr\": 0.030381931949990403,\n \"acc_norm\": 0.679324894514768,\n \"acc_norm_stderr\": 0.030381931949990403\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6502242152466368,\n \"acc_stderr\": 0.03200736719484503,\n \"acc_norm\": 0.6502242152466368,\n \"acc_norm_stderr\": 0.03200736719484503\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.6412213740458015,\n \"acc_stderr\": 0.04206739313864908,\n \"acc_norm\": 0.6412213740458015,\n \"acc_norm_stderr\": 0.04206739313864908\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7024793388429752,\n \"acc_stderr\": 0.04173349148083499,\n \"acc_norm\": 0.7024793388429752,\n \"acc_norm_stderr\": 0.04173349148083499\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.6574074074074074,\n \"acc_stderr\": 0.0458790474130181,\n \"acc_norm\": 0.6574074074074074,\n \"acc_norm_stderr\": 0.0458790474130181\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7116564417177914,\n \"acc_stderr\": 0.03559039531617342,\n \"acc_norm\": 0.7116564417177914,\n \"acc_norm_stderr\": 0.03559039531617342\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.04745789978762494,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.04745789978762494\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7087378640776699,\n \"acc_stderr\": 0.044986763205729224,\n \"acc_norm\": 0.7087378640776699,\n \"acc_norm_stderr\": 0.044986763205729224\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8376068376068376,\n \"acc_stderr\": 0.02416161812798774,\n \"acc_norm\": 0.8376068376068376,\n \"acc_norm_stderr\": 0.02416161812798774\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.65,\n \"acc_stderr\": 0.0479372485441102,\n \"acc_norm\": 0.65,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7509578544061303,\n \"acc_stderr\": 0.015464676163395965,\n \"acc_norm\": 0.7509578544061303,\n \"acc_norm_stderr\": 
0.015464676163395965\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.6098265895953757,\n \"acc_stderr\": 0.026261677607806642,\n \"acc_norm\": 0.6098265895953757,\n \"acc_norm_stderr\": 0.026261677607806642\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.37988826815642457,\n \"acc_stderr\": 0.016232826818678513,\n \"acc_norm\": 0.37988826815642457,\n \"acc_norm_stderr\": 0.016232826818678513\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.6045751633986928,\n \"acc_stderr\": 0.027996723180631462,\n \"acc_norm\": 0.6045751633986928,\n \"acc_norm_stderr\": 0.027996723180631462\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6334405144694534,\n \"acc_stderr\": 0.02736807824397165,\n \"acc_norm\": 0.6334405144694534,\n \"acc_norm_stderr\": 0.02736807824397165\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.5925925925925926,\n \"acc_stderr\": 0.027339546640662737,\n \"acc_norm\": 0.5925925925925926,\n \"acc_norm_stderr\": 0.027339546640662737\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.4078014184397163,\n \"acc_stderr\": 0.02931601177634356,\n \"acc_norm\": 0.4078014184397163,\n \"acc_norm_stderr\": 0.02931601177634356\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.3859191655801825,\n \"acc_stderr\": 0.012433398911476143,\n \"acc_norm\": 0.3859191655801825,\n \"acc_norm_stderr\": 0.012433398911476143\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.5367647058823529,\n \"acc_stderr\": 0.03029061918048569,\n \"acc_norm\": 0.5367647058823529,\n \"acc_norm_stderr\": 0.03029061918048569\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.5245098039215687,\n \"acc_stderr\": 0.02020351728026144,\n \"acc_norm\": 0.5245098039215687,\n \"acc_norm_stderr\": 0.02020351728026144\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6181818181818182,\n \"acc_stderr\": 0.046534298079135075,\n \"acc_norm\": 0.6181818181818182,\n \"acc_norm_stderr\": 0.046534298079135075\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.5877551020408164,\n \"acc_stderr\": 0.03151236044674268,\n \"acc_norm\": 0.5877551020408164,\n \"acc_norm_stderr\": 0.03151236044674268\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8308457711442786,\n \"acc_stderr\": 0.02650859065623326,\n \"acc_norm\": 0.8308457711442786,\n \"acc_norm_stderr\": 0.02650859065623326\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.79,\n \"acc_stderr\": 0.040936018074033256,\n \"acc_norm\": 0.79,\n \"acc_norm_stderr\": 0.040936018074033256\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.463855421686747,\n \"acc_stderr\": 0.03882310850890593,\n \"acc_norm\": 0.463855421686747,\n \"acc_norm_stderr\": 0.03882310850890593\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.7953216374269005,\n \"acc_stderr\": 0.030944459778533193,\n \"acc_norm\": 0.7953216374269005,\n \"acc_norm_stderr\": 0.030944459778533193\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.4663402692778458,\n \"mc1_stderr\": 0.017463793867168106,\n \"mc2\": 0.6276801807189292,\n \"mc2_stderr\": 0.015415755094430335\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7813733228097869,\n \"acc_stderr\": 0.011616198215773218\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.2304776345716452,\n \"acc_stderr\": 0.011600249020595822\n }\n}\n```", "repo_url": "https://huggingface.co/NovoCode/Novocode7b", "leaderboard_url": 
"https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_14T21_20_28.943538", "path": ["**/details_harness|arc:challenge|25_2024-01-14T21-20-28.943538.parquet"]}, {"split": "2024_01_23T00_46_49.917108", "path": ["**/details_harness|arc:challenge|25_2024-01-23T00-46-49.917108.parquet"]}, {"split": "2024_01_23T01_09_59.087164", "path": ["**/details_harness|arc:challenge|25_2024-01-23T01-09-59.087164.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-23T01-09-59.087164.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_14T21_20_28.943538", "path": ["**/details_harness|gsm8k|5_2024-01-14T21-20-28.943538.parquet"]}, {"split": "2024_01_23T00_46_49.917108", "path": ["**/details_harness|gsm8k|5_2024-01-23T00-46-49.917108.parquet"]}, {"split": "2024_01_23T01_09_59.087164", "path": ["**/details_harness|gsm8k|5_2024-01-23T01-09-59.087164.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-23T01-09-59.087164.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_14T21_20_28.943538", "path": ["**/details_harness|hellaswag|10_2024-01-14T21-20-28.943538.parquet"]}, {"split": "2024_01_23T00_46_49.917108", "path": ["**/details_harness|hellaswag|10_2024-01-23T00-46-49.917108.parquet"]}, {"split": "2024_01_23T01_09_59.087164", "path": ["**/details_harness|hellaswag|10_2024-01-23T01-09-59.087164.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-23T01-09-59.087164.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_14T21_20_28.943538", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T21-20-28.943538.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-14T21-20-28.943538.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-14T21-20-28.943538.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T21-20-28.943538.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T21-20-28.943538.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-14T21-20-28.943538.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T21-20-28.943538.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T21-20-28.943538.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T21-20-28.943538.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T21-20-28.943538.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-14T21-20-28.943538.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-14T21-20-28.943538.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T21-20-28.943538.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-14T21-20-28.943538.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T21-20-28.943538.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T21-20-28.943538.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T21-20-28.943538.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-14T21-20-28.943538.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T21-20-28.943538.parquet", 
"**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T21-20-28.943538.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T21-20-28.943538.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T21-20-28.943538.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T21-20-28.943538.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T21-20-28.943538.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T21-20-28.943538.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T21-20-28.943538.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T21-20-28.943538.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T21-20-28.943538.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T21-20-28.943538.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T21-20-28.943538.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T21-20-28.943538.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T21-20-28.943538.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-14T21-20-28.943538.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T21-20-28.943538.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-14T21-20-28.943538.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T21-20-28.943538.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T21-20-28.943538.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T21-20-28.943538.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-14T21-20-28.943538.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-14T21-20-28.943538.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T21-20-28.943538.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T21-20-28.943538.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T21-20-28.943538.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T21-20-28.943538.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-14T21-20-28.943538.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-14T21-20-28.943538.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-14T21-20-28.943538.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T21-20-28.943538.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-14T21-20-28.943538.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T21-20-28.943538.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T21-20-28.943538.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-14T21-20-28.943538.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-14T21-20-28.943538.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-14T21-20-28.943538.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T21-20-28.943538.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-14T21-20-28.943538.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-14T21-20-28.943538.parquet"]}, {"split": "2024_01_23T00_46_49.917108", "path": 
["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-23T00-46-49.917108.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-23T00-46-49.917108.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-23T00-46-49.917108.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-23T00-46-49.917108.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-23T00-46-49.917108.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-23T00-46-49.917108.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-23T00-46-49.917108.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-23T00-46-49.917108.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-23T00-46-49.917108.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-23T00-46-49.917108.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-23T00-46-49.917108.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-23T00-46-49.917108.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-23T00-46-49.917108.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-23T00-46-49.917108.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-23T00-46-49.917108.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-23T00-46-49.917108.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-23T00-46-49.917108.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-23T00-46-49.917108.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-23T00-46-49.917108.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-23T00-46-49.917108.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-23T00-46-49.917108.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-23T00-46-49.917108.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-23T00-46-49.917108.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-23T00-46-49.917108.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-23T00-46-49.917108.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-23T00-46-49.917108.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-23T00-46-49.917108.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-23T00-46-49.917108.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-23T00-46-49.917108.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-23T00-46-49.917108.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-23T00-46-49.917108.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-23T00-46-49.917108.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-23T00-46-49.917108.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-23T00-46-49.917108.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-23T00-46-49.917108.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-23T00-46-49.917108.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-23T00-46-49.917108.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-23T00-46-49.917108.parquet", 
"**/details_harness|hendrycksTest-management|5_2024-01-23T00-46-49.917108.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-23T00-46-49.917108.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-23T00-46-49.917108.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-23T00-46-49.917108.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-23T00-46-49.917108.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-23T00-46-49.917108.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-23T00-46-49.917108.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-23T00-46-49.917108.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-23T00-46-49.917108.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-23T00-46-49.917108.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-23T00-46-49.917108.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-23T00-46-49.917108.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-23T00-46-49.917108.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-23T00-46-49.917108.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-23T00-46-49.917108.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-23T00-46-49.917108.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-23T00-46-49.917108.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-23T00-46-49.917108.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-23T00-46-49.917108.parquet"]}, {"split": "2024_01_23T01_09_59.087164", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-23T01-09-59.087164.parquet", 
"**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-23T01-09-59.087164.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-23T01-09-59.087164.parquet", 
"**/details_harness|hendrycksTest-anatomy|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-23T01-09-59.087164.parquet", 
"**/details_harness|hendrycksTest-marketing|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-23T01-09-59.087164.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-23T01-09-59.087164.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_14T21_20_28.943538", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T21-20-28.943538.parquet"]}, {"split": "2024_01_23T00_46_49.917108", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-23T00-46-49.917108.parquet"]}, {"split": "2024_01_23T01_09_59.087164", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-23T01-09-59.087164.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-23T01-09-59.087164.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_14T21_20_28.943538", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-14T21-20-28.943538.parquet"]}, {"split": "2024_01_23T00_46_49.917108", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-23T00-46-49.917108.parquet"]}, {"split": "2024_01_23T01_09_59.087164", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-23T01-09-59.087164.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-23T01-09-59.087164.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_14T21_20_28.943538", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-14T21-20-28.943538.parquet"]}, {"split": "2024_01_23T00_46_49.917108", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-23T00-46-49.917108.parquet"]}, {"split": "2024_01_23T01_09_59.087164", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-23T01-09-59.087164.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-23T01-09-59.087164.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_14T21_20_28.943538", "path": 
["**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T21-20-28.943538.parquet"]}, {"split": "2024_01_23T00_46_49.917108", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-23T00-46-49.917108.parquet"]}, {"split": "2024_01_23T01_09_59.087164", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-23T01-09-59.087164.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-23T01-09-59.087164.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_14T21_20_28.943538", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T21-20-28.943538.parquet"]}, {"split": "2024_01_23T00_46_49.917108", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-23T00-46-49.917108.parquet"]}, {"split": "2024_01_23T01_09_59.087164", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-23T01-09-59.087164.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-23T01-09-59.087164.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_14T21_20_28.943538", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-14T21-20-28.943538.parquet"]}, {"split": "2024_01_23T00_46_49.917108", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-23T00-46-49.917108.parquet"]}, {"split": "2024_01_23T01_09_59.087164", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-23T01-09-59.087164.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-23T01-09-59.087164.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_14T21_20_28.943538", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T21-20-28.943538.parquet"]}, {"split": "2024_01_23T00_46_49.917108", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-23T00-46-49.917108.parquet"]}, {"split": "2024_01_23T01_09_59.087164", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-23T01-09-59.087164.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-23T01-09-59.087164.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_14T21_20_28.943538", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T21-20-28.943538.parquet"]}, {"split": "2024_01_23T00_46_49.917108", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-23T00-46-49.917108.parquet"]}, {"split": "2024_01_23T01_09_59.087164", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-23T01-09-59.087164.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-23T01-09-59.087164.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_14T21_20_28.943538", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T21-20-28.943538.parquet"]}, {"split": "2024_01_23T00_46_49.917108", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-23T00-46-49.917108.parquet"]}, {"split": "2024_01_23T01_09_59.087164", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-23T01-09-59.087164.parquet"]}, {"split": 
"latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-23T01-09-59.087164.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_14T21_20_28.943538", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T21-20-28.943538.parquet"]}, {"split": "2024_01_23T00_46_49.917108", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-23T00-46-49.917108.parquet"]}, {"split": "2024_01_23T01_09_59.087164", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-23T01-09-59.087164.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-23T01-09-59.087164.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_14T21_20_28.943538", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-14T21-20-28.943538.parquet"]}, {"split": "2024_01_23T00_46_49.917108", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-23T00-46-49.917108.parquet"]}, {"split": "2024_01_23T01_09_59.087164", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-23T01-09-59.087164.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-23T01-09-59.087164.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_14T21_20_28.943538", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-14T21-20-28.943538.parquet"]}, {"split": "2024_01_23T00_46_49.917108", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-23T00-46-49.917108.parquet"]}, {"split": "2024_01_23T01_09_59.087164", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-23T01-09-59.087164.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-23T01-09-59.087164.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_14T21_20_28.943538", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T21-20-28.943538.parquet"]}, {"split": "2024_01_23T00_46_49.917108", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-23T00-46-49.917108.parquet"]}, {"split": "2024_01_23T01_09_59.087164", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-23T01-09-59.087164.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-23T01-09-59.087164.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_14T21_20_28.943538", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-14T21-20-28.943538.parquet"]}, {"split": "2024_01_23T00_46_49.917108", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-23T00-46-49.917108.parquet"]}, {"split": "2024_01_23T01_09_59.087164", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-23T01-09-59.087164.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-23T01-09-59.087164.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_14T21_20_28.943538", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T21-20-28.943538.parquet"]}, {"split": "2024_01_23T00_46_49.917108", "path": 
["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-23T00-46-49.917108.parquet"]}, {"split": "2024_01_23T01_09_59.087164", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-23T01-09-59.087164.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-23T01-09-59.087164.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_14T21_20_28.943538", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T21-20-28.943538.parquet"]}, {"split": "2024_01_23T00_46_49.917108", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-23T00-46-49.917108.parquet"]}, {"split": "2024_01_23T01_09_59.087164", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-23T01-09-59.087164.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-23T01-09-59.087164.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_01_14T21_20_28.943538", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T21-20-28.943538.parquet"]}, {"split": "2024_01_23T00_46_49.917108", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-23T00-46-49.917108.parquet"]}, {"split": "2024_01_23T01_09_59.087164", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-23T01-09-59.087164.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-23T01-09-59.087164.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_14T21_20_28.943538", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-14T21-20-28.943538.parquet"]}, {"split": "2024_01_23T00_46_49.917108", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-23T00-46-49.917108.parquet"]}, {"split": "2024_01_23T01_09_59.087164", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-23T01-09-59.087164.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-23T01-09-59.087164.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_14T21_20_28.943538", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T21-20-28.943538.parquet"]}, {"split": "2024_01_23T00_46_49.917108", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-23T00-46-49.917108.parquet"]}, {"split": "2024_01_23T01_09_59.087164", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-23T01-09-59.087164.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-23T01-09-59.087164.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_14T21_20_28.943538", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T21-20-28.943538.parquet"]}, {"split": "2024_01_23T00_46_49.917108", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-23T00-46-49.917108.parquet"]}, {"split": "2024_01_23T01_09_59.087164", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-23T01-09-59.087164.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-23T01-09-59.087164.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_14T21_20_28.943538", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T21-20-28.943538.parquet"]}, {"split": "2024_01_23T00_46_49.917108", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-23T00-46-49.917108.parquet"]}, {"split": "2024_01_23T01_09_59.087164", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-23T01-09-59.087164.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-23T01-09-59.087164.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_14T21_20_28.943538", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T21-20-28.943538.parquet"]}, {"split": "2024_01_23T00_46_49.917108", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-23T00-46-49.917108.parquet"]}, {"split": "2024_01_23T01_09_59.087164", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-23T01-09-59.087164.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-23T01-09-59.087164.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_14T21_20_28.943538", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T21-20-28.943538.parquet"]}, {"split": "2024_01_23T00_46_49.917108", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-23T00-46-49.917108.parquet"]}, {"split": "2024_01_23T01_09_59.087164", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-23T01-09-59.087164.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-23T01-09-59.087164.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_14T21_20_28.943538", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T21-20-28.943538.parquet"]}, {"split": "2024_01_23T00_46_49.917108", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-23T00-46-49.917108.parquet"]}, {"split": "2024_01_23T01_09_59.087164", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-23T01-09-59.087164.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-23T01-09-59.087164.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_14T21_20_28.943538", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T21-20-28.943538.parquet"]}, {"split": "2024_01_23T00_46_49.917108", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-23T00-46-49.917108.parquet"]}, {"split": "2024_01_23T01_09_59.087164", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-23T01-09-59.087164.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-23T01-09-59.087164.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_14T21_20_28.943538", "path": 
["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T21-20-28.943538.parquet"]}, {"split": "2024_01_23T00_46_49.917108", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-23T00-46-49.917108.parquet"]}, {"split": "2024_01_23T01_09_59.087164", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-23T01-09-59.087164.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-23T01-09-59.087164.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_14T21_20_28.943538", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T21-20-28.943538.parquet"]}, {"split": "2024_01_23T00_46_49.917108", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-23T00-46-49.917108.parquet"]}, {"split": "2024_01_23T01_09_59.087164", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-23T01-09-59.087164.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-23T01-09-59.087164.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_14T21_20_28.943538", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T21-20-28.943538.parquet"]}, {"split": "2024_01_23T00_46_49.917108", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-23T00-46-49.917108.parquet"]}, {"split": "2024_01_23T01_09_59.087164", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-23T01-09-59.087164.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-23T01-09-59.087164.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_14T21_20_28.943538", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T21-20-28.943538.parquet"]}, {"split": "2024_01_23T00_46_49.917108", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-23T00-46-49.917108.parquet"]}, {"split": "2024_01_23T01_09_59.087164", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-23T01-09-59.087164.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-23T01-09-59.087164.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_14T21_20_28.943538", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T21-20-28.943538.parquet"]}, {"split": "2024_01_23T00_46_49.917108", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-23T00-46-49.917108.parquet"]}, {"split": "2024_01_23T01_09_59.087164", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-23T01-09-59.087164.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-23T01-09-59.087164.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_14T21_20_28.943538", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T21-20-28.943538.parquet"]}, {"split": "2024_01_23T00_46_49.917108", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-23T00-46-49.917108.parquet"]}, {"split": "2024_01_23T01_09_59.087164", 
"path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-23T01-09-59.087164.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-23T01-09-59.087164.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_14T21_20_28.943538", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T21-20-28.943538.parquet"]}, {"split": "2024_01_23T00_46_49.917108", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-23T00-46-49.917108.parquet"]}, {"split": "2024_01_23T01_09_59.087164", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-23T01-09-59.087164.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-23T01-09-59.087164.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_14T21_20_28.943538", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-14T21-20-28.943538.parquet"]}, {"split": "2024_01_23T00_46_49.917108", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-23T00-46-49.917108.parquet"]}, {"split": "2024_01_23T01_09_59.087164", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-23T01-09-59.087164.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-23T01-09-59.087164.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_14T21_20_28.943538", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T21-20-28.943538.parquet"]}, {"split": "2024_01_23T00_46_49.917108", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-23T00-46-49.917108.parquet"]}, {"split": "2024_01_23T01_09_59.087164", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-23T01-09-59.087164.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-23T01-09-59.087164.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_14T21_20_28.943538", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-14T21-20-28.943538.parquet"]}, {"split": "2024_01_23T00_46_49.917108", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-23T00-46-49.917108.parquet"]}, {"split": "2024_01_23T01_09_59.087164", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-23T01-09-59.087164.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-23T01-09-59.087164.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_14T21_20_28.943538", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T21-20-28.943538.parquet"]}, {"split": "2024_01_23T00_46_49.917108", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-23T00-46-49.917108.parquet"]}, {"split": "2024_01_23T01_09_59.087164", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-23T01-09-59.087164.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-23T01-09-59.087164.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_14T21_20_28.943538", "path": 
["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T21-20-28.943538.parquet"]}, {"split": "2024_01_23T00_46_49.917108", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-23T00-46-49.917108.parquet"]}, {"split": "2024_01_23T01_09_59.087164", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-23T01-09-59.087164.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-23T01-09-59.087164.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_14T21_20_28.943538", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T21-20-28.943538.parquet"]}, {"split": "2024_01_23T00_46_49.917108", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-23T00-46-49.917108.parquet"]}, {"split": "2024_01_23T01_09_59.087164", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-23T01-09-59.087164.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-23T01-09-59.087164.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_01_14T21_20_28.943538", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-14T21-20-28.943538.parquet"]}, {"split": "2024_01_23T00_46_49.917108", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-23T00-46-49.917108.parquet"]}, {"split": "2024_01_23T01_09_59.087164", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-23T01-09-59.087164.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-23T01-09-59.087164.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_14T21_20_28.943538", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-14T21-20-28.943538.parquet"]}, {"split": "2024_01_23T00_46_49.917108", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-23T00-46-49.917108.parquet"]}, {"split": "2024_01_23T01_09_59.087164", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-23T01-09-59.087164.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-23T01-09-59.087164.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_14T21_20_28.943538", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T21-20-28.943538.parquet"]}, {"split": "2024_01_23T00_46_49.917108", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-23T00-46-49.917108.parquet"]}, {"split": "2024_01_23T01_09_59.087164", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-23T01-09-59.087164.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-23T01-09-59.087164.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_14T21_20_28.943538", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T21-20-28.943538.parquet"]}, {"split": "2024_01_23T00_46_49.917108", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-23T00-46-49.917108.parquet"]}, {"split": "2024_01_23T01_09_59.087164", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-23T01-09-59.087164.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-23T01-09-59.087164.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_14T21_20_28.943538", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T21-20-28.943538.parquet"]}, {"split": "2024_01_23T00_46_49.917108", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-23T00-46-49.917108.parquet"]}, {"split": "2024_01_23T01_09_59.087164", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-23T01-09-59.087164.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-23T01-09-59.087164.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_14T21_20_28.943538", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T21-20-28.943538.parquet"]}, {"split": "2024_01_23T00_46_49.917108", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-23T00-46-49.917108.parquet"]}, {"split": "2024_01_23T01_09_59.087164", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-23T01-09-59.087164.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-23T01-09-59.087164.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_01_14T21_20_28.943538", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-14T21-20-28.943538.parquet"]}, {"split": "2024_01_23T00_46_49.917108", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-23T00-46-49.917108.parquet"]}, {"split": "2024_01_23T01_09_59.087164", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-23T01-09-59.087164.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-23T01-09-59.087164.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_14T21_20_28.943538", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-14T21-20-28.943538.parquet"]}, {"split": "2024_01_23T00_46_49.917108", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-23T00-46-49.917108.parquet"]}, {"split": "2024_01_23T01_09_59.087164", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-23T01-09-59.087164.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-23T01-09-59.087164.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_14T21_20_28.943538", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-14T21-20-28.943538.parquet"]}, {"split": "2024_01_23T00_46_49.917108", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-23T00-46-49.917108.parquet"]}, {"split": "2024_01_23T01_09_59.087164", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-23T01-09-59.087164.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-23T01-09-59.087164.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_14T21_20_28.943538", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T21-20-28.943538.parquet"]}, {"split": "2024_01_23T00_46_49.917108", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-23T00-46-49.917108.parquet"]}, {"split": "2024_01_23T01_09_59.087164", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-23T01-09-59.087164.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-23T01-09-59.087164.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_14T21_20_28.943538", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-14T21-20-28.943538.parquet"]}, {"split": "2024_01_23T00_46_49.917108", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-23T00-46-49.917108.parquet"]}, {"split": "2024_01_23T01_09_59.087164", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-23T01-09-59.087164.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-23T01-09-59.087164.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_14T21_20_28.943538", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T21-20-28.943538.parquet"]}, {"split": "2024_01_23T00_46_49.917108", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-23T00-46-49.917108.parquet"]}, {"split": "2024_01_23T01_09_59.087164", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-23T01-09-59.087164.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-23T01-09-59.087164.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_14T21_20_28.943538", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T21-20-28.943538.parquet"]}, {"split": "2024_01_23T00_46_49.917108", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-23T00-46-49.917108.parquet"]}, {"split": "2024_01_23T01_09_59.087164", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-23T01-09-59.087164.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-23T01-09-59.087164.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_14T21_20_28.943538", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-14T21-20-28.943538.parquet"]}, {"split": "2024_01_23T00_46_49.917108", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-23T00-46-49.917108.parquet"]}, {"split": "2024_01_23T01_09_59.087164", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-23T01-09-59.087164.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-23T01-09-59.087164.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_14T21_20_28.943538", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-14T21-20-28.943538.parquet"]}, {"split": "2024_01_23T00_46_49.917108", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-23T00-46-49.917108.parquet"]}, {"split": "2024_01_23T01_09_59.087164", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-23T01-09-59.087164.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-23T01-09-59.087164.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_14T21_20_28.943538", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-14T21-20-28.943538.parquet"]}, {"split": "2024_01_23T00_46_49.917108", "path": 
["**/details_harness|hendrycksTest-sociology|5_2024-01-23T00-46-49.917108.parquet"]}, {"split": "2024_01_23T01_09_59.087164", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-23T01-09-59.087164.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-23T01-09-59.087164.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_14T21_20_28.943538", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T21-20-28.943538.parquet"]}, {"split": "2024_01_23T00_46_49.917108", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-23T00-46-49.917108.parquet"]}, {"split": "2024_01_23T01_09_59.087164", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-23T01-09-59.087164.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-23T01-09-59.087164.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_14T21_20_28.943538", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-14T21-20-28.943538.parquet"]}, {"split": "2024_01_23T00_46_49.917108", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-23T00-46-49.917108.parquet"]}, {"split": "2024_01_23T01_09_59.087164", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-23T01-09-59.087164.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-23T01-09-59.087164.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_14T21_20_28.943538", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-14T21-20-28.943538.parquet"]}, {"split": "2024_01_23T00_46_49.917108", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-23T00-46-49.917108.parquet"]}, {"split": "2024_01_23T01_09_59.087164", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-23T01-09-59.087164.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-23T01-09-59.087164.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_14T21_20_28.943538", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-14T21-20-28.943538.parquet"]}, {"split": "2024_01_23T00_46_49.917108", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-23T00-46-49.917108.parquet"]}, {"split": "2024_01_23T01_09_59.087164", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-23T01-09-59.087164.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-23T01-09-59.087164.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_14T21_20_28.943538", "path": ["**/details_harness|winogrande|5_2024-01-14T21-20-28.943538.parquet"]}, {"split": "2024_01_23T00_46_49.917108", "path": ["**/details_harness|winogrande|5_2024-01-23T00-46-49.917108.parquet"]}, {"split": "2024_01_23T01_09_59.087164", "path": ["**/details_harness|winogrande|5_2024-01-23T01-09-59.087164.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-23T01-09-59.087164.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_01_14T21_20_28.943538", "path": ["results_2024-01-14T21-20-28.943538.parquet"]}, {"split": "2024_01_23T00_46_49.917108", "path": ["results_2024-01-23T00-46-49.917108.parquet"]}, {"split": "2024_01_23T01_09_59.087164", "path": ["results_2024-01-23T01-09-59.087164.parquet"]}, 
{"split": "latest", "path": ["results_2024-01-23T01-09-59.087164.parquet"]}]}]} | 2024-01-23T01:12:22+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of NovoCode/Novocode7b
Dataset automatically created during the evaluation run of model NovoCode/Novocode7b on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
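A minimal loading sketch (the code fence was dropped in this flattened copy; the repository id below assumes the leaderboard's standard `details_<org>__<model>` naming, and `harness_winogrande_5` is one of the configurations listed in this dataset's metadata):
```python
from datasets import load_dataset

# Repo id assumed from the standard details_<org>__<model> naming convention.
data = load_dataset("open-llm-leaderboard/details_NovoCode__Novocode7b",
	"harness_winogrande_5",
	split="train")
```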
## Latest results
These are the latest results from run 2024-01-23T01:09:59.087164 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Evaluation run of NovoCode/Novocode7b\n\n\n\nDataset automatically created during the evaluation run of model NovoCode/Novocode7b on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-01-23T01:09:59.087164(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of NovoCode/Novocode7b\n\n\n\nDataset automatically created during the evaluation run of model NovoCode/Novocode7b on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-01-23T01:09:59.087164(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] |
9e78ee1fa8cd4681c6d4259e3b66ee3ec1cf4c2e |
# Dataset Card for Evaluation run of kz919/mistral-7b-sft-open-orca-flan-50k
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [kz919/mistral-7b-sft-open-orca-flan-50k](https://huggingface.co/kz919/mistral-7b-sft-open-orca-flan-50k) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset

# Per-sample details for one evaluated task; "train" always points to the latest run.
data = load_dataset("open-llm-leaderboard/details_kz919__mistral-7b-sft-open-orca-flan-50k",
	"harness_winogrande_5",
	split="train")
```
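The aggregated scores live in the "results" configuration described above. A sketch of loading them (the `latest` split name follows the split lists in this dataset's metadata):

```python
from datasets import load_dataset, get_dataset_config_names

# List every available configuration (one per evaluated task, plus "results").
configs = get_dataset_config_names("open-llm-leaderboard/details_kz919__mistral-7b-sft-open-orca-flan-50k")

# Aggregated metrics for the most recent evaluation run.
results = load_dataset("open-llm-leaderboard/details_kz919__mistral-7b-sft-open-orca-flan-50k",
	"results",
	split="latest")
```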
## Latest results
These are the [latest results from run 2024-01-14T21:25:51.230819](https://huggingface.co/datasets/open-llm-leaderboard/details_kz919__mistral-7b-sft-open-orca-flan-50k/blob/main/results_2024-01-14T21-25-51.230819.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.5538213786755696,
"acc_stderr": 0.03369594673096056,
"acc_norm": 0.5621293960309836,
"acc_norm_stderr": 0.03447812044023231,
"mc1": 0.2533659730722154,
"mc1_stderr": 0.01522589934082683,
"mc2": 0.3749461951546611,
"mc2_stderr": 0.014143079789920542
},
"harness|arc:challenge|25": {
"acc": 0.5255972696245734,
"acc_stderr": 0.014592230885298964,
"acc_norm": 0.5878839590443686,
"acc_norm_stderr": 0.014383915302225403
},
"harness|hellaswag|10": {
"acc": 0.6160127464648476,
"acc_stderr": 0.004853608805843885,
"acc_norm": 0.8191595299741088,
"acc_norm_stderr": 0.0038409935166272657
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6,
"acc_stderr": 0.04232073695151589,
"acc_norm": 0.6,
"acc_norm_stderr": 0.04232073695151589
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.5723684210526315,
"acc_stderr": 0.04026097083296564,
"acc_norm": 0.5723684210526315,
"acc_norm_stderr": 0.04026097083296564
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.46,
"acc_stderr": 0.05009082659620333,
"acc_norm": 0.46,
"acc_norm_stderr": 0.05009082659620333
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6075471698113207,
"acc_stderr": 0.03005258057955784,
"acc_norm": 0.6075471698113207,
"acc_norm_stderr": 0.03005258057955784
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.6180555555555556,
"acc_stderr": 0.040629907841466674,
"acc_norm": 0.6180555555555556,
"acc_norm_stderr": 0.040629907841466674
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.4,
"acc_stderr": 0.049236596391733084,
"acc_norm": 0.4,
"acc_norm_stderr": 0.049236596391733084
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.45,
"acc_stderr": 0.049999999999999996,
"acc_norm": 0.45,
"acc_norm_stderr": 0.049999999999999996
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.36,
"acc_stderr": 0.04824181513244218,
"acc_norm": 0.36,
"acc_norm_stderr": 0.04824181513244218
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.5780346820809249,
"acc_stderr": 0.03765746693865149,
"acc_norm": 0.5780346820809249,
"acc_norm_stderr": 0.03765746693865149
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.39215686274509803,
"acc_stderr": 0.04858083574266345,
"acc_norm": 0.39215686274509803,
"acc_norm_stderr": 0.04858083574266345
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.74,
"acc_stderr": 0.0440844002276808,
"acc_norm": 0.74,
"acc_norm_stderr": 0.0440844002276808
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.502127659574468,
"acc_stderr": 0.03268572658667492,
"acc_norm": 0.502127659574468,
"acc_norm_stderr": 0.03268572658667492
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.42105263157894735,
"acc_stderr": 0.046446020912223177,
"acc_norm": 0.42105263157894735,
"acc_norm_stderr": 0.046446020912223177
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.496551724137931,
"acc_stderr": 0.041665675771015785,
"acc_norm": 0.496551724137931,
"acc_norm_stderr": 0.041665675771015785
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.38095238095238093,
"acc_stderr": 0.025010749116137595,
"acc_norm": 0.38095238095238093,
"acc_norm_stderr": 0.025010749116137595
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.2698412698412698,
"acc_stderr": 0.03970158273235173,
"acc_norm": 0.2698412698412698,
"acc_norm_stderr": 0.03970158273235173
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.28,
"acc_stderr": 0.04512608598542128,
"acc_norm": 0.28,
"acc_norm_stderr": 0.04512608598542128
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.6451612903225806,
"acc_stderr": 0.02721888977330876,
"acc_norm": 0.6451612903225806,
"acc_norm_stderr": 0.02721888977330876
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.458128078817734,
"acc_stderr": 0.03505630140785741,
"acc_norm": 0.458128078817734,
"acc_norm_stderr": 0.03505630140785741
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.55,
"acc_stderr": 0.05,
"acc_norm": 0.55,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.6727272727272727,
"acc_stderr": 0.036639749943912434,
"acc_norm": 0.6727272727272727,
"acc_norm_stderr": 0.036639749943912434
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7323232323232324,
"acc_stderr": 0.03154449888270286,
"acc_norm": 0.7323232323232324,
"acc_norm_stderr": 0.03154449888270286
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.7512953367875648,
"acc_stderr": 0.03119584087770029,
"acc_norm": 0.7512953367875648,
"acc_norm_stderr": 0.03119584087770029
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.5897435897435898,
"acc_stderr": 0.024939313906940794,
"acc_norm": 0.5897435897435898,
"acc_norm_stderr": 0.024939313906940794
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.2111111111111111,
"acc_stderr": 0.02488211685765508,
"acc_norm": 0.2111111111111111,
"acc_norm_stderr": 0.02488211685765508
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.5588235294117647,
"acc_stderr": 0.032252942323996406,
"acc_norm": 0.5588235294117647,
"acc_norm_stderr": 0.032252942323996406
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.26490066225165565,
"acc_stderr": 0.03603038545360384,
"acc_norm": 0.26490066225165565,
"acc_norm_stderr": 0.03603038545360384
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.7302752293577982,
"acc_stderr": 0.01902848671111544,
"acc_norm": 0.7302752293577982,
"acc_norm_stderr": 0.01902848671111544
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.38425925925925924,
"acc_stderr": 0.03317354514310742,
"acc_norm": 0.38425925925925924,
"acc_norm_stderr": 0.03317354514310742
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.6764705882352942,
"acc_stderr": 0.032834720561085606,
"acc_norm": 0.6764705882352942,
"acc_norm_stderr": 0.032834720561085606
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.6582278481012658,
"acc_stderr": 0.030874537537553617,
"acc_norm": 0.6582278481012658,
"acc_norm_stderr": 0.030874537537553617
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6233183856502242,
"acc_stderr": 0.032521134899291884,
"acc_norm": 0.6233183856502242,
"acc_norm_stderr": 0.032521134899291884
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.648854961832061,
"acc_stderr": 0.04186445163013751,
"acc_norm": 0.648854961832061,
"acc_norm_stderr": 0.04186445163013751
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.743801652892562,
"acc_stderr": 0.03984979653302872,
"acc_norm": 0.743801652892562,
"acc_norm_stderr": 0.03984979653302872
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.6388888888888888,
"acc_stderr": 0.04643454608906275,
"acc_norm": 0.6388888888888888,
"acc_norm_stderr": 0.04643454608906275
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.6441717791411042,
"acc_stderr": 0.03761521380046734,
"acc_norm": 0.6441717791411042,
"acc_norm_stderr": 0.03761521380046734
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.38392857142857145,
"acc_stderr": 0.04616143075028547,
"acc_norm": 0.38392857142857145,
"acc_norm_stderr": 0.04616143075028547
},
"harness|hendrycksTest-management|5": {
"acc": 0.6893203883495146,
"acc_stderr": 0.045821241601615506,
"acc_norm": 0.6893203883495146,
"acc_norm_stderr": 0.045821241601615506
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.7948717948717948,
"acc_stderr": 0.026453508054040332,
"acc_norm": 0.7948717948717948,
"acc_norm_stderr": 0.026453508054040332
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.66,
"acc_stderr": 0.04760952285695237,
"acc_norm": 0.66,
"acc_norm_stderr": 0.04760952285695237
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.7637292464878672,
"acc_stderr": 0.015190473717037497,
"acc_norm": 0.7637292464878672,
"acc_norm_stderr": 0.015190473717037497
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.6502890173410405,
"acc_stderr": 0.025674281456531018,
"acc_norm": 0.6502890173410405,
"acc_norm_stderr": 0.025674281456531018
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.2636871508379888,
"acc_stderr": 0.01473692638376196,
"acc_norm": 0.2636871508379888,
"acc_norm_stderr": 0.01473692638376196
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.5816993464052288,
"acc_stderr": 0.028245134024387292,
"acc_norm": 0.5816993464052288,
"acc_norm_stderr": 0.028245134024387292
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6688102893890675,
"acc_stderr": 0.026730620728004903,
"acc_norm": 0.6688102893890675,
"acc_norm_stderr": 0.026730620728004903
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.6728395061728395,
"acc_stderr": 0.026105673861409825,
"acc_norm": 0.6728395061728395,
"acc_norm_stderr": 0.026105673861409825
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.40070921985815605,
"acc_stderr": 0.029233465745573096,
"acc_norm": 0.40070921985815605,
"acc_norm_stderr": 0.029233465745573096
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.3878748370273794,
"acc_stderr": 0.012444998309675609,
"acc_norm": 0.3878748370273794,
"acc_norm_stderr": 0.012444998309675609
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.5147058823529411,
"acc_stderr": 0.03035969707904612,
"acc_norm": 0.5147058823529411,
"acc_norm_stderr": 0.03035969707904612
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.5898692810457516,
"acc_stderr": 0.019898412717635913,
"acc_norm": 0.5898692810457516,
"acc_norm_stderr": 0.019898412717635913
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6636363636363637,
"acc_stderr": 0.04525393596302505,
"acc_norm": 0.6636363636363637,
"acc_norm_stderr": 0.04525393596302505
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.6,
"acc_stderr": 0.031362502409358936,
"acc_norm": 0.6,
"acc_norm_stderr": 0.031362502409358936
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.7810945273631841,
"acc_stderr": 0.029239174636647,
"acc_norm": 0.7810945273631841,
"acc_norm_stderr": 0.029239174636647
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.78,
"acc_stderr": 0.04163331998932263,
"acc_norm": 0.78,
"acc_norm_stderr": 0.04163331998932263
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5301204819277109,
"acc_stderr": 0.03885425420866767,
"acc_norm": 0.5301204819277109,
"acc_norm_stderr": 0.03885425420866767
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.7602339181286549,
"acc_stderr": 0.032744852119469564,
"acc_norm": 0.7602339181286549,
"acc_norm_stderr": 0.032744852119469564
},
"harness|truthfulqa:mc|0": {
"mc1": 0.2533659730722154,
"mc1_stderr": 0.01522589934082683,
"mc2": 0.3749461951546611,
"mc2_stderr": 0.014143079789920542
},
"harness|winogrande|5": {
"acc": 0.7797947908445146,
"acc_stderr": 0.011646276755089684
},
"harness|gsm8k|5": {
"acc": 0.10310841546626232,
"acc_stderr": 0.008376436987507795
}
}
```
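For quick side-by-side inspection, the per-task entries above can be flattened into a table (a sketch using pandas; `latest` is a hypothetical variable standing for the parsed JSON dict shown above):

```python
import json
import pandas as pd

# `latest` is assumed to hold the dict printed above, e.g. latest = json.loads(raw).
rows = [{"task": task, **metrics} for task, metrics in latest.items() if task != "all"]
df = pd.DataFrame(rows).set_index("task")

# Tasks without an "acc" field (e.g. truthfulqa's mc1/mc2) simply show NaN here.
print(df[["acc", "acc_stderr"]].sort_values("acc", ascending=False).head(10))
```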
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | open-llm-leaderboard/details_kz919__mistral-7b-sft-open-orca-flan-50k | [
"region:us"
] | 2024-01-14T21:28:10+00:00 | {"pretty_name": "Evaluation run of kz919/mistral-7b-sft-open-orca-flan-50k", "dataset_summary": "Dataset automatically created during the evaluation run of model [kz919/mistral-7b-sft-open-orca-flan-50k](https://huggingface.co/kz919/mistral-7b-sft-open-orca-flan-50k) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_kz919__mistral-7b-sft-open-orca-flan-50k\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-14T21:25:51.230819](https://huggingface.co/datasets/open-llm-leaderboard/details_kz919__mistral-7b-sft-open-orca-flan-50k/blob/main/results_2024-01-14T21-25-51.230819.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5538213786755696,\n \"acc_stderr\": 0.03369594673096056,\n \"acc_norm\": 0.5621293960309836,\n \"acc_norm_stderr\": 0.03447812044023231,\n \"mc1\": 0.2533659730722154,\n \"mc1_stderr\": 0.01522589934082683,\n \"mc2\": 0.3749461951546611,\n \"mc2_stderr\": 0.014143079789920542\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.5255972696245734,\n \"acc_stderr\": 0.014592230885298964,\n \"acc_norm\": 0.5878839590443686,\n \"acc_norm_stderr\": 0.014383915302225403\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6160127464648476,\n \"acc_stderr\": 0.004853608805843885,\n \"acc_norm\": 0.8191595299741088,\n \"acc_norm_stderr\": 0.0038409935166272657\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6,\n \"acc_stderr\": 0.04232073695151589,\n \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.04232073695151589\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.5723684210526315,\n \"acc_stderr\": 0.04026097083296564,\n \"acc_norm\": 0.5723684210526315,\n \"acc_norm_stderr\": 0.04026097083296564\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.46,\n \"acc_stderr\": 0.05009082659620333,\n \"acc_norm\": 0.46,\n \"acc_norm_stderr\": 0.05009082659620333\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6075471698113207,\n \"acc_stderr\": 0.03005258057955784,\n \"acc_norm\": 0.6075471698113207,\n \"acc_norm_stderr\": 0.03005258057955784\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6180555555555556,\n \"acc_stderr\": 0.040629907841466674,\n \"acc_norm\": 0.6180555555555556,\n \"acc_norm_stderr\": 0.040629907841466674\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n 
\"acc\": 0.4,\n \"acc_stderr\": 0.049236596391733084,\n \"acc_norm\": 0.4,\n \"acc_norm_stderr\": 0.049236596391733084\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.45,\n \"acc_stderr\": 0.049999999999999996,\n \"acc_norm\": 0.45,\n \"acc_norm_stderr\": 0.049999999999999996\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.36,\n \"acc_stderr\": 0.04824181513244218,\n \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.04824181513244218\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.5780346820809249,\n \"acc_stderr\": 0.03765746693865149,\n \"acc_norm\": 0.5780346820809249,\n \"acc_norm_stderr\": 0.03765746693865149\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.39215686274509803,\n \"acc_stderr\": 0.04858083574266345,\n \"acc_norm\": 0.39215686274509803,\n \"acc_norm_stderr\": 0.04858083574266345\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.74,\n \"acc_stderr\": 0.0440844002276808,\n \"acc_norm\": 0.74,\n \"acc_norm_stderr\": 0.0440844002276808\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.502127659574468,\n \"acc_stderr\": 0.03268572658667492,\n \"acc_norm\": 0.502127659574468,\n \"acc_norm_stderr\": 0.03268572658667492\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.42105263157894735,\n \"acc_stderr\": 0.046446020912223177,\n \"acc_norm\": 0.42105263157894735,\n \"acc_norm_stderr\": 0.046446020912223177\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.496551724137931,\n \"acc_stderr\": 0.041665675771015785,\n \"acc_norm\": 0.496551724137931,\n \"acc_norm_stderr\": 0.041665675771015785\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.38095238095238093,\n \"acc_stderr\": 0.025010749116137595,\n \"acc_norm\": 0.38095238095238093,\n \"acc_norm_stderr\": 0.025010749116137595\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.2698412698412698,\n \"acc_stderr\": 0.03970158273235173,\n \"acc_norm\": 0.2698412698412698,\n \"acc_norm_stderr\": 0.03970158273235173\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.28,\n \"acc_stderr\": 0.04512608598542128,\n \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.04512608598542128\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.6451612903225806,\n \"acc_stderr\": 0.02721888977330876,\n \"acc_norm\": 0.6451612903225806,\n \"acc_norm_stderr\": 0.02721888977330876\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.458128078817734,\n \"acc_stderr\": 0.03505630140785741,\n \"acc_norm\": 0.458128078817734,\n \"acc_norm_stderr\": 0.03505630140785741\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.55,\n \"acc_stderr\": 0.05,\n \"acc_norm\": 0.55,\n \"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.6727272727272727,\n \"acc_stderr\": 0.036639749943912434,\n \"acc_norm\": 0.6727272727272727,\n \"acc_norm_stderr\": 0.036639749943912434\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7323232323232324,\n \"acc_stderr\": 0.03154449888270286,\n \"acc_norm\": 0.7323232323232324,\n \"acc_norm_stderr\": 0.03154449888270286\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.7512953367875648,\n \"acc_stderr\": 0.03119584087770029,\n \"acc_norm\": 0.7512953367875648,\n \"acc_norm_stderr\": 0.03119584087770029\n },\n 
\"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.5897435897435898,\n \"acc_stderr\": 0.024939313906940794,\n \"acc_norm\": 0.5897435897435898,\n \"acc_norm_stderr\": 0.024939313906940794\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.2111111111111111,\n \"acc_stderr\": 0.02488211685765508,\n \"acc_norm\": 0.2111111111111111,\n \"acc_norm_stderr\": 0.02488211685765508\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.5588235294117647,\n \"acc_stderr\": 0.032252942323996406,\n \"acc_norm\": 0.5588235294117647,\n \"acc_norm_stderr\": 0.032252942323996406\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.26490066225165565,\n \"acc_stderr\": 0.03603038545360384,\n \"acc_norm\": 0.26490066225165565,\n \"acc_norm_stderr\": 0.03603038545360384\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.7302752293577982,\n \"acc_stderr\": 0.01902848671111544,\n \"acc_norm\": 0.7302752293577982,\n \"acc_norm_stderr\": 0.01902848671111544\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.38425925925925924,\n \"acc_stderr\": 0.03317354514310742,\n \"acc_norm\": 0.38425925925925924,\n \"acc_norm_stderr\": 0.03317354514310742\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.6764705882352942,\n \"acc_stderr\": 0.032834720561085606,\n \"acc_norm\": 0.6764705882352942,\n \"acc_norm_stderr\": 0.032834720561085606\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.6582278481012658,\n \"acc_stderr\": 0.030874537537553617,\n \"acc_norm\": 0.6582278481012658,\n \"acc_norm_stderr\": 0.030874537537553617\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6233183856502242,\n \"acc_stderr\": 0.032521134899291884,\n \"acc_norm\": 0.6233183856502242,\n \"acc_norm_stderr\": 0.032521134899291884\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.648854961832061,\n \"acc_stderr\": 0.04186445163013751,\n \"acc_norm\": 0.648854961832061,\n \"acc_norm_stderr\": 0.04186445163013751\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.743801652892562,\n \"acc_stderr\": 0.03984979653302872,\n \"acc_norm\": 0.743801652892562,\n \"acc_norm_stderr\": 0.03984979653302872\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.6388888888888888,\n \"acc_stderr\": 0.04643454608906275,\n \"acc_norm\": 0.6388888888888888,\n \"acc_norm_stderr\": 0.04643454608906275\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.6441717791411042,\n \"acc_stderr\": 0.03761521380046734,\n \"acc_norm\": 0.6441717791411042,\n \"acc_norm_stderr\": 0.03761521380046734\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.38392857142857145,\n \"acc_stderr\": 0.04616143075028547,\n \"acc_norm\": 0.38392857142857145,\n \"acc_norm_stderr\": 0.04616143075028547\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.6893203883495146,\n \"acc_stderr\": 0.045821241601615506,\n \"acc_norm\": 0.6893203883495146,\n \"acc_norm_stderr\": 0.045821241601615506\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.7948717948717948,\n \"acc_stderr\": 0.026453508054040332,\n \"acc_norm\": 0.7948717948717948,\n \"acc_norm_stderr\": 0.026453508054040332\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.66,\n \"acc_stderr\": 0.04760952285695237,\n \"acc_norm\": 0.66,\n \"acc_norm_stderr\": 0.04760952285695237\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 
0.7637292464878672,\n \"acc_stderr\": 0.015190473717037497,\n \"acc_norm\": 0.7637292464878672,\n \"acc_norm_stderr\": 0.015190473717037497\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.6502890173410405,\n \"acc_stderr\": 0.025674281456531018,\n \"acc_norm\": 0.6502890173410405,\n \"acc_norm_stderr\": 0.025674281456531018\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.2636871508379888,\n \"acc_stderr\": 0.01473692638376196,\n \"acc_norm\": 0.2636871508379888,\n \"acc_norm_stderr\": 0.01473692638376196\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.5816993464052288,\n \"acc_stderr\": 0.028245134024387292,\n \"acc_norm\": 0.5816993464052288,\n \"acc_norm_stderr\": 0.028245134024387292\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6688102893890675,\n \"acc_stderr\": 0.026730620728004903,\n \"acc_norm\": 0.6688102893890675,\n \"acc_norm_stderr\": 0.026730620728004903\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.6728395061728395,\n \"acc_stderr\": 0.026105673861409825,\n \"acc_norm\": 0.6728395061728395,\n \"acc_norm_stderr\": 0.026105673861409825\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.40070921985815605,\n \"acc_stderr\": 0.029233465745573096,\n \"acc_norm\": 0.40070921985815605,\n \"acc_norm_stderr\": 0.029233465745573096\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.3878748370273794,\n \"acc_stderr\": 0.012444998309675609,\n \"acc_norm\": 0.3878748370273794,\n \"acc_norm_stderr\": 0.012444998309675609\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.5147058823529411,\n \"acc_stderr\": 0.03035969707904612,\n \"acc_norm\": 0.5147058823529411,\n \"acc_norm_stderr\": 0.03035969707904612\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.5898692810457516,\n \"acc_stderr\": 0.019898412717635913,\n \"acc_norm\": 0.5898692810457516,\n \"acc_norm_stderr\": 0.019898412717635913\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6636363636363637,\n \"acc_stderr\": 0.04525393596302505,\n \"acc_norm\": 0.6636363636363637,\n \"acc_norm_stderr\": 0.04525393596302505\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.6,\n \"acc_stderr\": 0.031362502409358936,\n \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.031362502409358936\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.7810945273631841,\n \"acc_stderr\": 0.029239174636647,\n \"acc_norm\": 0.7810945273631841,\n \"acc_norm_stderr\": 0.029239174636647\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.78,\n \"acc_stderr\": 0.04163331998932263,\n \"acc_norm\": 0.78,\n \"acc_norm_stderr\": 0.04163331998932263\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5301204819277109,\n \"acc_stderr\": 0.03885425420866767,\n \"acc_norm\": 0.5301204819277109,\n \"acc_norm_stderr\": 0.03885425420866767\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.7602339181286549,\n \"acc_stderr\": 0.032744852119469564,\n \"acc_norm\": 0.7602339181286549,\n \"acc_norm_stderr\": 0.032744852119469564\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.2533659730722154,\n \"mc1_stderr\": 0.01522589934082683,\n \"mc2\": 0.3749461951546611,\n \"mc2_stderr\": 0.014143079789920542\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7797947908445146,\n \"acc_stderr\": 0.011646276755089684\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.10310841546626232,\n \"acc_stderr\": 0.008376436987507795\n }\n}\n```", "repo_url": 
"https://huggingface.co/kz919/mistral-7b-sft-open-orca-flan-50k", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_14T21_25_51.230819", "path": ["**/details_harness|arc:challenge|25_2024-01-14T21-25-51.230819.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-14T21-25-51.230819.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_14T21_25_51.230819", "path": ["**/details_harness|gsm8k|5_2024-01-14T21-25-51.230819.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-14T21-25-51.230819.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_14T21_25_51.230819", "path": ["**/details_harness|hellaswag|10_2024-01-14T21-25-51.230819.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-14T21-25-51.230819.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_14T21_25_51.230819", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T21-25-51.230819.parquet", 
"**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-14T21-25-51.230819.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T21-25-51.230819.parquet", 
"**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-14T21-25-51.230819.parquet", 
"**/details_harness|hendrycksTest-philosophy|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-14T21-25-51.230819.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-14T21-25-51.230819.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_14T21_25_51.230819", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T21-25-51.230819.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T21-25-51.230819.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_14T21_25_51.230819", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-14T21-25-51.230819.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-14T21-25-51.230819.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_14T21_25_51.230819", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-14T21-25-51.230819.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-14T21-25-51.230819.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_14T21_25_51.230819", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T21-25-51.230819.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T21-25-51.230819.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_14T21_25_51.230819", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T21-25-51.230819.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T21-25-51.230819.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_14T21_25_51.230819", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-14T21-25-51.230819.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-14T21-25-51.230819.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_14T21_25_51.230819", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T21-25-51.230819.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T21-25-51.230819.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_14T21_25_51.230819", "path": 
["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T21-25-51.230819.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T21-25-51.230819.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_14T21_25_51.230819", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T21-25-51.230819.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T21-25-51.230819.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_14T21_25_51.230819", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T21-25-51.230819.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T21-25-51.230819.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_14T21_25_51.230819", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-14T21-25-51.230819.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-14T21-25-51.230819.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_14T21_25_51.230819", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-14T21-25-51.230819.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-14T21-25-51.230819.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_14T21_25_51.230819", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T21-25-51.230819.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T21-25-51.230819.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_14T21_25_51.230819", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-14T21-25-51.230819.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-14T21-25-51.230819.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_14T21_25_51.230819", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T21-25-51.230819.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T21-25-51.230819.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_14T21_25_51.230819", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T21-25-51.230819.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T21-25-51.230819.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_01_14T21_25_51.230819", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T21-25-51.230819.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T21-25-51.230819.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_14T21_25_51.230819", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-14T21-25-51.230819.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-global_facts|5_2024-01-14T21-25-51.230819.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_14T21_25_51.230819", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T21-25-51.230819.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T21-25-51.230819.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_14T21_25_51.230819", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T21-25-51.230819.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T21-25-51.230819.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_14T21_25_51.230819", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T21-25-51.230819.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T21-25-51.230819.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_14T21_25_51.230819", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T21-25-51.230819.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T21-25-51.230819.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_14T21_25_51.230819", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T21-25-51.230819.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T21-25-51.230819.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_14T21_25_51.230819", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T21-25-51.230819.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T21-25-51.230819.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_14T21_25_51.230819", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T21-25-51.230819.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T21-25-51.230819.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_14T21_25_51.230819", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T21-25-51.230819.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T21-25-51.230819.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_14T21_25_51.230819", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T21-25-51.230819.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T21-25-51.230819.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_14T21_25_51.230819", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T21-25-51.230819.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T21-25-51.230819.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_14T21_25_51.230819", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T21-25-51.230819.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T21-25-51.230819.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_14T21_25_51.230819", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T21-25-51.230819.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T21-25-51.230819.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_14T21_25_51.230819", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T21-25-51.230819.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T21-25-51.230819.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_14T21_25_51.230819", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T21-25-51.230819.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T21-25-51.230819.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_14T21_25_51.230819", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-14T21-25-51.230819.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-14T21-25-51.230819.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_14T21_25_51.230819", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T21-25-51.230819.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T21-25-51.230819.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_14T21_25_51.230819", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-14T21-25-51.230819.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-14T21-25-51.230819.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_14T21_25_51.230819", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T21-25-51.230819.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T21-25-51.230819.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_14T21_25_51.230819", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T21-25-51.230819.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T21-25-51.230819.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_14T21_25_51.230819", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T21-25-51.230819.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T21-25-51.230819.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_01_14T21_25_51.230819", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-14T21-25-51.230819.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-14T21-25-51.230819.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_14T21_25_51.230819", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-14T21-25-51.230819.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-14T21-25-51.230819.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_14T21_25_51.230819", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T21-25-51.230819.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T21-25-51.230819.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_14T21_25_51.230819", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T21-25-51.230819.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T21-25-51.230819.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_14T21_25_51.230819", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T21-25-51.230819.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T21-25-51.230819.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_14T21_25_51.230819", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T21-25-51.230819.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T21-25-51.230819.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_01_14T21_25_51.230819", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-14T21-25-51.230819.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-14T21-25-51.230819.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_14T21_25_51.230819", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-14T21-25-51.230819.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-14T21-25-51.230819.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_14T21_25_51.230819", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-14T21-25-51.230819.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-14T21-25-51.230819.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_14T21_25_51.230819", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T21-25-51.230819.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T21-25-51.230819.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_14T21_25_51.230819", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2024-01-14T21-25-51.230819.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-14T21-25-51.230819.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_14T21_25_51.230819", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T21-25-51.230819.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T21-25-51.230819.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_14T21_25_51.230819", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T21-25-51.230819.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T21-25-51.230819.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_14T21_25_51.230819", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-14T21-25-51.230819.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-14T21-25-51.230819.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_14T21_25_51.230819", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-14T21-25-51.230819.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-14T21-25-51.230819.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_14T21_25_51.230819", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-14T21-25-51.230819.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-14T21-25-51.230819.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_14T21_25_51.230819", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T21-25-51.230819.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T21-25-51.230819.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_14T21_25_51.230819", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-14T21-25-51.230819.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-14T21-25-51.230819.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_14T21_25_51.230819", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-14T21-25-51.230819.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-14T21-25-51.230819.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_14T21_25_51.230819", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-14T21-25-51.230819.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-14T21-25-51.230819.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_14T21_25_51.230819", "path": ["**/details_harness|winogrande|5_2024-01-14T21-25-51.230819.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-14T21-25-51.230819.parquet"]}]}, {"config_name": "results", "data_files": [{"split": 
"2024_01_14T21_25_51.230819", "path": ["results_2024-01-14T21-25-51.230819.parquet"]}, {"split": "latest", "path": ["results_2024-01-14T21-25-51.230819.parquet"]}]}]} | 2024-01-14T21:28:31+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of kz919/mistral-7b-sft-open-orca-flan-50k
Dataset automatically created during the evaluation run of model kz919/mistral-7b-sft-open-orca-flan-50k on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
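```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_kz919__mistral-7b-sft-open-orca-flan-50k",
	"harness_winogrande_5",
	split="train")
```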
## Latest results
These are the latest results from run 2024-01-14T21:25:51.230819 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval).
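The aggregated scores for this run can also be pulled programmatically. A minimal sketch (the `results` configuration and `latest` split names below are taken from this card's metadata; the exact row schema is not guaranteed):

```python
from datasets import load_dataset

# The "results" configuration stores the aggregated metrics;
# its "latest" split points at the most recent evaluation run.
results = load_dataset(
    "open-llm-leaderboard/details_kz919__mistral-7b-sft-open-orca-flan-50k",
    "results",
    split="latest",
)
print(results[0])  # first (and typically only) row of aggregated scores
```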
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Evaluation run of kz919/mistral-7b-sft-open-orca-flan-50k\n\n\n\nDataset automatically created during the evaluation run of model kz919/mistral-7b-sft-open-orca-flan-50k on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-01-14T21:25:51.230819(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of kz919/mistral-7b-sft-open-orca-flan-50k\n\n\n\nDataset automatically created during the evaluation run of model kz919/mistral-7b-sft-open-orca-flan-50k on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-01-14T21:25:51.230819(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] |
98632809254d6f073203049e47e194057d0d0776 |
# Dataset Card for Evaluation run of bhavinjawade/SOLAR-10B-Nector-DPO-Jawade
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [bhavinjawade/SOLAR-10B-Nector-DPO-Jawade](https://huggingface.co/bhavinjawade/SOLAR-10B-Nector-DPO-Jawade) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_bhavinjawade__SOLAR-10B-Nector-DPO-Jawade",
"harness_winogrande_5",
split="train")
```
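The aggregated metrics live in the dedicated "results" configuration. A minimal sketch, assuming the same configuration/split layout as the other leaderboard detail repositories (with "latest" tracking the most recent run):

```python
from datasets import load_dataset

# "latest" should resolve to the most recent evaluation run for this model.
results = load_dataset(
    "open-llm-leaderboard/details_bhavinjawade__SOLAR-10B-Nector-DPO-Jawade",
    "results",
    split="latest",
)
```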
## Latest results
These are the [latest results from run 2024-01-14T21:40:44.530689](https://huggingface.co/datasets/open-llm-leaderboard/details_bhavinjawade__SOLAR-10B-Nector-DPO-Jawade/blob/main/results_2024-01-14T21-40-44.530689.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6659513885128865,
"acc_stderr": 0.03153636640803569,
"acc_norm": 0.6668604037396749,
"acc_norm_stderr": 0.03217609086906697,
"mc1": 0.5618115055079559,
"mc1_stderr": 0.01736923616440442,
"mc2": 0.7092186670643685,
"mc2_stderr": 0.01520446597729704
},
"harness|arc:challenge|25": {
"acc": 0.6851535836177475,
"acc_stderr": 0.01357265770308495,
"acc_norm": 0.7133105802047781,
"acc_norm_stderr": 0.013214986329274779
},
"harness|hellaswag|10": {
"acc": 0.7124078868751245,
"acc_stderr": 0.0045171484341804905,
"acc_norm": 0.8861780521808404,
"acc_norm_stderr": 0.0031694581233577238
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.45,
"acc_stderr": 0.049999999999999996,
"acc_norm": 0.45,
"acc_norm_stderr": 0.049999999999999996
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6296296296296297,
"acc_stderr": 0.041716541613545426,
"acc_norm": 0.6296296296296297,
"acc_norm_stderr": 0.041716541613545426
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.75,
"acc_stderr": 0.03523807393012047,
"acc_norm": 0.75,
"acc_norm_stderr": 0.03523807393012047
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.73,
"acc_stderr": 0.04461960433384741,
"acc_norm": 0.73,
"acc_norm_stderr": 0.04461960433384741
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6943396226415094,
"acc_stderr": 0.028353298073322666,
"acc_norm": 0.6943396226415094,
"acc_norm_stderr": 0.028353298073322666
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.8055555555555556,
"acc_stderr": 0.03309615177059006,
"acc_norm": 0.8055555555555556,
"acc_norm_stderr": 0.03309615177059006
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.45,
"acc_stderr": 0.05,
"acc_norm": 0.45,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.58,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.58,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.26,
"acc_stderr": 0.04408440022768078,
"acc_norm": 0.26,
"acc_norm_stderr": 0.04408440022768078
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6473988439306358,
"acc_stderr": 0.03643037168958548,
"acc_norm": 0.6473988439306358,
"acc_norm_stderr": 0.03643037168958548
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.38235294117647056,
"acc_stderr": 0.04835503696107223,
"acc_norm": 0.38235294117647056,
"acc_norm_stderr": 0.04835503696107223
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.76,
"acc_stderr": 0.042923469599092816,
"acc_norm": 0.76,
"acc_norm_stderr": 0.042923469599092816
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.6042553191489362,
"acc_stderr": 0.031967586978353627,
"acc_norm": 0.6042553191489362,
"acc_norm_stderr": 0.031967586978353627
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.4824561403508772,
"acc_stderr": 0.04700708033551038,
"acc_norm": 0.4824561403508772,
"acc_norm_stderr": 0.04700708033551038
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.6137931034482759,
"acc_stderr": 0.04057324734419036,
"acc_norm": 0.6137931034482759,
"acc_norm_stderr": 0.04057324734419036
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.47619047619047616,
"acc_stderr": 0.025722097064388535,
"acc_norm": 0.47619047619047616,
"acc_norm_stderr": 0.025722097064388535
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.40476190476190477,
"acc_stderr": 0.04390259265377562,
"acc_norm": 0.40476190476190477,
"acc_norm_stderr": 0.04390259265377562
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.39,
"acc_stderr": 0.04902071300001975,
"acc_norm": 0.39,
"acc_norm_stderr": 0.04902071300001975
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.8193548387096774,
"acc_stderr": 0.021886178567172534,
"acc_norm": 0.8193548387096774,
"acc_norm_stderr": 0.021886178567172534
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.49261083743842365,
"acc_stderr": 0.03517603540361008,
"acc_norm": 0.49261083743842365,
"acc_norm_stderr": 0.03517603540361008
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.71,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.71,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7878787878787878,
"acc_stderr": 0.03192271569548301,
"acc_norm": 0.7878787878787878,
"acc_norm_stderr": 0.03192271569548301
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.8838383838383839,
"acc_stderr": 0.022828881775249377,
"acc_norm": 0.8838383838383839,
"acc_norm_stderr": 0.022828881775249377
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9119170984455959,
"acc_stderr": 0.02045374660160103,
"acc_norm": 0.9119170984455959,
"acc_norm_stderr": 0.02045374660160103
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6717948717948717,
"acc_stderr": 0.023807633198657262,
"acc_norm": 0.6717948717948717,
"acc_norm_stderr": 0.023807633198657262
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3814814814814815,
"acc_stderr": 0.029616718927497593,
"acc_norm": 0.3814814814814815,
"acc_norm_stderr": 0.029616718927497593
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.7016806722689075,
"acc_stderr": 0.029719142876342853,
"acc_norm": 0.7016806722689075,
"acc_norm_stderr": 0.029719142876342853
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3576158940397351,
"acc_stderr": 0.03913453431177258,
"acc_norm": 0.3576158940397351,
"acc_norm_stderr": 0.03913453431177258
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8477064220183487,
"acc_stderr": 0.015405084393157074,
"acc_norm": 0.8477064220183487,
"acc_norm_stderr": 0.015405084393157074
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5462962962962963,
"acc_stderr": 0.033953227263757976,
"acc_norm": 0.5462962962962963,
"acc_norm_stderr": 0.033953227263757976
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8578431372549019,
"acc_stderr": 0.024509803921568617,
"acc_norm": 0.8578431372549019,
"acc_norm_stderr": 0.024509803921568617
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8649789029535865,
"acc_stderr": 0.022245776632003694,
"acc_norm": 0.8649789029535865,
"acc_norm_stderr": 0.022245776632003694
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6860986547085202,
"acc_stderr": 0.03114679648297246,
"acc_norm": 0.6860986547085202,
"acc_norm_stderr": 0.03114679648297246
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7404580152671756,
"acc_stderr": 0.03844876139785271,
"acc_norm": 0.7404580152671756,
"acc_norm_stderr": 0.03844876139785271
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8099173553719008,
"acc_stderr": 0.03581796951709282,
"acc_norm": 0.8099173553719008,
"acc_norm_stderr": 0.03581796951709282
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7962962962962963,
"acc_stderr": 0.03893542518824847,
"acc_norm": 0.7962962962962963,
"acc_norm_stderr": 0.03893542518824847
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7239263803680982,
"acc_stderr": 0.035123852837050475,
"acc_norm": 0.7239263803680982,
"acc_norm_stderr": 0.035123852837050475
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.4642857142857143,
"acc_stderr": 0.04733667890053756,
"acc_norm": 0.4642857142857143,
"acc_norm_stderr": 0.04733667890053756
},
"harness|hendrycksTest-management|5": {
"acc": 0.8155339805825242,
"acc_stderr": 0.03840423627288276,
"acc_norm": 0.8155339805825242,
"acc_norm_stderr": 0.03840423627288276
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8589743589743589,
"acc_stderr": 0.022801382534597524,
"acc_norm": 0.8589743589743589,
"acc_norm_stderr": 0.022801382534597524
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.71,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.71,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8020434227330779,
"acc_stderr": 0.014248873549217575,
"acc_norm": 0.8020434227330779,
"acc_norm_stderr": 0.014248873549217575
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7485549132947977,
"acc_stderr": 0.023357365785874037,
"acc_norm": 0.7485549132947977,
"acc_norm_stderr": 0.023357365785874037
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.36201117318435755,
"acc_stderr": 0.016073067350153087,
"acc_norm": 0.36201117318435755,
"acc_norm_stderr": 0.016073067350153087
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7647058823529411,
"acc_stderr": 0.024288619466046095,
"acc_norm": 0.7647058823529411,
"acc_norm_stderr": 0.024288619466046095
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7202572347266881,
"acc_stderr": 0.02549425935069491,
"acc_norm": 0.7202572347266881,
"acc_norm_stderr": 0.02549425935069491
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7808641975308642,
"acc_stderr": 0.02301670564026219,
"acc_norm": 0.7808641975308642,
"acc_norm_stderr": 0.02301670564026219
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.524822695035461,
"acc_stderr": 0.02979071924382972,
"acc_norm": 0.524822695035461,
"acc_norm_stderr": 0.02979071924382972
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4915254237288136,
"acc_stderr": 0.01276840169726906,
"acc_norm": 0.4915254237288136,
"acc_norm_stderr": 0.01276840169726906
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.75,
"acc_stderr": 0.026303648393696036,
"acc_norm": 0.75,
"acc_norm_stderr": 0.026303648393696036
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6944444444444444,
"acc_stderr": 0.018635594034423983,
"acc_norm": 0.6944444444444444,
"acc_norm_stderr": 0.018635594034423983
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.7090909090909091,
"acc_stderr": 0.04350271442923243,
"acc_norm": 0.7090909090909091,
"acc_norm_stderr": 0.04350271442923243
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7346938775510204,
"acc_stderr": 0.028263889943784596,
"acc_norm": 0.7346938775510204,
"acc_norm_stderr": 0.028263889943784596
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8407960199004975,
"acc_stderr": 0.02587064676616913,
"acc_norm": 0.8407960199004975,
"acc_norm_stderr": 0.02587064676616913
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.92,
"acc_stderr": 0.0272659924344291,
"acc_norm": 0.92,
"acc_norm_stderr": 0.0272659924344291
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5783132530120482,
"acc_stderr": 0.038444531817709175,
"acc_norm": 0.5783132530120482,
"acc_norm_stderr": 0.038444531817709175
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.7719298245614035,
"acc_stderr": 0.032180937956023566,
"acc_norm": 0.7719298245614035,
"acc_norm_stderr": 0.032180937956023566
},
"harness|truthfulqa:mc|0": {
"mc1": 0.5618115055079559,
"mc1_stderr": 0.01736923616440442,
"mc2": 0.7092186670643685,
"mc2_stderr": 0.01520446597729704
},
"harness|winogrande|5": {
"acc": 0.8342541436464088,
"acc_stderr": 0.010450899545370632
},
"harness|gsm8k|5": {
"acc": 0.6459438968915845,
"acc_stderr": 0.013172728385222567
}
}
```
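To work with these numbers outside of `datasets`, the raw results file can also be downloaded and parsed directly. A minimal sketch, assuming the `huggingface_hub` client is installed (the filename matches the link above; depending on the dump version, the metrics excerpted here may be nested under a top-level `"results"` key):
```python
import json

from huggingface_hub import hf_hub_download

# Fetch the raw results JSON for this run from the dataset repository.
path = hf_hub_download(
    repo_id="open-llm-leaderboard/details_bhavinjawade__SOLAR-10B-Nector-DPO-Jawade",
    filename="results_2024-01-14T21-40-44.530689.json",
    repo_type="dataset",
)
with open(path) as f:
    data = json.load(f)

# The excerpt above is the metrics section; newer dumps may nest it
# under a "results" key, so fall back gracefully (an assumption).
metrics = data.get("results", data)
overall = metrics["all"]
print(f"acc:      {overall['acc']:.4f} ± {overall['acc_stderr']:.4f}")
print(f"acc_norm: {overall['acc_norm']:.4f} ± {overall['acc_norm_stderr']:.4f}")
```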
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | open-llm-leaderboard/details_bhavinjawade__SOLAR-10B-Nector-DPO-Jawade | [
"region:us"
] | 2024-01-14T21:43:01+00:00 | {"pretty_name": "Evaluation run of bhavinjawade/SOLAR-10B-Nector-DPO-Jawade", "dataset_summary": "Dataset automatically created during the evaluation run of model [bhavinjawade/SOLAR-10B-Nector-DPO-Jawade](https://huggingface.co/bhavinjawade/SOLAR-10B-Nector-DPO-Jawade) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_bhavinjawade__SOLAR-10B-Nector-DPO-Jawade\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-14T21:40:44.530689](https://huggingface.co/datasets/open-llm-leaderboard/details_bhavinjawade__SOLAR-10B-Nector-DPO-Jawade/blob/main/results_2024-01-14T21-40-44.530689.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6659513885128865,\n \"acc_stderr\": 0.03153636640803569,\n \"acc_norm\": 0.6668604037396749,\n \"acc_norm_stderr\": 0.03217609086906697,\n \"mc1\": 0.5618115055079559,\n \"mc1_stderr\": 0.01736923616440442,\n \"mc2\": 0.7092186670643685,\n \"mc2_stderr\": 0.01520446597729704\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6851535836177475,\n \"acc_stderr\": 0.01357265770308495,\n \"acc_norm\": 0.7133105802047781,\n \"acc_norm_stderr\": 0.013214986329274779\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.7124078868751245,\n \"acc_stderr\": 0.0045171484341804905,\n \"acc_norm\": 0.8861780521808404,\n \"acc_norm_stderr\": 0.0031694581233577238\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.45,\n \"acc_stderr\": 0.049999999999999996,\n \"acc_norm\": 0.45,\n \"acc_norm_stderr\": 0.049999999999999996\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6296296296296297,\n \"acc_stderr\": 0.041716541613545426,\n \"acc_norm\": 0.6296296296296297,\n \"acc_norm_stderr\": 0.041716541613545426\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.75,\n \"acc_stderr\": 0.03523807393012047,\n \"acc_norm\": 0.75,\n \"acc_norm_stderr\": 0.03523807393012047\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.73,\n \"acc_stderr\": 0.04461960433384741,\n \"acc_norm\": 0.73,\n \"acc_norm_stderr\": 0.04461960433384741\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6943396226415094,\n \"acc_stderr\": 0.028353298073322666,\n \"acc_norm\": 0.6943396226415094,\n \"acc_norm_stderr\": 0.028353298073322666\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.8055555555555556,\n \"acc_stderr\": 0.03309615177059006,\n \"acc_norm\": 0.8055555555555556,\n \"acc_norm_stderr\": 0.03309615177059006\n },\n \"harness|hendrycksTest-college_chemistry|5\": 
{\n \"acc\": 0.45,\n \"acc_stderr\": 0.05,\n \"acc_norm\": 0.45,\n \"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.58,\n \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\": 0.58,\n \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.26,\n \"acc_stderr\": 0.04408440022768078,\n \"acc_norm\": 0.26,\n \"acc_norm_stderr\": 0.04408440022768078\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6473988439306358,\n \"acc_stderr\": 0.03643037168958548,\n \"acc_norm\": 0.6473988439306358,\n \"acc_norm_stderr\": 0.03643037168958548\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.38235294117647056,\n \"acc_stderr\": 0.04835503696107223,\n \"acc_norm\": 0.38235294117647056,\n \"acc_norm_stderr\": 0.04835503696107223\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.76,\n \"acc_stderr\": 0.042923469599092816,\n \"acc_norm\": 0.76,\n \"acc_norm_stderr\": 0.042923469599092816\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.6042553191489362,\n \"acc_stderr\": 0.031967586978353627,\n \"acc_norm\": 0.6042553191489362,\n \"acc_norm_stderr\": 0.031967586978353627\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4824561403508772,\n \"acc_stderr\": 0.04700708033551038,\n \"acc_norm\": 0.4824561403508772,\n \"acc_norm_stderr\": 0.04700708033551038\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.6137931034482759,\n \"acc_stderr\": 0.04057324734419036,\n \"acc_norm\": 0.6137931034482759,\n \"acc_norm_stderr\": 0.04057324734419036\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.47619047619047616,\n \"acc_stderr\": 0.025722097064388535,\n \"acc_norm\": 0.47619047619047616,\n \"acc_norm_stderr\": 0.025722097064388535\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.40476190476190477,\n \"acc_stderr\": 0.04390259265377562,\n \"acc_norm\": 0.40476190476190477,\n \"acc_norm_stderr\": 0.04390259265377562\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.39,\n \"acc_stderr\": 0.04902071300001975,\n \"acc_norm\": 0.39,\n \"acc_norm_stderr\": 0.04902071300001975\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.8193548387096774,\n \"acc_stderr\": 0.021886178567172534,\n \"acc_norm\": 0.8193548387096774,\n \"acc_norm_stderr\": 0.021886178567172534\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.49261083743842365,\n \"acc_stderr\": 0.03517603540361008,\n \"acc_norm\": 0.49261083743842365,\n \"acc_norm_stderr\": 0.03517603540361008\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7878787878787878,\n \"acc_stderr\": 0.03192271569548301,\n \"acc_norm\": 0.7878787878787878,\n \"acc_norm_stderr\": 0.03192271569548301\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.8838383838383839,\n \"acc_stderr\": 0.022828881775249377,\n \"acc_norm\": 0.8838383838383839,\n \"acc_norm_stderr\": 0.022828881775249377\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.9119170984455959,\n \"acc_stderr\": 0.02045374660160103,\n \"acc_norm\": 0.9119170984455959,\n \"acc_norm_stderr\": 0.02045374660160103\n },\n 
\"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6717948717948717,\n \"acc_stderr\": 0.023807633198657262,\n \"acc_norm\": 0.6717948717948717,\n \"acc_norm_stderr\": 0.023807633198657262\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.3814814814814815,\n \"acc_stderr\": 0.029616718927497593,\n \"acc_norm\": 0.3814814814814815,\n \"acc_norm_stderr\": 0.029616718927497593\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.7016806722689075,\n \"acc_stderr\": 0.029719142876342853,\n \"acc_norm\": 0.7016806722689075,\n \"acc_norm_stderr\": 0.029719142876342853\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3576158940397351,\n \"acc_stderr\": 0.03913453431177258,\n \"acc_norm\": 0.3576158940397351,\n \"acc_norm_stderr\": 0.03913453431177258\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8477064220183487,\n \"acc_stderr\": 0.015405084393157074,\n \"acc_norm\": 0.8477064220183487,\n \"acc_norm_stderr\": 0.015405084393157074\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5462962962962963,\n \"acc_stderr\": 0.033953227263757976,\n \"acc_norm\": 0.5462962962962963,\n \"acc_norm_stderr\": 0.033953227263757976\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8578431372549019,\n \"acc_stderr\": 0.024509803921568617,\n \"acc_norm\": 0.8578431372549019,\n \"acc_norm_stderr\": 0.024509803921568617\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.8649789029535865,\n \"acc_stderr\": 0.022245776632003694,\n \"acc_norm\": 0.8649789029535865,\n \"acc_norm_stderr\": 0.022245776632003694\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6860986547085202,\n \"acc_stderr\": 0.03114679648297246,\n \"acc_norm\": 0.6860986547085202,\n \"acc_norm_stderr\": 0.03114679648297246\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7404580152671756,\n \"acc_stderr\": 0.03844876139785271,\n \"acc_norm\": 0.7404580152671756,\n \"acc_norm_stderr\": 0.03844876139785271\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.8099173553719008,\n \"acc_stderr\": 0.03581796951709282,\n \"acc_norm\": 0.8099173553719008,\n \"acc_norm_stderr\": 0.03581796951709282\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7962962962962963,\n \"acc_stderr\": 0.03893542518824847,\n \"acc_norm\": 0.7962962962962963,\n \"acc_norm_stderr\": 0.03893542518824847\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7239263803680982,\n \"acc_stderr\": 0.035123852837050475,\n \"acc_norm\": 0.7239263803680982,\n \"acc_norm_stderr\": 0.035123852837050475\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4642857142857143,\n \"acc_stderr\": 0.04733667890053756,\n \"acc_norm\": 0.4642857142857143,\n \"acc_norm_stderr\": 0.04733667890053756\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.8155339805825242,\n \"acc_stderr\": 0.03840423627288276,\n \"acc_norm\": 0.8155339805825242,\n \"acc_norm_stderr\": 0.03840423627288276\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8589743589743589,\n \"acc_stderr\": 0.022801382534597524,\n \"acc_norm\": 0.8589743589743589,\n \"acc_norm_stderr\": 0.022801382534597524\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 
0.8020434227330779,\n \"acc_stderr\": 0.014248873549217575,\n \"acc_norm\": 0.8020434227330779,\n \"acc_norm_stderr\": 0.014248873549217575\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7485549132947977,\n \"acc_stderr\": 0.023357365785874037,\n \"acc_norm\": 0.7485549132947977,\n \"acc_norm_stderr\": 0.023357365785874037\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.36201117318435755,\n \"acc_stderr\": 0.016073067350153087,\n \"acc_norm\": 0.36201117318435755,\n \"acc_norm_stderr\": 0.016073067350153087\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7647058823529411,\n \"acc_stderr\": 0.024288619466046095,\n \"acc_norm\": 0.7647058823529411,\n \"acc_norm_stderr\": 0.024288619466046095\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7202572347266881,\n \"acc_stderr\": 0.02549425935069491,\n \"acc_norm\": 0.7202572347266881,\n \"acc_norm_stderr\": 0.02549425935069491\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7808641975308642,\n \"acc_stderr\": 0.02301670564026219,\n \"acc_norm\": 0.7808641975308642,\n \"acc_norm_stderr\": 0.02301670564026219\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.524822695035461,\n \"acc_stderr\": 0.02979071924382972,\n \"acc_norm\": 0.524822695035461,\n \"acc_norm_stderr\": 0.02979071924382972\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4915254237288136,\n \"acc_stderr\": 0.01276840169726906,\n \"acc_norm\": 0.4915254237288136,\n \"acc_norm_stderr\": 0.01276840169726906\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.75,\n \"acc_stderr\": 0.026303648393696036,\n \"acc_norm\": 0.75,\n \"acc_norm_stderr\": 0.026303648393696036\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6944444444444444,\n \"acc_stderr\": 0.018635594034423983,\n \"acc_norm\": 0.6944444444444444,\n \"acc_norm_stderr\": 0.018635594034423983\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7090909090909091,\n \"acc_stderr\": 0.04350271442923243,\n \"acc_norm\": 0.7090909090909091,\n \"acc_norm_stderr\": 0.04350271442923243\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7346938775510204,\n \"acc_stderr\": 0.028263889943784596,\n \"acc_norm\": 0.7346938775510204,\n \"acc_norm_stderr\": 0.028263889943784596\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8407960199004975,\n \"acc_stderr\": 0.02587064676616913,\n \"acc_norm\": 0.8407960199004975,\n \"acc_norm_stderr\": 0.02587064676616913\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.92,\n \"acc_stderr\": 0.0272659924344291,\n \"acc_norm\": 0.92,\n \"acc_norm_stderr\": 0.0272659924344291\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5783132530120482,\n \"acc_stderr\": 0.038444531817709175,\n \"acc_norm\": 0.5783132530120482,\n \"acc_norm_stderr\": 0.038444531817709175\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.7719298245614035,\n \"acc_stderr\": 0.032180937956023566,\n \"acc_norm\": 0.7719298245614035,\n \"acc_norm_stderr\": 0.032180937956023566\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.5618115055079559,\n \"mc1_stderr\": 0.01736923616440442,\n \"mc2\": 0.7092186670643685,\n \"mc2_stderr\": 0.01520446597729704\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8342541436464088,\n \"acc_stderr\": 0.010450899545370632\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6459438968915845,\n \"acc_stderr\": 0.013172728385222567\n }\n}\n```", "repo_url": 
"https://huggingface.co/bhavinjawade/SOLAR-10B-Nector-DPO-Jawade", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_14T21_40_44.530689", "path": ["**/details_harness|arc:challenge|25_2024-01-14T21-40-44.530689.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-14T21-40-44.530689.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_14T21_40_44.530689", "path": ["**/details_harness|gsm8k|5_2024-01-14T21-40-44.530689.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-14T21-40-44.530689.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_14T21_40_44.530689", "path": ["**/details_harness|hellaswag|10_2024-01-14T21-40-44.530689.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-14T21-40-44.530689.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_14T21_40_44.530689", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T21-40-44.530689.parquet", 
"**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-14T21-40-44.530689.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T21-40-44.530689.parquet", 
"**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-14T21-40-44.530689.parquet", 
"**/details_harness|hendrycksTest-philosophy|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-14T21-40-44.530689.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-14T21-40-44.530689.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_14T21_40_44.530689", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T21-40-44.530689.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T21-40-44.530689.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_14T21_40_44.530689", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-14T21-40-44.530689.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-14T21-40-44.530689.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_14T21_40_44.530689", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-14T21-40-44.530689.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-14T21-40-44.530689.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_14T21_40_44.530689", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T21-40-44.530689.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T21-40-44.530689.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_14T21_40_44.530689", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T21-40-44.530689.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T21-40-44.530689.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_14T21_40_44.530689", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-14T21-40-44.530689.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-14T21-40-44.530689.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_14T21_40_44.530689", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T21-40-44.530689.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T21-40-44.530689.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_14T21_40_44.530689", "path": 
["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T21-40-44.530689.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T21-40-44.530689.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_14T21_40_44.530689", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T21-40-44.530689.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T21-40-44.530689.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_14T21_40_44.530689", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T21-40-44.530689.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T21-40-44.530689.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_14T21_40_44.530689", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-14T21-40-44.530689.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-14T21-40-44.530689.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_14T21_40_44.530689", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-14T21-40-44.530689.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-14T21-40-44.530689.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_14T21_40_44.530689", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T21-40-44.530689.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T21-40-44.530689.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_14T21_40_44.530689", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-14T21-40-44.530689.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-14T21-40-44.530689.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_14T21_40_44.530689", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T21-40-44.530689.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T21-40-44.530689.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_14T21_40_44.530689", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T21-40-44.530689.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T21-40-44.530689.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_01_14T21_40_44.530689", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T21-40-44.530689.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T21-40-44.530689.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_14T21_40_44.530689", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-14T21-40-44.530689.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-global_facts|5_2024-01-14T21-40-44.530689.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_14T21_40_44.530689", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T21-40-44.530689.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T21-40-44.530689.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_14T21_40_44.530689", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T21-40-44.530689.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T21-40-44.530689.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_14T21_40_44.530689", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T21-40-44.530689.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T21-40-44.530689.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_14T21_40_44.530689", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T21-40-44.530689.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T21-40-44.530689.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_14T21_40_44.530689", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T21-40-44.530689.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T21-40-44.530689.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_14T21_40_44.530689", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T21-40-44.530689.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T21-40-44.530689.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_14T21_40_44.530689", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T21-40-44.530689.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T21-40-44.530689.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_14T21_40_44.530689", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T21-40-44.530689.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T21-40-44.530689.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_14T21_40_44.530689", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T21-40-44.530689.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T21-40-44.530689.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_14T21_40_44.530689", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T21-40-44.530689.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T21-40-44.530689.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_14T21_40_44.530689", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T21-40-44.530689.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T21-40-44.530689.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_14T21_40_44.530689", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T21-40-44.530689.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T21-40-44.530689.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_14T21_40_44.530689", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T21-40-44.530689.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T21-40-44.530689.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_14T21_40_44.530689", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T21-40-44.530689.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T21-40-44.530689.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_14T21_40_44.530689", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-14T21-40-44.530689.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-14T21-40-44.530689.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_14T21_40_44.530689", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T21-40-44.530689.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T21-40-44.530689.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_14T21_40_44.530689", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-14T21-40-44.530689.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-14T21-40-44.530689.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_14T21_40_44.530689", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T21-40-44.530689.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T21-40-44.530689.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_14T21_40_44.530689", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T21-40-44.530689.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T21-40-44.530689.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_14T21_40_44.530689", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T21-40-44.530689.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T21-40-44.530689.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_01_14T21_40_44.530689", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-14T21-40-44.530689.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-14T21-40-44.530689.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_14T21_40_44.530689", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-14T21-40-44.530689.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-14T21-40-44.530689.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_14T21_40_44.530689", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T21-40-44.530689.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T21-40-44.530689.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_14T21_40_44.530689", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T21-40-44.530689.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T21-40-44.530689.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_14T21_40_44.530689", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T21-40-44.530689.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T21-40-44.530689.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_14T21_40_44.530689", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T21-40-44.530689.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T21-40-44.530689.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_01_14T21_40_44.530689", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-14T21-40-44.530689.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-14T21-40-44.530689.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_14T21_40_44.530689", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-14T21-40-44.530689.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-14T21-40-44.530689.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_14T21_40_44.530689", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-14T21-40-44.530689.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-14T21-40-44.530689.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_14T21_40_44.530689", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T21-40-44.530689.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T21-40-44.530689.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_14T21_40_44.530689", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2024-01-14T21-40-44.530689.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-14T21-40-44.530689.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_14T21_40_44.530689", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T21-40-44.530689.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T21-40-44.530689.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_14T21_40_44.530689", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T21-40-44.530689.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T21-40-44.530689.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_14T21_40_44.530689", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-14T21-40-44.530689.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-14T21-40-44.530689.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_14T21_40_44.530689", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-14T21-40-44.530689.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-14T21-40-44.530689.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_14T21_40_44.530689", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-14T21-40-44.530689.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-14T21-40-44.530689.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_14T21_40_44.530689", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T21-40-44.530689.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T21-40-44.530689.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_14T21_40_44.530689", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-14T21-40-44.530689.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-14T21-40-44.530689.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_14T21_40_44.530689", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-14T21-40-44.530689.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-14T21-40-44.530689.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_14T21_40_44.530689", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-14T21-40-44.530689.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-14T21-40-44.530689.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_14T21_40_44.530689", "path": ["**/details_harness|winogrande|5_2024-01-14T21-40-44.530689.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-14T21-40-44.530689.parquet"]}]}, {"config_name": "results", "data_files": [{"split": 
"2024_01_14T21_40_44.530689", "path": ["results_2024-01-14T21-40-44.530689.parquet"]}, {"split": "latest", "path": ["results_2024-01-14T21-40-44.530689.parquet"]}]}]} | 2024-01-14T21:43:22+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of bhavinjawade/SOLAR-10B-Nector-DPO-Jawade
Dataset automatically created during the evaluation run of model bhavinjawade/SOLAR-10B-Nector-DPO-Jawade on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
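A minimal sketch of that call is shown below; the repository id is inferred from the `details_<org>__<model>` naming pattern used by other Open LLM Leaderboard detail repositories, so verify it on the Hub before relying on it:

```python
from datasets import load_dataset

# Repository id inferred from the leaderboard's "details_<org>__<model>" pattern.
data = load_dataset("open-llm-leaderboard/details_bhavinjawade__SOLAR-10B-Nector-DPO-Jawade",
	"harness_winogrande_5",
	split="train")
```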
## Latest results
These are the latest results from run 2024-01-14T21:40:44.530689 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Evaluation run of bhavinjawade/SOLAR-10B-Nector-DPO-Jawade\n\n\n\nDataset automatically created during the evaluation run of model bhavinjawade/SOLAR-10B-Nector-DPO-Jawade on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-01-14T21:40:44.530689(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of bhavinjawade/SOLAR-10B-Nector-DPO-Jawade\n\n\n\nDataset automatically created during the evaluation run of model bhavinjawade/SOLAR-10B-Nector-DPO-Jawade on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-01-14T21:40:44.530689(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] |
503a0329f0545ca92e3b629eacad587d01ddcd91 | Distributed under the Creative Commons BY-SA 4.0 license, respecting the ShareAlike terms of the [Spider Dataset](https://yale-lily.github.io/spider).
Code explanations and links for the model's checkpoints and datasets are in the Github repository [mRAT-SQL](https://github.com/C4AI/gap-text2sql)
Here is the [Hugging Face collection](https://huggingface.co/collections/Marchanjo/mrat-sql-65a671743bb0e70b416561f6); you can download the model's checkpoints and datasets there, but for a fuller understanding it is better to go to the Github repository [mRAT-SQL](https://github.com/C4AI/gap-text2sql).
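For instance, the data in this repository can be pulled with the `datasets` library. This is only a sketch: the default configuration and split layout is an assumption here, so inspect the repository files if the call fails:

```python
from datasets import load_dataset

# "Marchanjo/spider-en-extra-3enr-1enb" is this repository; the default
# configuration/split layout is assumed rather than documented above.
ds = load_dataset("Marchanjo/spider-en-extra-3enr-1enb")
print(ds)
```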
# mRAT-SQL-FIT
## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention
Marcelo Archanjo Jose, Fabio Gagliardi Cozman
Long sequences of text are challenging in the context of transformers, due to quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (as techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of table and column names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy results from 0.718 to 0.736 in a validation dataset (Dev). Source code, evaluations, and checkpoints are available at: [mRAT-SQL](https://github.com/C4AI/gap-text2sql).
[Paper published in Springer-Nature - International Journal of Information Technology](https://doi.org/10.1007/s41870-023-01342-3); [here is the SharedIt link](https://rdcu.be/dff19) and [here is the pre-print on arXiv](https://arxiv.org/abs/2306.14256).
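As a rough illustration of the schema pruning idea, the sketch below keeps only tables and columns whose names share tokens with the question. This is a deliberately crude heuristic for exposition, not the authors' actual pruning procedure (see the mRAT-SQL repository for that):

```python
import re

def prune_schema(question: str, schema: dict) -> dict:
    """schema maps table name -> list of column names."""
    tokens = set(re.findall(r"\w+", question.lower()))
    pruned = {}
    for table, columns in schema.items():
        # Keep columns whose name parts overlap with the question tokens.
        kept = [c for c in columns if set(c.lower().split("_")) & tokens]
        if set(table.lower().split("_")) & tokens or kept:
            pruned[table] = kept or columns[:1]  # keep at least one column
    return pruned

schema = {"singer": ["singer_id", "name", "country"],
          "concert": ["concert_id", "theme", "year"]}
print(prune_schema("How many singers are from each country?", schema))
# {'singer': ['country']} -- the irrelevant 'concert' table is dropped.
```

Pruning the serialized schema this way shortens the concatenated question-plus-schema input, which is what lets it fit inside the 512-token limit mentioned above.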
# mRAT-SQL+GAP
## mRAT-SQL+GAP: A Portuguese Text-to-SQL Transformer
Marcelo Archanjo José, Fabio Gagliardi Cozman
The translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with original and translated training datasets together, even if a single target language is desired. This multilingual BART model fine-tuned with a double-size training dataset (English and Portuguese) achieved 83% of the baseline, making inferences for the Portuguese test dataset. This investigation can help other researchers to produce results in Machine Learning in a language different from English. Our multilingual ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: [mRAT-SQL](https://github.com/C4AI/gap-text2sql).
BRACIS 2021: [paper published in Springer Lecture Notes in Computer Science](https://link.springer.com/chapter/10.1007%2F978-3-030-91699-2_35); [here is the pre-print on arXiv](https://arxiv.org/abs/2110.03546).
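The key training recipe above, original plus translated data trained together, can be sketched with the `datasets` library as follows; the file names are hypothetical placeholders for Spider-style JSON splits:

```python
from datasets import load_dataset, concatenate_datasets

# Hypothetical local files: the English Spider train split and its Portuguese translation.
en = load_dataset("json", data_files="spider_train_en.json", split="train")
pt = load_dataset("json", data_files="spider_train_pt.json", split="train")

# Double-size training set (English + Portuguese), as in the experiments above.
train = concatenate_datasets([en, pt]).shuffle(seed=42)
print(len(train))
```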
Based on: RAT-SQL+GAP: [Github](https://github.com/awslabs/gap-text2sql). Paper: [AAAI 2021 paper](https://arxiv.org/abs/2012.10309) | Marchanjo/spider-en-extra-3enr-1enb | [
"license:cc-by-sa-4.0",
"arxiv:2306.14256",
"arxiv:2110.03546",
"arxiv:2012.10309",
"region:us"
] | 2024-01-14T21:46:17+00:00 | {"license": "cc-by-sa-4.0"} | 2024-01-16T12:37:42+00:00 | [
"2306.14256",
"2110.03546",
"2012.10309"
] | [] | TAGS
#license-cc-by-sa-4.0 #arxiv-2306.14256 #arxiv-2110.03546 #arxiv-2012.10309 #region-us
| Distributed under the Creative Commons BY-SA 4.0 license, respecting the ShareAlike terms of the Spider Dataset.
Code explanations and links for the model's checkpoints and datasets are in the Github repository mRAT-SQL
Here is the Hugging Face collection; you can download the model's checkpoints and datasets there, but for a fuller understanding it is better to go to the Github repository mRAT-SQL.
# mRAT-SQL-FIT
## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention
Marcelo Archanjo Jose, Fabio Gagliardi Cozman
Long sequences of text are challenging in the context of transformers, due to quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (as techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of table and column names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy results from 0.718 to 0.736 in a validation dataset (Dev). Source code, evaluations, and checkpoints are available at: mRAT-SQL.
Paper published in Springer-Nature - International Journal of Information Technology; here is the SharedIt link and here is the pre-print on arXiv.
# mRAT-SQL+GAP
## mRAT-SQL+GAP: A Portuguese Text-to-SQL Transformer
Marcelo Archanjo José, Fabio Gagliardi Cozman
The translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with original and translated training datasets together, even if a single target language is desired. This multilingual BART model fine-tuned with a double-size training dataset (English and Portuguese) achieved 83% of the baseline, making inferences for the Portuguese test dataset. This investigation can help other researchers to produce results in Machine Learning in a language different from English. Our multilingual ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: mRAT-SQL.
BRACIS 2021: paper published in Springer Lecture Notes in Computer Science; here is the pre-print on arXiv.
Based on: RAT-SQL+GAP: Github. Paper: AAAI 2021 paper | [
"# mRAT-SQL-FIT",
"## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention\nMarcelo Archanjo Jose, Fabio Gagliardi Cozman\n\nLong sequences of text are challenging in the context of transformers, due to quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (as techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of tables and columns names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy results from 0.718 to 0.736 in a validation dataset (Dev). Source code, evaluations, and checkpoints are available at: mRAT-SQL.\n\npaper published in Springer-Nature - International Journal of Information Technology, here the SharedIt link. here the pre-print in arXiv.",
"# mRAT-SQL+GAP",
"## mRAT-SQL+GAP:A Portuguese Text-to-SQL Transformer\nMarcelo Archanjo José, Fabio Gagliardi Cozman\n\nThe translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with original and translated training datasets together, even if a single target language is desired. This multilingual BART model fine-tuned with a double-size training dataset (English and Portuguese) achieved 83% of the baseline, making inferences for the Portuguese test dataset. This investigation can help other researchers to produce results in Machine Learning in a language different from English. Our multilingual ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: mRAT-SQL.\n\nBRACIS 2021: paper published in Springer Lecture Notes in Computer Science, here the pre-print in arXiv.\n\nBased on: RAT-SQL+GAP: Github. Paper: AAAI 2021 paper"
] | [
"TAGS\n#license-cc-by-sa-4.0 #arxiv-2306.14256 #arxiv-2110.03546 #arxiv-2012.10309 #region-us \n",
"# mRAT-SQL-FIT",
"## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention\nMarcelo Archanjo Jose, Fabio Gagliardi Cozman\n\nLong sequences of text are challenging in the context of transformers, due to quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (as techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of tables and columns names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy results from 0.718 to 0.736 in a validation dataset (Dev). Source code, evaluations, and checkpoints are available at: mRAT-SQL.\n\npaper published in Springer-Nature - International Journal of Information Technology, here the SharedIt link. here the pre-print in arXiv.",
"# mRAT-SQL+GAP",
"## mRAT-SQL+GAP:A Portuguese Text-to-SQL Transformer\nMarcelo Archanjo José, Fabio Gagliardi Cozman\n\nThe translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with original and translated training datasets together, even if a single target language is desired. This multilingual BART model fine-tuned with a double-size training dataset (English and Portuguese) achieved 83% of the baseline, making inferences for the Portuguese test dataset. This investigation can help other researchers to produce results in Machine Learning in a language different from English. Our multilingual ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: mRAT-SQL.\n\nBRACIS 2021: paper published in Springer Lecture Notes in Computer Science, here the pre-print in arXiv.\n\nBased on: RAT-SQL+GAP: Github. Paper: AAAI 2021 paper"
] |
25d605319f0c76580ae913d76d83b6e8b39c40fd |
This repo contains about 100 rows of random speech-to-speech VoxPopuli data. It can be used for quick testing of code and pipelines | babs/vox-populi-subset | [
"region:us"
] | 2024-01-14T21:48:39+00:00 | {"dataset_info": {"features": [{"name": "source_id", "dtype": "string"}, {"name": "target_id", "dtype": "string"}, {"name": "source_audio", "dtype": "audio"}, {"name": "target_audio", "dtype": "audio"}, {"name": "target_units", "sequence": "int32"}], "splits": [{"name": "train", "num_bytes": 459597811.0, "num_examples": 1000}], "download_size": 457570458, "dataset_size": 459597811.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2024-01-15T19:44:53+00:00 | [] | [] | TAGS
#region-us
|
This repo contains about 100 rows of random speech-to-speech VoxPopuli data. It can be used for quick testing of code and pipelines | [] | [
"TAGS\n#region-us \n"
] |
40ff2e5f9c768d72de7fee0bb924d0a3b52ec124 |
# Dataset Card for Evaluation run of h2m/mhm-7b-v1.3
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [h2m/mhm-7b-v1.3](https://huggingface.co/h2m/mhm-7b-v1.3) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_h2m__mhm-7b-v1.3",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2024-01-14T21:47:14.933980](https://huggingface.co/datasets/open-llm-leaderboard/details_h2m__mhm-7b-v1.3/blob/main/results_2024-01-14T21-47-14.933980.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.45565733826199045,
"acc_stderr": 0.034441057472680836,
"acc_norm": 0.46104688055946413,
"acc_norm_stderr": 0.03520094341367283,
"mc1": 0.2864137086903305,
"mc1_stderr": 0.015826142439502353,
"mc2": 0.4622053324775365,
"mc2_stderr": 0.015177238897436999
},
"harness|arc:challenge|25": {
"acc": 0.44197952218430037,
"acc_stderr": 0.014512682523128345,
"acc_norm": 0.47525597269624575,
"acc_norm_stderr": 0.014593487694937738
},
"harness|hellaswag|10": {
"acc": 0.4901414060944035,
"acc_stderr": 0.004988811384747417,
"acc_norm": 0.6530571599283012,
"acc_norm_stderr": 0.004750245757533308
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.22,
"acc_stderr": 0.041633319989322695,
"acc_norm": 0.22,
"acc_norm_stderr": 0.041633319989322695
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.43703703703703706,
"acc_stderr": 0.04284958639753399,
"acc_norm": 0.43703703703703706,
"acc_norm_stderr": 0.04284958639753399
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.506578947368421,
"acc_stderr": 0.040685900502249704,
"acc_norm": 0.506578947368421,
"acc_norm_stderr": 0.040685900502249704
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.42,
"acc_stderr": 0.04960449637488584,
"acc_norm": 0.42,
"acc_norm_stderr": 0.04960449637488584
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.4641509433962264,
"acc_stderr": 0.030693675018458006,
"acc_norm": 0.4641509433962264,
"acc_norm_stderr": 0.030693675018458006
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.3333333333333333,
"acc_stderr": 0.039420826399272135,
"acc_norm": 0.3333333333333333,
"acc_norm_stderr": 0.039420826399272135
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.27,
"acc_stderr": 0.04461960433384741,
"acc_norm": 0.27,
"acc_norm_stderr": 0.04461960433384741
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.48,
"acc_stderr": 0.05021167315686781,
"acc_norm": 0.48,
"acc_norm_stderr": 0.05021167315686781
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.45664739884393063,
"acc_stderr": 0.03798106566014498,
"acc_norm": 0.45664739884393063,
"acc_norm_stderr": 0.03798106566014498
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.2647058823529412,
"acc_stderr": 0.04389869956808777,
"acc_norm": 0.2647058823529412,
"acc_norm_stderr": 0.04389869956808777
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.53,
"acc_stderr": 0.05016135580465919,
"acc_norm": 0.53,
"acc_norm_stderr": 0.05016135580465919
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.35319148936170214,
"acc_stderr": 0.031245325202761926,
"acc_norm": 0.35319148936170214,
"acc_norm_stderr": 0.031245325202761926
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.30701754385964913,
"acc_stderr": 0.043391383225798615,
"acc_norm": 0.30701754385964913,
"acc_norm_stderr": 0.043391383225798615
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.42758620689655175,
"acc_stderr": 0.041227371113703316,
"acc_norm": 0.42758620689655175,
"acc_norm_stderr": 0.041227371113703316
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.2804232804232804,
"acc_stderr": 0.02313528797432563,
"acc_norm": 0.2804232804232804,
"acc_norm_stderr": 0.02313528797432563
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.2777777777777778,
"acc_stderr": 0.04006168083848877,
"acc_norm": 0.2777777777777778,
"acc_norm_stderr": 0.04006168083848877
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.33,
"acc_stderr": 0.047258156262526045,
"acc_norm": 0.33,
"acc_norm_stderr": 0.047258156262526045
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.4935483870967742,
"acc_stderr": 0.028441638233540505,
"acc_norm": 0.4935483870967742,
"acc_norm_stderr": 0.028441638233540505
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.35960591133004927,
"acc_stderr": 0.033764582465095665,
"acc_norm": 0.35960591133004927,
"acc_norm_stderr": 0.033764582465095665
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.52,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.52,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.6060606060606061,
"acc_stderr": 0.038154943086889305,
"acc_norm": 0.6060606060606061,
"acc_norm_stderr": 0.038154943086889305
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.5858585858585859,
"acc_stderr": 0.03509438348879629,
"acc_norm": 0.5858585858585859,
"acc_norm_stderr": 0.03509438348879629
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.6062176165803109,
"acc_stderr": 0.035260770955482405,
"acc_norm": 0.6062176165803109,
"acc_norm_stderr": 0.035260770955482405
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.4205128205128205,
"acc_stderr": 0.02502861027671086,
"acc_norm": 0.4205128205128205,
"acc_norm_stderr": 0.02502861027671086
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.22592592592592592,
"acc_stderr": 0.02549753263960955,
"acc_norm": 0.22592592592592592,
"acc_norm_stderr": 0.02549753263960955
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.42436974789915966,
"acc_stderr": 0.032104790510157764,
"acc_norm": 0.42436974789915966,
"acc_norm_stderr": 0.032104790510157764
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.33112582781456956,
"acc_stderr": 0.038425817186598696,
"acc_norm": 0.33112582781456956,
"acc_norm_stderr": 0.038425817186598696
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.5944954128440367,
"acc_stderr": 0.021050997991896834,
"acc_norm": 0.5944954128440367,
"acc_norm_stderr": 0.021050997991896834
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.38425925925925924,
"acc_stderr": 0.03317354514310742,
"acc_norm": 0.38425925925925924,
"acc_norm_stderr": 0.03317354514310742
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.5833333333333334,
"acc_stderr": 0.03460228327239171,
"acc_norm": 0.5833333333333334,
"acc_norm_stderr": 0.03460228327239171
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.6371308016877637,
"acc_stderr": 0.03129920825530213,
"acc_norm": 0.6371308016877637,
"acc_norm_stderr": 0.03129920825530213
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.5246636771300448,
"acc_stderr": 0.033516951676526276,
"acc_norm": 0.5246636771300448,
"acc_norm_stderr": 0.033516951676526276
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.5648854961832062,
"acc_stderr": 0.04348208051644858,
"acc_norm": 0.5648854961832062,
"acc_norm_stderr": 0.04348208051644858
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.6363636363636364,
"acc_stderr": 0.043913262867240704,
"acc_norm": 0.6363636363636364,
"acc_norm_stderr": 0.043913262867240704
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.5370370370370371,
"acc_stderr": 0.04820403072760627,
"acc_norm": 0.5370370370370371,
"acc_norm_stderr": 0.04820403072760627
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.4723926380368098,
"acc_stderr": 0.0392237829061099,
"acc_norm": 0.4723926380368098,
"acc_norm_stderr": 0.0392237829061099
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.36607142857142855,
"acc_stderr": 0.04572372358737431,
"acc_norm": 0.36607142857142855,
"acc_norm_stderr": 0.04572372358737431
},
"harness|hendrycksTest-management|5": {
"acc": 0.6407766990291263,
"acc_stderr": 0.04750458399041696,
"acc_norm": 0.6407766990291263,
"acc_norm_stderr": 0.04750458399041696
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.688034188034188,
"acc_stderr": 0.030351527323344934,
"acc_norm": 0.688034188034188,
"acc_norm_stderr": 0.030351527323344934
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.5,
"acc_stderr": 0.050251890762960605,
"acc_norm": 0.5,
"acc_norm_stderr": 0.050251890762960605
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.6002554278416348,
"acc_stderr": 0.01751684790705328,
"acc_norm": 0.6002554278416348,
"acc_norm_stderr": 0.01751684790705328
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.4913294797687861,
"acc_stderr": 0.026915047355369804,
"acc_norm": 0.4913294797687861,
"acc_norm_stderr": 0.026915047355369804
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.24804469273743016,
"acc_stderr": 0.014444157808261433,
"acc_norm": 0.24804469273743016,
"acc_norm_stderr": 0.014444157808261433
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.4673202614379085,
"acc_stderr": 0.02856869975222588,
"acc_norm": 0.4673202614379085,
"acc_norm_stderr": 0.02856869975222588
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.48231511254019294,
"acc_stderr": 0.02838032284907713,
"acc_norm": 0.48231511254019294,
"acc_norm_stderr": 0.02838032284907713
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.5092592592592593,
"acc_stderr": 0.027815973433878014,
"acc_norm": 0.5092592592592593,
"acc_norm_stderr": 0.027815973433878014
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.35815602836879434,
"acc_stderr": 0.028602085862759422,
"acc_norm": 0.35815602836879434,
"acc_norm_stderr": 0.028602085862759422
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.34028683181225555,
"acc_stderr": 0.012101217610223784,
"acc_norm": 0.34028683181225555,
"acc_norm_stderr": 0.012101217610223784
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.4338235294117647,
"acc_stderr": 0.030105636570016636,
"acc_norm": 0.4338235294117647,
"acc_norm_stderr": 0.030105636570016636
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.4084967320261438,
"acc_stderr": 0.01988622103750187,
"acc_norm": 0.4084967320261438,
"acc_norm_stderr": 0.01988622103750187
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.4727272727272727,
"acc_stderr": 0.04782001791380063,
"acc_norm": 0.4727272727272727,
"acc_norm_stderr": 0.04782001791380063
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.5469387755102041,
"acc_stderr": 0.03186785930004128,
"acc_norm": 0.5469387755102041,
"acc_norm_stderr": 0.03186785930004128
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.6666666666666666,
"acc_stderr": 0.03333333333333334,
"acc_norm": 0.6666666666666666,
"acc_norm_stderr": 0.03333333333333334
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.71,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.71,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-virology|5": {
"acc": 0.4036144578313253,
"acc_stderr": 0.038194861407583984,
"acc_norm": 0.4036144578313253,
"acc_norm_stderr": 0.038194861407583984
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.5730994152046783,
"acc_stderr": 0.03793620616529917,
"acc_norm": 0.5730994152046783,
"acc_norm_stderr": 0.03793620616529917
},
"harness|truthfulqa:mc|0": {
"mc1": 0.2864137086903305,
"mc1_stderr": 0.015826142439502353,
"mc2": 0.4622053324775365,
"mc2_stderr": 0.015177238897436999
},
"harness|winogrande|5": {
"acc": 0.6227308602999211,
"acc_stderr": 0.0136225679287995
},
"harness|gsm8k|5": {
"acc": 0.16679302501895377,
"acc_stderr": 0.010268516042629513
}
}
```
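As a quick sketch, individual aggregate metrics can be read back out of a results file by key; this assumes the JSON dict printed above has been saved locally under the (hypothetical) path used below:

```python
import json

# Hypothetical local copy of the results dict shown above.
with open("results.json") as f:
    results = json.load(f)

print(results["all"]["acc_norm"])                  # 0.4610...
print(results["harness|arc:challenge|25"]["acc"])  # 0.4420...
```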
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | open-llm-leaderboard/details_h2m__mhm-7b-v1.3 | [
"region:us"
] | 2024-01-14T21:49:34+00:00 | {"pretty_name": "Evaluation run of h2m/mhm-7b-v1.3", "dataset_summary": "Dataset automatically created during the evaluation run of model [h2m/mhm-7b-v1.3](https://huggingface.co/h2m/mhm-7b-v1.3) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_h2m__mhm-7b-v1.3\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-14T21:47:14.933980](https://huggingface.co/datasets/open-llm-leaderboard/details_h2m__mhm-7b-v1.3/blob/main/results_2024-01-14T21-47-14.933980.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.45565733826199045,\n \"acc_stderr\": 0.034441057472680836,\n \"acc_norm\": 0.46104688055946413,\n \"acc_norm_stderr\": 0.03520094341367283,\n \"mc1\": 0.2864137086903305,\n \"mc1_stderr\": 0.015826142439502353,\n \"mc2\": 0.4622053324775365,\n \"mc2_stderr\": 0.015177238897436999\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.44197952218430037,\n \"acc_stderr\": 0.014512682523128345,\n \"acc_norm\": 0.47525597269624575,\n \"acc_norm_stderr\": 0.014593487694937738\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.4901414060944035,\n \"acc_stderr\": 0.004988811384747417,\n \"acc_norm\": 0.6530571599283012,\n \"acc_norm_stderr\": 0.004750245757533308\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.22,\n \"acc_stderr\": 0.041633319989322695,\n \"acc_norm\": 0.22,\n \"acc_norm_stderr\": 0.041633319989322695\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.43703703703703706,\n \"acc_stderr\": 0.04284958639753399,\n \"acc_norm\": 0.43703703703703706,\n \"acc_norm_stderr\": 0.04284958639753399\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.506578947368421,\n \"acc_stderr\": 0.040685900502249704,\n \"acc_norm\": 0.506578947368421,\n \"acc_norm_stderr\": 0.040685900502249704\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.42,\n \"acc_stderr\": 0.04960449637488584,\n \"acc_norm\": 0.42,\n \"acc_norm_stderr\": 0.04960449637488584\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.4641509433962264,\n \"acc_stderr\": 0.030693675018458006,\n \"acc_norm\": 0.4641509433962264,\n \"acc_norm_stderr\": 0.030693675018458006\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.3333333333333333,\n \"acc_stderr\": 0.039420826399272135,\n \"acc_norm\": 0.3333333333333333,\n \"acc_norm_stderr\": 0.039420826399272135\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.27,\n \"acc_stderr\": 0.04461960433384741,\n \"acc_norm\": 0.27,\n 
\"acc_norm_stderr\": 0.04461960433384741\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.48,\n \"acc_stderr\": 0.05021167315686781,\n \"acc_norm\": 0.48,\n \"acc_norm_stderr\": 0.05021167315686781\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.45664739884393063,\n \"acc_stderr\": 0.03798106566014498,\n \"acc_norm\": 0.45664739884393063,\n \"acc_norm_stderr\": 0.03798106566014498\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.2647058823529412,\n \"acc_stderr\": 0.04389869956808777,\n \"acc_norm\": 0.2647058823529412,\n \"acc_norm_stderr\": 0.04389869956808777\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.53,\n \"acc_stderr\": 0.05016135580465919,\n \"acc_norm\": 0.53,\n \"acc_norm_stderr\": 0.05016135580465919\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.35319148936170214,\n \"acc_stderr\": 0.031245325202761926,\n \"acc_norm\": 0.35319148936170214,\n \"acc_norm_stderr\": 0.031245325202761926\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.30701754385964913,\n \"acc_stderr\": 0.043391383225798615,\n \"acc_norm\": 0.30701754385964913,\n \"acc_norm_stderr\": 0.043391383225798615\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.42758620689655175,\n \"acc_stderr\": 0.041227371113703316,\n \"acc_norm\": 0.42758620689655175,\n \"acc_norm_stderr\": 0.041227371113703316\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.2804232804232804,\n \"acc_stderr\": 0.02313528797432563,\n \"acc_norm\": 0.2804232804232804,\n \"acc_norm_stderr\": 0.02313528797432563\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.2777777777777778,\n \"acc_stderr\": 0.04006168083848877,\n \"acc_norm\": 0.2777777777777778,\n \"acc_norm_stderr\": 0.04006168083848877\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.33,\n \"acc_stderr\": 0.047258156262526045,\n \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.047258156262526045\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.4935483870967742,\n \"acc_stderr\": 0.028441638233540505,\n \"acc_norm\": 0.4935483870967742,\n \"acc_norm_stderr\": 0.028441638233540505\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.35960591133004927,\n \"acc_stderr\": 0.033764582465095665,\n \"acc_norm\": 0.35960591133004927,\n \"acc_norm_stderr\": 0.033764582465095665\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.52,\n \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.52,\n \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.6060606060606061,\n \"acc_stderr\": 0.038154943086889305,\n \"acc_norm\": 0.6060606060606061,\n \"acc_norm_stderr\": 0.038154943086889305\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.5858585858585859,\n \"acc_stderr\": 0.03509438348879629,\n \"acc_norm\": 0.5858585858585859,\n \"acc_norm_stderr\": 0.03509438348879629\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.6062176165803109,\n \"acc_stderr\": 0.035260770955482405,\n \"acc_norm\": 0.6062176165803109,\n \"acc_norm_stderr\": 0.035260770955482405\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 
0.4205128205128205,\n \"acc_stderr\": 0.02502861027671086,\n \"acc_norm\": 0.4205128205128205,\n \"acc_norm_stderr\": 0.02502861027671086\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.22592592592592592,\n \"acc_stderr\": 0.02549753263960955,\n \"acc_norm\": 0.22592592592592592,\n \"acc_norm_stderr\": 0.02549753263960955\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.42436974789915966,\n \"acc_stderr\": 0.032104790510157764,\n \"acc_norm\": 0.42436974789915966,\n \"acc_norm_stderr\": 0.032104790510157764\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.33112582781456956,\n \"acc_stderr\": 0.038425817186598696,\n \"acc_norm\": 0.33112582781456956,\n \"acc_norm_stderr\": 0.038425817186598696\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.5944954128440367,\n \"acc_stderr\": 0.021050997991896834,\n \"acc_norm\": 0.5944954128440367,\n \"acc_norm_stderr\": 0.021050997991896834\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.38425925925925924,\n \"acc_stderr\": 0.03317354514310742,\n \"acc_norm\": 0.38425925925925924,\n \"acc_norm_stderr\": 0.03317354514310742\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.5833333333333334,\n \"acc_stderr\": 0.03460228327239171,\n \"acc_norm\": 0.5833333333333334,\n \"acc_norm_stderr\": 0.03460228327239171\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.6371308016877637,\n \"acc_stderr\": 0.03129920825530213,\n \"acc_norm\": 0.6371308016877637,\n \"acc_norm_stderr\": 0.03129920825530213\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.5246636771300448,\n \"acc_stderr\": 0.033516951676526276,\n \"acc_norm\": 0.5246636771300448,\n \"acc_norm_stderr\": 0.033516951676526276\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.5648854961832062,\n \"acc_stderr\": 0.04348208051644858,\n \"acc_norm\": 0.5648854961832062,\n \"acc_norm_stderr\": 0.04348208051644858\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.6363636363636364,\n \"acc_stderr\": 0.043913262867240704,\n \"acc_norm\": 0.6363636363636364,\n \"acc_norm_stderr\": 0.043913262867240704\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.5370370370370371,\n \"acc_stderr\": 0.04820403072760627,\n \"acc_norm\": 0.5370370370370371,\n \"acc_norm_stderr\": 0.04820403072760627\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.4723926380368098,\n \"acc_stderr\": 0.0392237829061099,\n \"acc_norm\": 0.4723926380368098,\n \"acc_norm_stderr\": 0.0392237829061099\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.36607142857142855,\n \"acc_stderr\": 0.04572372358737431,\n \"acc_norm\": 0.36607142857142855,\n \"acc_norm_stderr\": 0.04572372358737431\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.6407766990291263,\n \"acc_stderr\": 0.04750458399041696,\n \"acc_norm\": 0.6407766990291263,\n \"acc_norm_stderr\": 0.04750458399041696\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.688034188034188,\n \"acc_stderr\": 0.030351527323344934,\n \"acc_norm\": 0.688034188034188,\n \"acc_norm_stderr\": 0.030351527323344934\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.050251890762960605,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.050251890762960605\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.6002554278416348,\n \"acc_stderr\": 0.01751684790705328,\n 
\"acc_norm\": 0.6002554278416348,\n \"acc_norm_stderr\": 0.01751684790705328\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.4913294797687861,\n \"acc_stderr\": 0.026915047355369804,\n \"acc_norm\": 0.4913294797687861,\n \"acc_norm_stderr\": 0.026915047355369804\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.24804469273743016,\n \"acc_stderr\": 0.014444157808261433,\n \"acc_norm\": 0.24804469273743016,\n \"acc_norm_stderr\": 0.014444157808261433\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.4673202614379085,\n \"acc_stderr\": 0.02856869975222588,\n \"acc_norm\": 0.4673202614379085,\n \"acc_norm_stderr\": 0.02856869975222588\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.48231511254019294,\n \"acc_stderr\": 0.02838032284907713,\n \"acc_norm\": 0.48231511254019294,\n \"acc_norm_stderr\": 0.02838032284907713\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.5092592592592593,\n \"acc_stderr\": 0.027815973433878014,\n \"acc_norm\": 0.5092592592592593,\n \"acc_norm_stderr\": 0.027815973433878014\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.35815602836879434,\n \"acc_stderr\": 0.028602085862759422,\n \"acc_norm\": 0.35815602836879434,\n \"acc_norm_stderr\": 0.028602085862759422\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.34028683181225555,\n \"acc_stderr\": 0.012101217610223784,\n \"acc_norm\": 0.34028683181225555,\n \"acc_norm_stderr\": 0.012101217610223784\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.4338235294117647,\n \"acc_stderr\": 0.030105636570016636,\n \"acc_norm\": 0.4338235294117647,\n \"acc_norm_stderr\": 0.030105636570016636\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.4084967320261438,\n \"acc_stderr\": 0.01988622103750187,\n \"acc_norm\": 0.4084967320261438,\n \"acc_norm_stderr\": 0.01988622103750187\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.4727272727272727,\n \"acc_stderr\": 0.04782001791380063,\n \"acc_norm\": 0.4727272727272727,\n \"acc_norm_stderr\": 0.04782001791380063\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.5469387755102041,\n \"acc_stderr\": 0.03186785930004128,\n \"acc_norm\": 0.5469387755102041,\n \"acc_norm_stderr\": 0.03186785930004128\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.6666666666666666,\n \"acc_stderr\": 0.03333333333333334,\n \"acc_norm\": 0.6666666666666666,\n \"acc_norm_stderr\": 0.03333333333333334\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.4036144578313253,\n \"acc_stderr\": 0.038194861407583984,\n \"acc_norm\": 0.4036144578313253,\n \"acc_norm_stderr\": 0.038194861407583984\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.5730994152046783,\n \"acc_stderr\": 0.03793620616529917,\n \"acc_norm\": 0.5730994152046783,\n \"acc_norm_stderr\": 0.03793620616529917\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.2864137086903305,\n \"mc1_stderr\": 0.015826142439502353,\n \"mc2\": 0.4622053324775365,\n \"mc2_stderr\": 0.015177238897436999\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.6227308602999211,\n \"acc_stderr\": 0.0136225679287995\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.16679302501895377,\n \"acc_stderr\": 0.010268516042629513\n }\n}\n```", "repo_url": 
"https://huggingface.co/h2m/mhm-7b-v1.3", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_14T21_47_14.933980", "path": ["**/details_harness|arc:challenge|25_2024-01-14T21-47-14.933980.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-14T21-47-14.933980.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_14T21_47_14.933980", "path": ["**/details_harness|gsm8k|5_2024-01-14T21-47-14.933980.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-14T21-47-14.933980.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_14T21_47_14.933980", "path": ["**/details_harness|hellaswag|10_2024-01-14T21-47-14.933980.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-14T21-47-14.933980.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_14T21_47_14.933980", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T21-47-14.933980.parquet", 
"**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-14T21-47-14.933980.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T21-47-14.933980.parquet", 
"**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-14T21-47-14.933980.parquet", 
"**/details_harness|hendrycksTest-prehistory|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-14T21-47-14.933980.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-14T21-47-14.933980.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_14T21_47_14.933980", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T21-47-14.933980.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T21-47-14.933980.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_14T21_47_14.933980", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-14T21-47-14.933980.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-14T21-47-14.933980.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_14T21_47_14.933980", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-14T21-47-14.933980.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-14T21-47-14.933980.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_14T21_47_14.933980", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T21-47-14.933980.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T21-47-14.933980.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_14T21_47_14.933980", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T21-47-14.933980.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T21-47-14.933980.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_14T21_47_14.933980", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-14T21-47-14.933980.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-14T21-47-14.933980.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_14T21_47_14.933980", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T21-47-14.933980.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T21-47-14.933980.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_14T21_47_14.933980", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T21-47-14.933980.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T21-47-14.933980.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_14T21_47_14.933980", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T21-47-14.933980.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T21-47-14.933980.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_14T21_47_14.933980", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T21-47-14.933980.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T21-47-14.933980.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_14T21_47_14.933980", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-14T21-47-14.933980.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-14T21-47-14.933980.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_14T21_47_14.933980", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-14T21-47-14.933980.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-14T21-47-14.933980.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_14T21_47_14.933980", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T21-47-14.933980.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T21-47-14.933980.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_14T21_47_14.933980", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-14T21-47-14.933980.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-14T21-47-14.933980.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_14T21_47_14.933980", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T21-47-14.933980.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T21-47-14.933980.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_14T21_47_14.933980", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T21-47-14.933980.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T21-47-14.933980.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_01_14T21_47_14.933980", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T21-47-14.933980.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T21-47-14.933980.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_14T21_47_14.933980", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-14T21-47-14.933980.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-14T21-47-14.933980.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_14T21_47_14.933980", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T21-47-14.933980.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T21-47-14.933980.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_14T21_47_14.933980", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T21-47-14.933980.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T21-47-14.933980.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_14T21_47_14.933980", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T21-47-14.933980.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T21-47-14.933980.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_14T21_47_14.933980", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T21-47-14.933980.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T21-47-14.933980.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_14T21_47_14.933980", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T21-47-14.933980.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T21-47-14.933980.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_14T21_47_14.933980", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T21-47-14.933980.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T21-47-14.933980.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_14T21_47_14.933980", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T21-47-14.933980.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T21-47-14.933980.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_14T21_47_14.933980", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T21-47-14.933980.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T21-47-14.933980.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_14T21_47_14.933980", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T21-47-14.933980.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T21-47-14.933980.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_14T21_47_14.933980", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T21-47-14.933980.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T21-47-14.933980.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_14T21_47_14.933980", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T21-47-14.933980.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T21-47-14.933980.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_14T21_47_14.933980", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T21-47-14.933980.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T21-47-14.933980.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_14T21_47_14.933980", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T21-47-14.933980.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T21-47-14.933980.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_14T21_47_14.933980", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T21-47-14.933980.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T21-47-14.933980.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_14T21_47_14.933980", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-14T21-47-14.933980.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-14T21-47-14.933980.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_14T21_47_14.933980", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T21-47-14.933980.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T21-47-14.933980.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_14T21_47_14.933980", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-14T21-47-14.933980.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-14T21-47-14.933980.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_14T21_47_14.933980", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T21-47-14.933980.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T21-47-14.933980.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_14T21_47_14.933980", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T21-47-14.933980.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T21-47-14.933980.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_14T21_47_14.933980", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T21-47-14.933980.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T21-47-14.933980.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_01_14T21_47_14.933980", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-14T21-47-14.933980.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-14T21-47-14.933980.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_14T21_47_14.933980", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-14T21-47-14.933980.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-14T21-47-14.933980.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_14T21_47_14.933980", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T21-47-14.933980.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T21-47-14.933980.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_14T21_47_14.933980", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T21-47-14.933980.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T21-47-14.933980.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_14T21_47_14.933980", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T21-47-14.933980.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T21-47-14.933980.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_14T21_47_14.933980", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T21-47-14.933980.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T21-47-14.933980.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_01_14T21_47_14.933980", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-14T21-47-14.933980.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-14T21-47-14.933980.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_14T21_47_14.933980", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-14T21-47-14.933980.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-14T21-47-14.933980.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_14T21_47_14.933980", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-14T21-47-14.933980.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-14T21-47-14.933980.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_14T21_47_14.933980", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T21-47-14.933980.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T21-47-14.933980.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_14T21_47_14.933980", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-14T21-47-14.933980.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2024-01-14T21-47-14.933980.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_14T21_47_14.933980", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T21-47-14.933980.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T21-47-14.933980.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_14T21_47_14.933980", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T21-47-14.933980.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T21-47-14.933980.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_14T21_47_14.933980", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-14T21-47-14.933980.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-14T21-47-14.933980.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_14T21_47_14.933980", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-14T21-47-14.933980.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-14T21-47-14.933980.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_14T21_47_14.933980", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-14T21-47-14.933980.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-14T21-47-14.933980.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_14T21_47_14.933980", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T21-47-14.933980.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T21-47-14.933980.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_14T21_47_14.933980", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-14T21-47-14.933980.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-14T21-47-14.933980.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_14T21_47_14.933980", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-14T21-47-14.933980.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-14T21-47-14.933980.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_14T21_47_14.933980", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-14T21-47-14.933980.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-14T21-47-14.933980.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_14T21_47_14.933980", "path": ["**/details_harness|winogrande|5_2024-01-14T21-47-14.933980.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-14T21-47-14.933980.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_01_14T21_47_14.933980", "path": ["results_2024-01-14T21-47-14.933980.parquet"]}, {"split": "latest", "path": 
["results_2024-01-14T21-47-14.933980.parquet"]}]}]} | 2024-01-14T21:49:54+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of h2m/mhm-7b-v1.3
Dataset automatically created during the evaluation run of model h2m/mhm-7b-v1.3 on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
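A minimal sketch with the `datasets` library; the repository id below is an assumption following the leaderboard's usual `details_<org>__<model>` naming convention, since the exact repo id is not spelled out in this card:

```python
from datasets import load_dataset

# Assumed repo id (leaderboard convention "details_<org>__<model>");
# adjust it if the actual details repository is named differently.
data = load_dataset(
    "open-llm-leaderboard/details_h2m__mhm-7b-v1.3",
    "harness_winogrande_5",  # any of the 63 configurations listed in the metadata
    split="latest",          # or the timestamped split "2024_01_14T21_47_14.933980"
)
```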
## Latest results
These are the latest results from run 2024-01-14T21:47:14.933980 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each task in the "results" configuration and in the "latest" split of each eval):
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information is needed for further recommendations.
## Citation [optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Evaluation run of h2m/mhm-7b-v1.3\n\n\n\nDataset automatically created during the evaluation run of model h2m/mhm-7b-v1.3 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-01-14T21:47:14.933980(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of h2m/mhm-7b-v1.3\n\n\n\nDataset automatically created during the evaluation run of model h2m/mhm-7b-v1.3 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-01-14T21:47:14.933980(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] |
d9cb7bf3db3f837856d2730cb1bd24fc4adda679 | Distributed under the Creative Commons BY-SA 4.0 license, respecting the ShareAlike clause of the [Spider Dataset](https://yale-lily.github.io/spider).

Code explanations and links to the model checkpoints and datasets are on Github: [mRAT-SQL](https://github.com/C4AI/gap-text2sql)

Here is the [Hugging Face collection](https://huggingface.co/collections/Marchanjo/mrat-sql-65a671743bb0e70b416561f6), where you can download the model checkpoints and datasets, but to understand them it is better to go to the Github repository [mRAT-SQL](https://github.com/C4AI/gap-text2sql).
# mRAT-SQL-FIT
## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention
Marcelo Archanjo Jose, Fabio Gagliardi Cozman
Long sequences of text are challenging in the context of transformers, due to quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (as techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of table and column names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy from 0.718 to 0.736 on the validation dataset (Dev). Source code, evaluations, and checkpoints are available at: [mRAT-SQL](https://github.com/C4AI/gap-text2sql).
[paper published in Springer-Nature - International Journal of Information Technology](https://doi.org/10.1007/s41870-023-01342-3), [here the SharedIt link](https://rdcu.be/dff19), and [here the pre-print on arXiv](https://arxiv.org/abs/2306.14256).
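As an illustration of the schema-pruning step described above, here is a minimal, hypothetical sketch; the real implementation lives in the linked repository, and the matching heuristic and all names below are assumptions:

```python
# Hypothetical schema pruning: keep only tables/columns whose names overlap
# with the question tokens, so the serialized input fits in 512 tokens.
# This is an illustrative heuristic, not the repository's actual code.
def prune_schema(question: str, schema: dict) -> dict:
    """schema maps table name -> list of column names."""
    q_tokens = {tok.lower().strip("?,.") for tok in question.split()}
    pruned = {}
    for table, columns in schema.items():
        kept = [col for col in columns
                if any(part in q_tokens for part in col.lower().split("_"))]
        if table.lower() in q_tokens or kept:
            pruned[table] = kept or columns  # keep all columns if only the table name matched
    return pruned

schema = {"singer": ["singer_id", "name", "country"],
          "concert": ["concert_id", "year", "stadium_id"]}
print(prune_schema("How many singers are from each country?", schema))
# -> {'singer': ['country']}
```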
# mRAT-SQL+GAP
## mRAT-SQL+GAP: A Portuguese Text-to-SQL Transformer
Marcelo Archanjo José, Fabio Gagliardi Cozman
The translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with original and translated training datasets together, even if a single target language is desired. This multilingual BART model fine-tuned with a double-size training dataset (English and Portuguese) achieved 83% of the baseline when making inferences on the Portuguese test dataset. This investigation can help other researchers to produce results in Machine Learning in languages other than English. Our multilingual-ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: [mRAT-SQL](https://github.com/C4AI/gap-text2sql).
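For a concrete picture of the input/output shape implied above (question concatenated with the schema, fed to a multilingual seq2seq model), here is a minimal sketch with the public mBART-50 checkpoint; the serialization format is an assumption, and without the fine-tuning described in the paper the generated text will not be valid SQL:

```python
from transformers import MBart50TokenizerFast, MBartForConditionalGeneration

# Public multilingual BART checkpoint; the paper fine-tunes such a model on
# the (translated) Spider data, which is not reproduced here.
tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50", src_lang="pt_XX")
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50")

question = "Quantos cantores temos?"  # "How many singers do we have?"
schema = "singer : singer_id , name , country | concert : concert_id , year"
inputs = tokenizer(f"{question} | {schema}", return_tensors="pt",
                   truncation=True, max_length=512)
out = model.generate(**inputs, max_length=64)  # untrained for SQL: output is illustrative only
print(tokenizer.decode(out[0], skip_special_tokens=True))
```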
BRACIS 2021: [paper published in Springer Lecture Notes in Computer Science](https://link.springer.com/chapter/10.1007%2F978-3-030-91699-2_35), and [here the pre-print on arXiv](https://arxiv.org/abs/2110.03546).
Based on: RAT-SQL+GAP: [Github](https://github.com/awslabs/gap-text2sql). Paper: [AAAI 2021 paper](https://arxiv.org/abs/2012.10309) | Marchanjo/spider-pt | [
"license:cc-by-sa-4.0",
"arxiv:2306.14256",
"arxiv:2110.03546",
"arxiv:2012.10309",
"region:us"
] | 2024-01-14T21:49:51+00:00 | {"license": "cc-by-sa-4.0"} | 2024-01-16T12:37:56+00:00 | [
"2306.14256",
"2110.03546",
"2012.10309"
] | [] | TAGS
#license-cc-by-sa-4.0 #arxiv-2306.14256 #arxiv-2110.03546 #arxiv-2012.10309 #region-us
| Distributed under the Creative Commons BY-SA 4.0 license, respecting the ShareAlike clause of the Spider Dataset.

Code explanations and links to the model checkpoints and datasets are on Github: mRAT-SQL

Here is the Hugging Face collection, where you can download the model checkpoints and datasets, but to understand them it is better to go to the Github repository mRAT-SQL.
# mRAT-SQL-FIT
## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention
Marcelo Archanjo Jose, Fabio Gagliardi Cozman
Long sequences of text are challenging in the context of transformers, due to quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (as techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of table and column names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy from 0.718 to 0.736 on the validation dataset (Dev). Source code, evaluations, and checkpoints are available at: mRAT-SQL.
Paper published in Springer-Nature - International Journal of Information Technology, here the SharedIt link, and here the pre-print on arXiv.
# mRAT-SQL+GAP
## mRAT-SQL+GAP: A Portuguese Text-to-SQL Transformer
Marcelo Archanjo José, Fabio Gagliardi Cozman
The translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with original and translated training datasets together, even if a single target language is desired. This multilingual BART model fine-tuned with a double-size training dataset (English and Portuguese) achieved 83% of the baseline when making inferences on the Portuguese test dataset. This investigation can help other researchers to produce results in Machine Learning in languages other than English. Our multilingual-ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: mRAT-SQL.

BRACIS 2021: paper published in Springer Lecture Notes in Computer Science, and here the pre-print on arXiv.
Based on: RAT-SQL+GAP: Github. Paper: AAAI 2021 paper | [
"# mRAT-SQL-FIT",
"## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention\nMarcelo Archanjo Jose, Fabio Gagliardi Cozman\n\nLong sequences of text are challenging in the context of transformers, due to quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (as techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of tables and columns names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy results from 0.718 to 0.736 in a validation dataset (Dev). Source code, evaluations, and checkpoints are available at: mRAT-SQL.\n\npaper published in Springer-Nature - International Journal of Information Technology, here the SharedIt link. here the pre-print in arXiv.",
"# mRAT-SQL+GAP",
"## mRAT-SQL+GAP:A Portuguese Text-to-SQL Transformer\nMarcelo Archanjo José, Fabio Gagliardi Cozman\n\nThe translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with original and translated training datasets together, even if a single target language is desired. This multilingual BART model fine-tuned with a double-size training dataset (English and Portuguese) achieved 83% of the baseline, making inferences for the Portuguese test dataset. This investigation can help other researchers to produce results in Machine Learning in a language different from English. Our multilingual ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: mRAT-SQL.\n\nBRACIS 2021: paper published in Springer Lecture Notes in Computer Science, here the pre-print in arXiv.\n\nBased on: RAT-SQL+GAP: Github. Paper: AAAI 2021 paper"
] | [
"TAGS\n#license-cc-by-sa-4.0 #arxiv-2306.14256 #arxiv-2110.03546 #arxiv-2012.10309 #region-us \n",
"# mRAT-SQL-FIT",
"## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention\nMarcelo Archanjo Jose, Fabio Gagliardi Cozman\n\nLong sequences of text are challenging in the context of transformers, due to quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (as techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of tables and columns names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy results from 0.718 to 0.736 in a validation dataset (Dev). Source code, evaluations, and checkpoints are available at: mRAT-SQL.\n\npaper published in Springer-Nature - International Journal of Information Technology, here the SharedIt link. here the pre-print in arXiv.",
"# mRAT-SQL+GAP",
"## mRAT-SQL+GAP:A Portuguese Text-to-SQL Transformer\nMarcelo Archanjo José, Fabio Gagliardi Cozman\n\nThe translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with original and translated training datasets together, even if a single target language is desired. This multilingual BART model fine-tuned with a double-size training dataset (English and Portuguese) achieved 83% of the baseline, making inferences for the Portuguese test dataset. This investigation can help other researchers to produce results in Machine Learning in a language different from English. Our multilingual ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: mRAT-SQL.\n\nBRACIS 2021: paper published in Springer Lecture Notes in Computer Science, here the pre-print in arXiv.\n\nBased on: RAT-SQL+GAP: Github. Paper: AAAI 2021 paper"
] |
a7727d4aabe6ec9926da703f922c0e55546b5838 | Distributed under the Creative Commons BY-SA 4.0 license, respecting the ShareAlike clause of the [Spider Dataset](https://yale-lily.github.io/spider).

Code explanations and links to the model checkpoints and datasets are on Github: [mRAT-SQL](https://github.com/C4AI/gap-text2sql)

Here is the [Hugging Face collection](https://huggingface.co/collections/Marchanjo/mrat-sql-65a671743bb0e70b416561f6), where you can download the model checkpoints and datasets, but to understand them it is better to go to the Github repository [mRAT-SQL](https://github.com/C4AI/gap-text2sql).
# mRAT-SQL-FIT
## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention
Marcelo Archanjo Jose, Fabio Gagliardi Cozman
Long sequences of text are challenging in the context of transformers, due to quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (as techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of table and column names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy from 0.718 to 0.736 on the validation dataset (Dev). Source code, evaluations, and checkpoints are available at: [mRAT-SQL](https://github.com/C4AI/gap-text2sql).
[paper published in Springer-Nature - International Journal of Information Technology](https://doi.org/10.1007/s41870-023-01342-3), [here the SharedIt link](https://rdcu.be/dff19), and [here the pre-print on arXiv](https://arxiv.org/abs/2306.14256).
# mRAT-SQL+GAP
## mRAT-SQL+GAP: A Portuguese Text-to-SQL Transformer
Marcelo Archanjo José, Fabio Gagliardi Cozman
The translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with original and translated training datasets together, even if a single target language is desired. This multilingual BART model fine-tuned with a double-size training dataset (English and Portuguese) achieved 83% of the baseline when making inferences on the Portuguese test dataset. This investigation can help other researchers to produce results in Machine Learning in languages other than English. Our multilingual-ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: [mRAT-SQL](https://github.com/C4AI/gap-text2sql).

BRACIS 2021: [paper published in Springer Lecture Notes in Computer Science](https://link.springer.com/chapter/10.1007%2F978-3-030-91699-2_35), and [here the pre-print on arXiv](https://arxiv.org/abs/2110.03546).
Based on: RAT-SQL+GAP: [Github](https://github.com/awslabs/gap-text2sql). Paper: [AAAI 2021 paper](https://arxiv.org/abs/2012.10309) | Marchanjo/spider-es | [
"license:cc-by-sa-4.0",
"arxiv:2306.14256",
"arxiv:2110.03546",
"arxiv:2012.10309",
"region:us"
] | 2024-01-14T21:56:38+00:00 | {"license": "cc-by-sa-4.0"} | 2024-01-16T12:38:11+00:00 | [
"2306.14256",
"2110.03546",
"2012.10309"
] | [] | TAGS
#license-cc-by-sa-4.0 #arxiv-2306.14256 #arxiv-2110.03546 #arxiv-2012.10309 #region-us
| Distributed under the Creative Commons BY-SA 4.0 license, respecting the ShareAlike clause of the Spider Dataset.

Code explanations and links to the model checkpoints and datasets are on Github: mRAT-SQL

Here is the Hugging Face collection, where you can download the model checkpoints and datasets, but to understand them it is better to go to the Github repository mRAT-SQL.
# mRAT-SQL-FIT
## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention
Marcelo Archanjo Jose, Fabio Gagliardi Cozman
Long sequences of text are challenging in the context of transformers, due to quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (as techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of table and column names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy from 0.718 to 0.736 on the validation dataset (Dev). Source code, evaluations, and checkpoints are available at: mRAT-SQL.
Paper published in Springer-Nature - International Journal of Information Technology, here the SharedIt link, and here the pre-print on arXiv.
# mRAT-SQL+GAP
## mRAT-SQL+GAP: A Portuguese Text-to-SQL Transformer
Marcelo Archanjo José, Fabio Gagliardi Cozman
The translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with original and translated training datasets together, even if a single target language is desired. This multilingual BART model fine-tuned with a double-size training dataset (English and Portuguese) achieved 83% of the baseline when making inferences on the Portuguese test dataset. This investigation can help other researchers to produce results in Machine Learning in languages other than English. Our multilingual-ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: mRAT-SQL.

BRACIS 2021: paper published in Springer Lecture Notes in Computer Science, and here the pre-print on arXiv.
Based on: RAT-SQL+GAP: Github. Paper: AAAI 2021 paper | [
"# mRAT-SQL-FIT",
"## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention\nMarcelo Archanjo Jose, Fabio Gagliardi Cozman\n\nLong sequences of text are challenging in the context of transformers, due to quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (as techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of tables and columns names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy results from 0.718 to 0.736 in a validation dataset (Dev). Source code, evaluations, and checkpoints are available at: mRAT-SQL.\n\npaper published in Springer-Nature - International Journal of Information Technology, here the SharedIt link. here the pre-print in arXiv.",
"# mRAT-SQL+GAP",
"## mRAT-SQL+GAP:A Portuguese Text-to-SQL Transformer\nMarcelo Archanjo José, Fabio Gagliardi Cozman\n\nThe translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with original and translated training datasets together, even if a single target language is desired. This multilingual BART model fine-tuned with a double-size training dataset (English and Portuguese) achieved 83% of the baseline, making inferences for the Portuguese test dataset. This investigation can help other researchers to produce results in Machine Learning in a language different from English. Our multilingual ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: mRAT-SQL.\n\nBRACIS 2021: paper published in Springer Lecture Notes in Computer Science, here the pre-print in arXiv.\n\nBased on: RAT-SQL+GAP: Github. Paper: AAAI 2021 paper"
] | [
"TAGS\n#license-cc-by-sa-4.0 #arxiv-2306.14256 #arxiv-2110.03546 #arxiv-2012.10309 #region-us \n",
"# mRAT-SQL-FIT",
"## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention\nMarcelo Archanjo Jose, Fabio Gagliardi Cozman\n\nLong sequences of text are challenging in the context of transformers, due to quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (as techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of tables and columns names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy results from 0.718 to 0.736 in a validation dataset (Dev). Source code, evaluations, and checkpoints are available at: mRAT-SQL.\n\npaper published in Springer-Nature - International Journal of Information Technology, here the SharedIt link. here the pre-print in arXiv.",
"# mRAT-SQL+GAP",
"## mRAT-SQL+GAP:A Portuguese Text-to-SQL Transformer\nMarcelo Archanjo José, Fabio Gagliardi Cozman\n\nThe translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with original and translated training datasets together, even if a single target language is desired. This multilingual BART model fine-tuned with a double-size training dataset (English and Portuguese) achieved 83% of the baseline, making inferences for the Portuguese test dataset. This investigation can help other researchers to produce results in Machine Learning in a language different from English. Our multilingual ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: mRAT-SQL.\n\nBRACIS 2021: paper published in Springer Lecture Notes in Computer Science, here the pre-print in arXiv.\n\nBased on: RAT-SQL+GAP: Github. Paper: AAAI 2021 paper"
] |
c8c8f5d77f54414309c1d2e1ec17f59963d6c185 | Distributed under the Creative Commons-by-sa-4.0 respecting the ShareAlike of the [Spider Dataset](https://yale-lily.github.io/spider).
Code explanations and links for the model's checkpoints and datasets are on Github [mRAT-SQL](https://github.com/C4AI/gap-text2sql)
Here is the [Hugging Face collection](https://huggingface.co/collections/Marchanjo/mrat-sql-65a671743bb0e70b416561f6), where you can download the model's checkpoints and datasets; to understand them, though, it is better to go to the Github repo [mRAT-SQL](https://github.com/C4AI/gap-text2sql).
# mRAT-SQL-FIT
## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention
Marcelo Archanjo Jose, Fabio Gagliardi Cozman
Long sequences of text are challenging in the context of transformers, due to the quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (as techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of table and column names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy results from 0.718 to 0.736 on the validation dataset (Dev). Source code, evaluations, and checkpoints are available at: [mRAT-SQL](https://github.com/C4AI/gap-text2sql).
[Paper published in Springer-Nature - International Journal of Information Technology](https://doi.org/10.1007/s41870-023-01342-3); [here is the SharedIt link](https://rdcu.be/dff19), and [here is the pre-print on arXiv](https://arxiv.org/abs/2306.14256).
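To make the schema-pruning idea concrete, here is a minimal, hypothetical sketch (not the paper's implementation): it keeps only the tables and columns whose names share a token with the question, so the serialized question+schema stays within the 512-token input budget. The function name and the token-overlap heuristic are assumptions for illustration.

```python
# Minimal sketch of database schema pruning (illustrative, not the paper's code).
def prune_schema(question: str, schema: dict[str, list[str]]) -> dict[str, list[str]]:
    """schema maps table name -> list of column names."""
    q_tokens = {t.strip("?,.").lower() for t in question.split()}
    pruned = {}
    for table, columns in schema.items():
        # Keep columns whose underscore-separated parts overlap the question.
        kept = [c for c in columns
                if {w.lower() for w in c.split("_")} & q_tokens]
        # Keep the table if its own name matches or any column survived.
        if kept or {w.lower() for w in table.split("_")} & q_tokens:
            pruned[table] = kept or columns
    return pruned

if __name__ == "__main__":
    schema = {"singer": ["singer_id", "name", "age"],
              "concert": ["concert_id", "theme", "year"]}
    print(prune_schema("What is the name and age of each singer?", schema))
    # -> {'singer': ['singer_id', 'name', 'age']}; 'concert' is pruned away
```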
# mRAT-SQL+GAP
## mRAT-SQL+GAP: A Portuguese Text-to-SQL Transformer
Marcelo Archanjo José, Fabio Gagliardi Cozman
The translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with the original and translated training datasets together, even if a single target language is desired. This multilingual BART model fine-tuned with a double-size training dataset (English and Portuguese) achieved 83% of the baseline when making inferences on the Portuguese test dataset. This investigation can help other researchers produce results in Machine Learning in a language other than English. Our multilingual-ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: [mRAT-SQL](https://github.com/C4AI/gap-text2sql).
BRACIS 2021: [paper published in Springer Lecture Notes in Computer Science](https://link.springer.com/chapter/10.1007%2F978-3-030-91699-2_35); [here is the pre-print on arXiv](https://arxiv.org/abs/2110.03546).
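As a rough illustration of the double-size training setup described above, the sketch below simply concatenates the original English Spider training examples with their translated counterparts; the file names are assumptions, not the repository's actual layout.

```python
# Hypothetical sketch: build a double-size (English + Portuguese) training set.
import json

def load_examples(path: str) -> list[dict]:
    with open(path, encoding="utf-8") as f:
        return json.load(f)

english = load_examples("spider/train_spider.json")        # original questions
portuguese = load_examples("spider/train_spider_pt.json")  # assumed translated file
double_size = english + portuguese  # same SQL targets, questions in two languages
print(len(english), len(portuguese), len(double_size))
```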
Based on: RAT-SQL+GAP: [Github](https://github.com/awslabs/gap-text2sql). Paper: [AAAI 2021 paper](https://arxiv.org/abs/2012.10309) | Marchanjo/spider-fr | [
"license:cc-by-sa-4.0",
"arxiv:2306.14256",
"arxiv:2110.03546",
"arxiv:2012.10309",
"region:us"
] | 2024-01-14T22:00:51+00:00 | {"license": "cc-by-sa-4.0"} | 2024-01-16T12:38:38+00:00 | [
"2306.14256",
"2110.03546",
"2012.10309"
] | [] | TAGS
#license-cc-by-sa-4.0 #arxiv-2306.14256 #arxiv-2110.03546 #arxiv-2012.10309 #region-us
| Distributed under the Creative Commons-by-sa-4.0 respecting the ShareAlike of the Spider Dataset.
Code explanations and links for the model's checkpoints and datasets are on Github mRAT-SQL
Here is the Hugging Face collection, where you can download the model's checkpoints and datasets; to understand them, though, it is better to go to the Github repo mRAT-SQL.
# mRAT-SQL-FIT
## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention
Marcelo Archanjo Jose, Fabio Gagliardi Cozman
Long sequences of text are challenging in the context of transformers, due to the quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (as techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of table and column names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy results from 0.718 to 0.736 on the validation dataset (Dev). Source code, evaluations, and checkpoints are available at: mRAT-SQL.
Paper published in Springer-Nature - International Journal of Information Technology; here is the SharedIt link, and here is the pre-print on arXiv.
# mRAT-SQL+GAP
## mRAT-SQL+GAP: A Portuguese Text-to-SQL Transformer
Marcelo Archanjo José, Fabio Gagliardi Cozman
The translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with the original and translated training datasets together, even if a single target language is desired. This multilingual BART model fine-tuned with a double-size training dataset (English and Portuguese) achieved 83% of the baseline when making inferences on the Portuguese test dataset. This investigation can help other researchers produce results in Machine Learning in a language other than English. Our multilingual-ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: mRAT-SQL.
BRACIS 2021: paper published in Springer Lecture Notes in Computer Science; here is the pre-print on arXiv.
Based on: RAT-SQL+GAP: Github. Paper: AAAI 2021 paper | [
"# mRAT-SQL-FIT",
"## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention\nMarcelo Archanjo Jose, Fabio Gagliardi Cozman\n\nLong sequences of text are challenging in the context of transformers, due to quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (as techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of tables and columns names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy results from 0.718 to 0.736 in a validation dataset (Dev). Source code, evaluations, and checkpoints are available at: mRAT-SQL.\n\npaper published in Springer-Nature - International Journal of Information Technology, here the SharedIt link. here the pre-print in arXiv.",
"# mRAT-SQL+GAP",
"## mRAT-SQL+GAP:A Portuguese Text-to-SQL Transformer\nMarcelo Archanjo José, Fabio Gagliardi Cozman\n\nThe translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with original and translated training datasets together, even if a single target language is desired. This multilingual BART model fine-tuned with a double-size training dataset (English and Portuguese) achieved 83% of the baseline, making inferences for the Portuguese test dataset. This investigation can help other researchers to produce results in Machine Learning in a language different from English. Our multilingual ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: mRAT-SQL.\n\nBRACIS 2021: paper published in Springer Lecture Notes in Computer Science, here the pre-print in arXiv.\n\nBased on: RAT-SQL+GAP: Github. Paper: AAAI 2021 paper"
] | [
"TAGS\n#license-cc-by-sa-4.0 #arxiv-2306.14256 #arxiv-2110.03546 #arxiv-2012.10309 #region-us \n",
"# mRAT-SQL-FIT",
"## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention\nMarcelo Archanjo Jose, Fabio Gagliardi Cozman\n\nLong sequences of text are challenging in the context of transformers, due to quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (as techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of tables and columns names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy results from 0.718 to 0.736 in a validation dataset (Dev). Source code, evaluations, and checkpoints are available at: mRAT-SQL.\n\npaper published in Springer-Nature - International Journal of Information Technology, here the SharedIt link. here the pre-print in arXiv.",
"# mRAT-SQL+GAP",
"## mRAT-SQL+GAP:A Portuguese Text-to-SQL Transformer\nMarcelo Archanjo José, Fabio Gagliardi Cozman\n\nThe translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with original and translated training datasets together, even if a single target language is desired. This multilingual BART model fine-tuned with a double-size training dataset (English and Portuguese) achieved 83% of the baseline, making inferences for the Portuguese test dataset. This investigation can help other researchers to produce results in Machine Learning in a language different from English. Our multilingual ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: mRAT-SQL.\n\nBRACIS 2021: paper published in Springer Lecture Notes in Computer Science, here the pre-print in arXiv.\n\nBased on: RAT-SQL+GAP: Github. Paper: AAAI 2021 paper"
] |
c14fdcd0e4725b7ab08d55f345df495d070a29d8 | Distributed under the Creative Commons-by-sa-4.0 respecting the ShareAlike of the [Spider Dataset](https://yale-lily.github.io/spider).
Code explanations and links for the model's checkpoints and datasets are on Github [mRAT-SQL](https://github.com/C4AI/gap-text2sql)
Here is the [Hugging Face collection](https://huggingface.co/collections/Marchanjo/mrat-sql-65a671743bb0e70b416561f6), where you can download the model's checkpoints and datasets; to understand them, though, it is better to go to the Github repo [mRAT-SQL](https://github.com/C4AI/gap-text2sql).
# mRAT-SQL-FIT
## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention
Marcelo Archanjo Jose, Fabio Gagliardi Cozman
Long sequences of text are challenging in the context of transformers, due to the quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (as techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of table and column names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy results from 0.718 to 0.736 on the validation dataset (Dev). Source code, evaluations, and checkpoints are available at: [mRAT-SQL](https://github.com/C4AI/gap-text2sql).
[Paper published in Springer-Nature - International Journal of Information Technology](https://doi.org/10.1007/s41870-023-01342-3); [here is the SharedIt link](https://rdcu.be/dff19), and [here is the pre-print on arXiv](https://arxiv.org/abs/2306.14256).
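The "exact set match" numbers quoted above come from Spider's official evaluation, which compares parsed SQL components rather than raw strings. The toy sketch below only conveys the idea of order-insensitive matching and is much cruder than the real metric.

```python
# Toy, order-insensitive comparison (much simpler than Spider's real metric).
def normalize(sql: str) -> frozenset[str]:
    return frozenset(sql.lower().replace(",", " ").split())

def exact_set_match(pred: str, gold: str) -> bool:
    return normalize(pred) == normalize(gold)

preds = ["SELECT name , age FROM singer", "SELECT count(*) FROM concert"]
golds = ["SELECT age , name FROM singer", "SELECT avg(age) FROM singer"]
acc = sum(exact_set_match(p, g) for p, g in zip(preds, golds)) / len(golds)
print(f"toy exact set match accuracy: {acc:.3f}")  # 0.500 on this pair
```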
# mRAT-SQL+GAP
## mRAT-SQL+GAP: A Portuguese Text-to-SQL Transformer
Marcelo Archanjo José, Fabio Gagliardi Cozman
The translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with the original and translated training datasets together, even if a single target language is desired. This multilingual BART model fine-tuned with a double-size training dataset (English and Portuguese) achieved 83% of the baseline when making inferences on the Portuguese test dataset. This investigation can help other researchers produce results in Machine Learning in a language other than English. Our multilingual-ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: [mRAT-SQL](https://github.com/C4AI/gap-text2sql).
BRACIS 2021: [paper published in Springer Lecture Notes in Computer Science](https://link.springer.com/chapter/10.1007%2F978-3-030-91699-2_35); [here is the pre-print on arXiv](https://arxiv.org/abs/2110.03546).
Based on: RAT-SQL+GAP: [Github](https://github.com/awslabs/gap-text2sql). Paper: [AAAI 2021 paper](https://arxiv.org/abs/2012.10309) | Marchanjo/spider-en-pt | [
"license:cc-by-sa-4.0",
"arxiv:2306.14256",
"arxiv:2110.03546",
"arxiv:2012.10309",
"region:us"
] | 2024-01-14T22:04:25+00:00 | {"license": "cc-by-sa-4.0"} | 2024-01-16T12:38:57+00:00 | [
"2306.14256",
"2110.03546",
"2012.10309"
] | [] | TAGS
#license-cc-by-sa-4.0 #arxiv-2306.14256 #arxiv-2110.03546 #arxiv-2012.10309 #region-us
| Distributed under the Creative Commons-by-sa-4.0 respecting the ShareAlike of the Spider Dataset.
Code explanations and links for the model's checkpoints and datasets are on Github mRAT-SQL
Here is the Hugging Face collection, where you can download the model's checkpoints and datasets; to understand them, though, it is better to go to the Github repo mRAT-SQL.
# mRAT-SQL-FIT
## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention
Marcelo Archanjo Jose, Fabio Gagliardi Cozman
Long sequences of text are challenging in the context of transformers, due to the quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (as techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of table and column names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy results from 0.718 to 0.736 on the validation dataset (Dev). Source code, evaluations, and checkpoints are available at: mRAT-SQL.
Paper published in Springer-Nature - International Journal of Information Technology; here is the SharedIt link, and here is the pre-print on arXiv.
# mRAT-SQL+GAP
## mRAT-SQL+GAP: A Portuguese Text-to-SQL Transformer
Marcelo Archanjo José, Fabio Gagliardi Cozman
The translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with the original and translated training datasets together, even if a single target language is desired. This multilingual BART model fine-tuned with a double-size training dataset (English and Portuguese) achieved 83% of the baseline when making inferences on the Portuguese test dataset. This investigation can help other researchers produce results in Machine Learning in a language other than English. Our multilingual-ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: mRAT-SQL.
BRACIS 2021: paper published in Springer Lecture Notes in Computer Science; here is the pre-print on arXiv.
Based on: RAT-SQL+GAP: Github. Paper: AAAI 2021 paper | [
"# mRAT-SQL-FIT",
"## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention\nMarcelo Archanjo Jose, Fabio Gagliardi Cozman\n\nLong sequences of text are challenging in the context of transformers, due to quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (as techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of tables and columns names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy results from 0.718 to 0.736 in a validation dataset (Dev). Source code, evaluations, and checkpoints are available at: mRAT-SQL.\n\npaper published in Springer-Nature - International Journal of Information Technology, here the SharedIt link. here the pre-print in arXiv.",
"# mRAT-SQL+GAP",
"## mRAT-SQL+GAP:A Portuguese Text-to-SQL Transformer\nMarcelo Archanjo José, Fabio Gagliardi Cozman\n\nThe translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with original and translated training datasets together, even if a single target language is desired. This multilingual BART model fine-tuned with a double-size training dataset (English and Portuguese) achieved 83% of the baseline, making inferences for the Portuguese test dataset. This investigation can help other researchers to produce results in Machine Learning in a language different from English. Our multilingual ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: mRAT-SQL.\n\nBRACIS 2021: paper published in Springer Lecture Notes in Computer Science, here the pre-print in arXiv.\n\nBased on: RAT-SQL+GAP: Github. Paper: AAAI 2021 paper"
] | [
"TAGS\n#license-cc-by-sa-4.0 #arxiv-2306.14256 #arxiv-2110.03546 #arxiv-2012.10309 #region-us \n",
"# mRAT-SQL-FIT",
"## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention\nMarcelo Archanjo Jose, Fabio Gagliardi Cozman\n\nLong sequences of text are challenging in the context of transformers, due to quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (as techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of tables and columns names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy results from 0.718 to 0.736 in a validation dataset (Dev). Source code, evaluations, and checkpoints are available at: mRAT-SQL.\n\npaper published in Springer-Nature - International Journal of Information Technology, here the SharedIt link. here the pre-print in arXiv.",
"# mRAT-SQL+GAP",
"## mRAT-SQL+GAP:A Portuguese Text-to-SQL Transformer\nMarcelo Archanjo José, Fabio Gagliardi Cozman\n\nThe translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with original and translated training datasets together, even if a single target language is desired. This multilingual BART model fine-tuned with a double-size training dataset (English and Portuguese) achieved 83% of the baseline, making inferences for the Portuguese test dataset. This investigation can help other researchers to produce results in Machine Learning in a language different from English. Our multilingual ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: mRAT-SQL.\n\nBRACIS 2021: paper published in Springer Lecture Notes in Computer Science, here the pre-print in arXiv.\n\nBased on: RAT-SQL+GAP: Github. Paper: AAAI 2021 paper"
] |
b1b693e12b8732688fde334dd05021fc7e05421b | Distributed under the Creative Commons-by-sa-4.0 respecting the ShareAlike of the [Spider Dataset](https://yale-lily.github.io/spider).
Code explanations and links for the model's checkpoints and datasets are on Github [mRAT-SQL](https://github.com/C4AI/gap-text2sql)
Here is the [Hugging Face collection](https://huggingface.co/collections/Marchanjo/mrat-sql-65a671743bb0e70b416561f6), where you can download the model's checkpoints and datasets; to understand them, though, it is better to go to the Github repo [mRAT-SQL](https://github.com/C4AI/gap-text2sql).
# mRAT-SQL-FIT
## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention
Marcelo Archanjo Jose, Fabio Gagliardi Cozman
Long sequences of text are challenging in the context of transformers, due to the quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (as techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of table and column names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy results from 0.718 to 0.736 on the validation dataset (Dev). Source code, evaluations, and checkpoints are available at: [mRAT-SQL](https://github.com/C4AI/gap-text2sql).
[Paper published in Springer-Nature - International Journal of Information Technology](https://doi.org/10.1007/s41870-023-01342-3); [here is the SharedIt link](https://rdcu.be/dff19), and [here is the pre-print on arXiv](https://arxiv.org/abs/2306.14256).
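For readers unfamiliar with the fine-tuning setup, the hedged sketch below shows a single training step of a multilingual seq2seq model on a question+schema -> SQL pair; the checkpoint name (a small mT5 variant), the input serialization, and the omitted optimizer loop are all assumptions for illustration, not the paper's configuration.

```python
# Hedged sketch of one fine-tuning step (assumed small checkpoint, no optimizer).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tok = AutoTokenizer.from_pretrained("google/mt5-small")           # assumed checkpoint
model = AutoModelForSeq2SeqLM.from_pretrained("google/mt5-small")
batch = tok(["What is the name and age of each singer? | singer: name, age"],
            text_target=["SELECT name, age FROM singer"],
            return_tensors="pt", padding=True)
loss = model(**batch).loss  # cross-entropy against the SQL target
loss.backward()             # gradients for an (omitted) optimizer step
```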
# mRAT-SQL+GAP
## mRAT-SQL+GAP: A Portuguese Text-to-SQL Transformer
Marcelo Archanjo José, Fabio Gagliardi Cozman
The translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with the original and translated training datasets together, even if a single target language is desired. This multilingual BART model fine-tuned with a double-size training dataset (English and Portuguese) achieved 83% of the baseline when making inferences on the Portuguese test dataset. This investigation can help other researchers produce results in Machine Learning in a language other than English. Our multilingual-ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: [mRAT-SQL](https://github.com/C4AI/gap-text2sql).
BRACIS 2021: [paper published in Springer Lecture Notes in Computer Science](https://link.springer.com/chapter/10.1007%2F978-3-030-91699-2_35); [here is the pre-print on arXiv](https://arxiv.org/abs/2110.03546).
Based on: RAT-SQL+GAP: [Github](https://github.com/awslabs/gap-text2sql). Paper: [AAAI 2021 paper](https://arxiv.org/abs/2012.10309) | Marchanjo/spider-en-pt-es-fr | [
"license:cc-by-sa-4.0",
"arxiv:2306.14256",
"arxiv:2110.03546",
"arxiv:2012.10309",
"region:us"
] | 2024-01-14T22:10:51+00:00 | {"license": "cc-by-sa-4.0"} | 2024-01-16T12:43:13+00:00 | [
"2306.14256",
"2110.03546",
"2012.10309"
] | [] | TAGS
#license-cc-by-sa-4.0 #arxiv-2306.14256 #arxiv-2110.03546 #arxiv-2012.10309 #region-us
| Distributed under the Creative Commons-by-sa-4.0 respecting the ShareAlike of the Spider Dataset.
Code explanations and links for the model's checkpoints and datasets are on Github mRAT-SQL
Here is the Hugging Face collection, where you can download the model's checkpoints and datasets; to understand them, though, it is better to go to the Github repo mRAT-SQL.
# mRAT-SQL-FIT
## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention
Marcelo Archanjo Jose, Fabio Gagliardi Cozman
Long sequences of text are challenging in the context of transformers, due to the quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (as techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of table and column names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy results from 0.718 to 0.736 on the validation dataset (Dev). Source code, evaluations, and checkpoints are available at: mRAT-SQL.
Paper published in Springer-Nature - International Journal of Information Technology; here is the SharedIt link, and here is the pre-print on arXiv.
# mRAT-SQL+GAP
## mRAT-SQL+GAP: A Portuguese Text-to-SQL Transformer
Marcelo Archanjo José, Fabio Gagliardi Cozman
The translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with the original and translated training datasets together, even if a single target language is desired. This multilingual BART model fine-tuned with a double-size training dataset (English and Portuguese) achieved 83% of the baseline when making inferences on the Portuguese test dataset. This investigation can help other researchers produce results in Machine Learning in a language other than English. Our multilingual-ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: mRAT-SQL.
BRACIS 2021: paper published in Springer Lecture Notes in Computer Science; here is the pre-print on arXiv.
Based on: RAT-SQL+GAP: Github. Paper: AAAI 2021 paper | [
"# mRAT-SQL-FIT",
"## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention\nMarcelo Archanjo Jose, Fabio Gagliardi Cozman\n\nLong sequences of text are challenging in the context of transformers, due to quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (as techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of tables and columns names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy results from 0.718 to 0.736 in a validation dataset (Dev). Source code, evaluations, and checkpoints are available at: mRAT-SQL.\n\npaper published in Springer-Nature - International Journal of Information Technology, here the SharedIt link. here the pre-print in arXiv.",
"# mRAT-SQL+GAP",
"## mRAT-SQL+GAP:A Portuguese Text-to-SQL Transformer\nMarcelo Archanjo José, Fabio Gagliardi Cozman\n\nThe translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with original and translated training datasets together, even if a single target language is desired. This multilingual BART model fine-tuned with a double-size training dataset (English and Portuguese) achieved 83% of the baseline, making inferences for the Portuguese test dataset. This investigation can help other researchers to produce results in Machine Learning in a language different from English. Our multilingual ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: mRAT-SQL.\n\nBRACIS 2021: paper published in Springer Lecture Notes in Computer Science, here the pre-print in arXiv.\n\nBased on: RAT-SQL+GAP: Github. Paper: AAAI 2021 paper"
] | [
"TAGS\n#license-cc-by-sa-4.0 #arxiv-2306.14256 #arxiv-2110.03546 #arxiv-2012.10309 #region-us \n",
"# mRAT-SQL-FIT",
"## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention\nMarcelo Archanjo Jose, Fabio Gagliardi Cozman\n\nLong sequences of text are challenging in the context of transformers, due to quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (as techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of tables and columns names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy results from 0.718 to 0.736 in a validation dataset (Dev). Source code, evaluations, and checkpoints are available at: mRAT-SQL.\n\npaper published in Springer-Nature - International Journal of Information Technology, here the SharedIt link. here the pre-print in arXiv.",
"# mRAT-SQL+GAP",
"## mRAT-SQL+GAP:A Portuguese Text-to-SQL Transformer\nMarcelo Archanjo José, Fabio Gagliardi Cozman\n\nThe translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with original and translated training datasets together, even if a single target language is desired. This multilingual BART model fine-tuned with a double-size training dataset (English and Portuguese) achieved 83% of the baseline, making inferences for the Portuguese test dataset. This investigation can help other researchers to produce results in Machine Learning in a language different from English. Our multilingual ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: mRAT-SQL.\n\nBRACIS 2021: paper published in Springer Lecture Notes in Computer Science, here the pre-print in arXiv.\n\nBased on: RAT-SQL+GAP: Github. Paper: AAAI 2021 paper"
] |
0ab51c1dbcda1a84dba2bbdf2a5fee3d8c4b34fe |
# Dataset Card for Evaluation run of Locutusque/Rhino-Mistral-7B
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [Locutusque/Rhino-Mistral-7B](https://huggingface.co/Locutusque/Rhino-Mistral-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Locutusque__Rhino-Mistral-7B",
"harness_winogrande_5",
split="train")
```
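Following the same pattern, the aggregated metrics can be pulled from the "results" configuration described above; as noted, the "train" split always points to the latest run.

```python
from datasets import load_dataset

results = load_dataset("open-llm-leaderboard/details_Locutusque__Rhino-Mistral-7B",
                       "results",
                       split="train")
print(results[0])  # aggregated metrics of the latest evaluation run
```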
## Latest results
These are the [latest results from run 2024-01-14T22:10:37.195277](https://huggingface.co/datasets/open-llm-leaderboard/details_Locutusque__Rhino-Mistral-7B/blob/main/results_2024-01-14T22-10-37.195277.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.48839477081957533,
"acc_stderr": 0.034627634904041645,
"acc_norm": 0.49321170014255594,
"acc_norm_stderr": 0.0353856151916697,
"mc1": 0.2741738066095471,
"mc1_stderr": 0.015616518497219367,
"mc2": 0.4589835712394215,
"mc2_stderr": 0.014873298625532366
},
"harness|arc:challenge|25": {
"acc": 0.43430034129692835,
"acc_stderr": 0.014484703048857364,
"acc_norm": 0.4812286689419795,
"acc_norm_stderr": 0.014601090150633964
},
"harness|hellaswag|10": {
"acc": 0.5212109141605258,
"acc_stderr": 0.004985289555586536,
"acc_norm": 0.7142003584943238,
"acc_norm_stderr": 0.004508710891053852
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.39,
"acc_stderr": 0.04902071300001975,
"acc_norm": 0.39,
"acc_norm_stderr": 0.04902071300001975
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.4888888888888889,
"acc_stderr": 0.04318275491977976,
"acc_norm": 0.4888888888888889,
"acc_norm_stderr": 0.04318275491977976
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.45394736842105265,
"acc_stderr": 0.04051646342874142,
"acc_norm": 0.45394736842105265,
"acc_norm_stderr": 0.04051646342874142
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.49,
"acc_stderr": 0.05024183937956912,
"acc_norm": 0.49,
"acc_norm_stderr": 0.05024183937956912
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.5283018867924528,
"acc_stderr": 0.0307235352490061,
"acc_norm": 0.5283018867924528,
"acc_norm_stderr": 0.0307235352490061
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.4583333333333333,
"acc_stderr": 0.04166666666666665,
"acc_norm": 0.4583333333333333,
"acc_norm_stderr": 0.04166666666666665
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.41,
"acc_stderr": 0.049431107042371025,
"acc_norm": 0.41,
"acc_norm_stderr": 0.049431107042371025
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.4,
"acc_stderr": 0.049236596391733084,
"acc_norm": 0.4,
"acc_norm_stderr": 0.049236596391733084
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.4682080924855491,
"acc_stderr": 0.03804749744364763,
"acc_norm": 0.4682080924855491,
"acc_norm_stderr": 0.03804749744364763
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.3137254901960784,
"acc_stderr": 0.04617034827006718,
"acc_norm": 0.3137254901960784,
"acc_norm_stderr": 0.04617034827006718
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.6,
"acc_stderr": 0.049236596391733084,
"acc_norm": 0.6,
"acc_norm_stderr": 0.049236596391733084
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.4085106382978723,
"acc_stderr": 0.03213418026701576,
"acc_norm": 0.4085106382978723,
"acc_norm_stderr": 0.03213418026701576
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.2982456140350877,
"acc_stderr": 0.04303684033537315,
"acc_norm": 0.2982456140350877,
"acc_norm_stderr": 0.04303684033537315
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.503448275862069,
"acc_stderr": 0.04166567577101579,
"acc_norm": 0.503448275862069,
"acc_norm_stderr": 0.04166567577101579
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.31746031746031744,
"acc_stderr": 0.023973861998992072,
"acc_norm": 0.31746031746031744,
"acc_norm_stderr": 0.023973861998992072
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.2698412698412698,
"acc_stderr": 0.03970158273235173,
"acc_norm": 0.2698412698412698,
"acc_norm_stderr": 0.03970158273235173
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.28,
"acc_stderr": 0.045126085985421276,
"acc_norm": 0.28,
"acc_norm_stderr": 0.045126085985421276
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.5612903225806452,
"acc_stderr": 0.028229497320317216,
"acc_norm": 0.5612903225806452,
"acc_norm_stderr": 0.028229497320317216
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.3645320197044335,
"acc_stderr": 0.0338640574606209,
"acc_norm": 0.3645320197044335,
"acc_norm_stderr": 0.0338640574606209
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.43,
"acc_stderr": 0.049756985195624284,
"acc_norm": 0.43,
"acc_norm_stderr": 0.049756985195624284
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.6606060606060606,
"acc_stderr": 0.03697442205031595,
"acc_norm": 0.6606060606060606,
"acc_norm_stderr": 0.03697442205031595
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.6161616161616161,
"acc_stderr": 0.03464881675016338,
"acc_norm": 0.6161616161616161,
"acc_norm_stderr": 0.03464881675016338
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.6476683937823834,
"acc_stderr": 0.03447478286414358,
"acc_norm": 0.6476683937823834,
"acc_norm_stderr": 0.03447478286414358
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.4076923076923077,
"acc_stderr": 0.024915243985987847,
"acc_norm": 0.4076923076923077,
"acc_norm_stderr": 0.024915243985987847
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.25925925925925924,
"acc_stderr": 0.026719240783712156,
"acc_norm": 0.25925925925925924,
"acc_norm_stderr": 0.026719240783712156
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.4789915966386555,
"acc_stderr": 0.032449808499900284,
"acc_norm": 0.4789915966386555,
"acc_norm_stderr": 0.032449808499900284
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.33112582781456956,
"acc_stderr": 0.038425817186598696,
"acc_norm": 0.33112582781456956,
"acc_norm_stderr": 0.038425817186598696
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.655045871559633,
"acc_stderr": 0.020380605405066962,
"acc_norm": 0.655045871559633,
"acc_norm_stderr": 0.020380605405066962
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.4722222222222222,
"acc_stderr": 0.0340470532865388,
"acc_norm": 0.4722222222222222,
"acc_norm_stderr": 0.0340470532865388
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.5637254901960784,
"acc_stderr": 0.03480693138457039,
"acc_norm": 0.5637254901960784,
"acc_norm_stderr": 0.03480693138457039
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.6118143459915611,
"acc_stderr": 0.03172295004332329,
"acc_norm": 0.6118143459915611,
"acc_norm_stderr": 0.03172295004332329
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.5650224215246636,
"acc_stderr": 0.033272833702713445,
"acc_norm": 0.5650224215246636,
"acc_norm_stderr": 0.033272833702713445
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.5877862595419847,
"acc_stderr": 0.04317171194870254,
"acc_norm": 0.5877862595419847,
"acc_norm_stderr": 0.04317171194870254
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.6198347107438017,
"acc_stderr": 0.04431324501968431,
"acc_norm": 0.6198347107438017,
"acc_norm_stderr": 0.04431324501968431
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.5462962962962963,
"acc_stderr": 0.04812917324536823,
"acc_norm": 0.5462962962962963,
"acc_norm_stderr": 0.04812917324536823
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.49693251533742333,
"acc_stderr": 0.03928297078179662,
"acc_norm": 0.49693251533742333,
"acc_norm_stderr": 0.03928297078179662
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.4375,
"acc_stderr": 0.04708567521880525,
"acc_norm": 0.4375,
"acc_norm_stderr": 0.04708567521880525
},
"harness|hendrycksTest-management|5": {
"acc": 0.6990291262135923,
"acc_stderr": 0.04541609446503948,
"acc_norm": 0.6990291262135923,
"acc_norm_stderr": 0.04541609446503948
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.6709401709401709,
"acc_stderr": 0.030782321577688173,
"acc_norm": 0.6709401709401709,
"acc_norm_stderr": 0.030782321577688173
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.51,
"acc_stderr": 0.05024183937956913,
"acc_norm": 0.51,
"acc_norm_stderr": 0.05024183937956913
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.6577266922094508,
"acc_stderr": 0.016967031766413624,
"acc_norm": 0.6577266922094508,
"acc_norm_stderr": 0.016967031766413624
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.5173410404624278,
"acc_stderr": 0.02690290045866664,
"acc_norm": 0.5173410404624278,
"acc_norm_stderr": 0.02690290045866664
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.3206703910614525,
"acc_stderr": 0.015609929559348408,
"acc_norm": 0.3206703910614525,
"acc_norm_stderr": 0.015609929559348408
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.5294117647058824,
"acc_stderr": 0.028580341065138286,
"acc_norm": 0.5294117647058824,
"acc_norm_stderr": 0.028580341065138286
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.5401929260450161,
"acc_stderr": 0.028306190403305693,
"acc_norm": 0.5401929260450161,
"acc_norm_stderr": 0.028306190403305693
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.5308641975308642,
"acc_stderr": 0.02776768960683392,
"acc_norm": 0.5308641975308642,
"acc_norm_stderr": 0.02776768960683392
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.35106382978723405,
"acc_stderr": 0.02847350127296376,
"acc_norm": 0.35106382978723405,
"acc_norm_stderr": 0.02847350127296376
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.36114732724902215,
"acc_stderr": 0.012267935477519028,
"acc_norm": 0.36114732724902215,
"acc_norm_stderr": 0.012267935477519028
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.4632352941176471,
"acc_stderr": 0.030290619180485694,
"acc_norm": 0.4632352941176471,
"acc_norm_stderr": 0.030290619180485694
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.4362745098039216,
"acc_stderr": 0.02006287424353913,
"acc_norm": 0.4362745098039216,
"acc_norm_stderr": 0.02006287424353913
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.5272727272727272,
"acc_stderr": 0.04782001791380061,
"acc_norm": 0.5272727272727272,
"acc_norm_stderr": 0.04782001791380061
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.5510204081632653,
"acc_stderr": 0.03184213866687579,
"acc_norm": 0.5510204081632653,
"acc_norm_stderr": 0.03184213866687579
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.6218905472636815,
"acc_stderr": 0.034288678487786564,
"acc_norm": 0.6218905472636815,
"acc_norm_stderr": 0.034288678487786564
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.76,
"acc_stderr": 0.042923469599092816,
"acc_norm": 0.76,
"acc_norm_stderr": 0.042923469599092816
},
"harness|hendrycksTest-virology|5": {
"acc": 0.45180722891566266,
"acc_stderr": 0.03874371556587953,
"acc_norm": 0.45180722891566266,
"acc_norm_stderr": 0.03874371556587953
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.7017543859649122,
"acc_stderr": 0.03508771929824563,
"acc_norm": 0.7017543859649122,
"acc_norm_stderr": 0.03508771929824563
},
"harness|truthfulqa:mc|0": {
"mc1": 0.2741738066095471,
"mc1_stderr": 0.015616518497219367,
"mc2": 0.4589835712394215,
"mc2_stderr": 0.014873298625532366
},
"harness|winogrande|5": {
"acc": 0.7111286503551697,
"acc_stderr": 0.012738241271018445
},
"harness|gsm8k|5": {
"acc": 0.221379833206975,
"acc_stderr": 0.011436000004253521
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | open-llm-leaderboard/details_Locutusque__Rhino-Mistral-7B | [
"region:us"
] | 2024-01-14T22:13:01+00:00 | {"pretty_name": "Evaluation run of Locutusque/Rhino-Mistral-7B", "dataset_summary": "Dataset automatically created during the evaluation run of model [Locutusque/Rhino-Mistral-7B](https://huggingface.co/Locutusque/Rhino-Mistral-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Locutusque__Rhino-Mistral-7B\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-14T22:10:37.195277](https://huggingface.co/datasets/open-llm-leaderboard/details_Locutusque__Rhino-Mistral-7B/blob/main/results_2024-01-14T22-10-37.195277.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.48839477081957533,\n \"acc_stderr\": 0.034627634904041645,\n \"acc_norm\": 0.49321170014255594,\n \"acc_norm_stderr\": 0.0353856151916697,\n \"mc1\": 0.2741738066095471,\n \"mc1_stderr\": 0.015616518497219367,\n \"mc2\": 0.4589835712394215,\n \"mc2_stderr\": 0.014873298625532366\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.43430034129692835,\n \"acc_stderr\": 0.014484703048857364,\n \"acc_norm\": 0.4812286689419795,\n \"acc_norm_stderr\": 0.014601090150633964\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.5212109141605258,\n \"acc_stderr\": 0.004985289555586536,\n \"acc_norm\": 0.7142003584943238,\n \"acc_norm_stderr\": 0.004508710891053852\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.39,\n \"acc_stderr\": 0.04902071300001975,\n \"acc_norm\": 0.39,\n \"acc_norm_stderr\": 0.04902071300001975\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.4888888888888889,\n \"acc_stderr\": 0.04318275491977976,\n \"acc_norm\": 0.4888888888888889,\n \"acc_norm_stderr\": 0.04318275491977976\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.45394736842105265,\n \"acc_stderr\": 0.04051646342874142,\n \"acc_norm\": 0.45394736842105265,\n \"acc_norm_stderr\": 0.04051646342874142\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.49,\n \"acc_stderr\": 0.05024183937956912,\n \"acc_norm\": 0.49,\n \"acc_norm_stderr\": 0.05024183937956912\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.5283018867924528,\n \"acc_stderr\": 0.0307235352490061,\n \"acc_norm\": 0.5283018867924528,\n \"acc_norm_stderr\": 0.0307235352490061\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.4583333333333333,\n \"acc_stderr\": 0.04166666666666665,\n \"acc_norm\": 0.4583333333333333,\n \"acc_norm_stderr\": 0.04166666666666665\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.41,\n \"acc_stderr\": 
0.049431107042371025,\n \"acc_norm\": 0.41,\n \"acc_norm_stderr\": 0.049431107042371025\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.4,\n \"acc_stderr\": 0.049236596391733084,\n \"acc_norm\": 0.4,\n \"acc_norm_stderr\": 0.049236596391733084\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.4682080924855491,\n \"acc_stderr\": 0.03804749744364763,\n \"acc_norm\": 0.4682080924855491,\n \"acc_norm_stderr\": 0.03804749744364763\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.3137254901960784,\n \"acc_stderr\": 0.04617034827006718,\n \"acc_norm\": 0.3137254901960784,\n \"acc_norm_stderr\": 0.04617034827006718\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.6,\n \"acc_stderr\": 0.049236596391733084,\n \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.049236596391733084\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.4085106382978723,\n \"acc_stderr\": 0.03213418026701576,\n \"acc_norm\": 0.4085106382978723,\n \"acc_norm_stderr\": 0.03213418026701576\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.2982456140350877,\n \"acc_stderr\": 0.04303684033537315,\n \"acc_norm\": 0.2982456140350877,\n \"acc_norm_stderr\": 0.04303684033537315\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.503448275862069,\n \"acc_stderr\": 0.04166567577101579,\n \"acc_norm\": 0.503448275862069,\n \"acc_norm_stderr\": 0.04166567577101579\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.31746031746031744,\n \"acc_stderr\": 0.023973861998992072,\n \"acc_norm\": 0.31746031746031744,\n \"acc_norm_stderr\": 0.023973861998992072\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.2698412698412698,\n \"acc_stderr\": 0.03970158273235173,\n \"acc_norm\": 0.2698412698412698,\n \"acc_norm_stderr\": 0.03970158273235173\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.28,\n \"acc_stderr\": 0.045126085985421276,\n \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.045126085985421276\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.5612903225806452,\n \"acc_stderr\": 0.028229497320317216,\n \"acc_norm\": 0.5612903225806452,\n \"acc_norm_stderr\": 0.028229497320317216\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.3645320197044335,\n \"acc_stderr\": 0.0338640574606209,\n \"acc_norm\": 0.3645320197044335,\n \"acc_norm_stderr\": 0.0338640574606209\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.43,\n \"acc_stderr\": 0.049756985195624284,\n \"acc_norm\": 0.43,\n \"acc_norm_stderr\": 0.049756985195624284\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.6606060606060606,\n \"acc_stderr\": 0.03697442205031595,\n \"acc_norm\": 0.6606060606060606,\n \"acc_norm_stderr\": 0.03697442205031595\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.6161616161616161,\n \"acc_stderr\": 0.03464881675016338,\n \"acc_norm\": 0.6161616161616161,\n \"acc_norm_stderr\": 0.03464881675016338\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.6476683937823834,\n \"acc_stderr\": 0.03447478286414358,\n \"acc_norm\": 0.6476683937823834,\n \"acc_norm_stderr\": 0.03447478286414358\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": 
{\n \"acc\": 0.4076923076923077,\n \"acc_stderr\": 0.024915243985987847,\n \"acc_norm\": 0.4076923076923077,\n \"acc_norm_stderr\": 0.024915243985987847\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.25925925925925924,\n \"acc_stderr\": 0.026719240783712156,\n \"acc_norm\": 0.25925925925925924,\n \"acc_norm_stderr\": 0.026719240783712156\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.4789915966386555,\n \"acc_stderr\": 0.032449808499900284,\n \"acc_norm\": 0.4789915966386555,\n \"acc_norm_stderr\": 0.032449808499900284\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.33112582781456956,\n \"acc_stderr\": 0.038425817186598696,\n \"acc_norm\": 0.33112582781456956,\n \"acc_norm_stderr\": 0.038425817186598696\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.655045871559633,\n \"acc_stderr\": 0.020380605405066962,\n \"acc_norm\": 0.655045871559633,\n \"acc_norm_stderr\": 0.020380605405066962\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.4722222222222222,\n \"acc_stderr\": 0.0340470532865388,\n \"acc_norm\": 0.4722222222222222,\n \"acc_norm_stderr\": 0.0340470532865388\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.5637254901960784,\n \"acc_stderr\": 0.03480693138457039,\n \"acc_norm\": 0.5637254901960784,\n \"acc_norm_stderr\": 0.03480693138457039\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.6118143459915611,\n \"acc_stderr\": 0.03172295004332329,\n \"acc_norm\": 0.6118143459915611,\n \"acc_norm_stderr\": 0.03172295004332329\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.5650224215246636,\n \"acc_stderr\": 0.033272833702713445,\n \"acc_norm\": 0.5650224215246636,\n \"acc_norm_stderr\": 0.033272833702713445\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.5877862595419847,\n \"acc_stderr\": 0.04317171194870254,\n \"acc_norm\": 0.5877862595419847,\n \"acc_norm_stderr\": 0.04317171194870254\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.6198347107438017,\n \"acc_stderr\": 0.04431324501968431,\n \"acc_norm\": 0.6198347107438017,\n \"acc_norm_stderr\": 0.04431324501968431\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.5462962962962963,\n \"acc_stderr\": 0.04812917324536823,\n \"acc_norm\": 0.5462962962962963,\n \"acc_norm_stderr\": 0.04812917324536823\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.49693251533742333,\n \"acc_stderr\": 0.03928297078179662,\n \"acc_norm\": 0.49693251533742333,\n \"acc_norm_stderr\": 0.03928297078179662\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4375,\n \"acc_stderr\": 0.04708567521880525,\n \"acc_norm\": 0.4375,\n \"acc_norm_stderr\": 0.04708567521880525\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.6990291262135923,\n \"acc_stderr\": 0.04541609446503948,\n \"acc_norm\": 0.6990291262135923,\n \"acc_norm_stderr\": 0.04541609446503948\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.6709401709401709,\n \"acc_stderr\": 0.030782321577688173,\n \"acc_norm\": 0.6709401709401709,\n \"acc_norm_stderr\": 0.030782321577688173\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.51,\n \"acc_stderr\": 0.05024183937956913,\n \"acc_norm\": 0.51,\n \"acc_norm_stderr\": 0.05024183937956913\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.6577266922094508,\n \"acc_stderr\": 0.016967031766413624,\n \"acc_norm\": 
0.6577266922094508,\n \"acc_norm_stderr\": 0.016967031766413624\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.5173410404624278,\n \"acc_stderr\": 0.02690290045866664,\n \"acc_norm\": 0.5173410404624278,\n \"acc_norm_stderr\": 0.02690290045866664\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.3206703910614525,\n \"acc_stderr\": 0.015609929559348408,\n \"acc_norm\": 0.3206703910614525,\n \"acc_norm_stderr\": 0.015609929559348408\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.5294117647058824,\n \"acc_stderr\": 0.028580341065138286,\n \"acc_norm\": 0.5294117647058824,\n \"acc_norm_stderr\": 0.028580341065138286\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.5401929260450161,\n \"acc_stderr\": 0.028306190403305693,\n \"acc_norm\": 0.5401929260450161,\n \"acc_norm_stderr\": 0.028306190403305693\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.5308641975308642,\n \"acc_stderr\": 0.02776768960683392,\n \"acc_norm\": 0.5308641975308642,\n \"acc_norm_stderr\": 0.02776768960683392\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.35106382978723405,\n \"acc_stderr\": 0.02847350127296376,\n \"acc_norm\": 0.35106382978723405,\n \"acc_norm_stderr\": 0.02847350127296376\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.36114732724902215,\n \"acc_stderr\": 0.012267935477519028,\n \"acc_norm\": 0.36114732724902215,\n \"acc_norm_stderr\": 0.012267935477519028\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.4632352941176471,\n \"acc_stderr\": 0.030290619180485694,\n \"acc_norm\": 0.4632352941176471,\n \"acc_norm_stderr\": 0.030290619180485694\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.4362745098039216,\n \"acc_stderr\": 0.02006287424353913,\n \"acc_norm\": 0.4362745098039216,\n \"acc_norm_stderr\": 0.02006287424353913\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.5272727272727272,\n \"acc_stderr\": 0.04782001791380061,\n \"acc_norm\": 0.5272727272727272,\n \"acc_norm_stderr\": 0.04782001791380061\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.5510204081632653,\n \"acc_stderr\": 0.03184213866687579,\n \"acc_norm\": 0.5510204081632653,\n \"acc_norm_stderr\": 0.03184213866687579\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.6218905472636815,\n \"acc_stderr\": 0.034288678487786564,\n \"acc_norm\": 0.6218905472636815,\n \"acc_norm_stderr\": 0.034288678487786564\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.76,\n \"acc_stderr\": 0.042923469599092816,\n \"acc_norm\": 0.76,\n \"acc_norm_stderr\": 0.042923469599092816\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.45180722891566266,\n \"acc_stderr\": 0.03874371556587953,\n \"acc_norm\": 0.45180722891566266,\n \"acc_norm_stderr\": 0.03874371556587953\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.7017543859649122,\n \"acc_stderr\": 0.03508771929824563,\n \"acc_norm\": 0.7017543859649122,\n \"acc_norm_stderr\": 0.03508771929824563\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.2741738066095471,\n \"mc1_stderr\": 0.015616518497219367,\n \"mc2\": 0.4589835712394215,\n \"mc2_stderr\": 0.014873298625532366\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7111286503551697,\n \"acc_stderr\": 0.012738241271018445\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.221379833206975,\n \"acc_stderr\": 0.011436000004253521\n }\n}\n```", "repo_url": 
"https://huggingface.co/Locutusque/Rhino-Mistral-7B", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_14T22_10_37.195277", "path": ["**/details_harness|arc:challenge|25_2024-01-14T22-10-37.195277.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-14T22-10-37.195277.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_14T22_10_37.195277", "path": ["**/details_harness|gsm8k|5_2024-01-14T22-10-37.195277.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-14T22-10-37.195277.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_14T22_10_37.195277", "path": ["**/details_harness|hellaswag|10_2024-01-14T22-10-37.195277.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-14T22-10-37.195277.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_14T22_10_37.195277", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T22-10-37.195277.parquet", 
"**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-14T22-10-37.195277.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T22-10-37.195277.parquet", 
"**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-14T22-10-37.195277.parquet", 
"**/details_harness|hendrycksTest-prehistory|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-14T22-10-37.195277.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-14T22-10-37.195277.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_14T22_10_37.195277", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T22-10-37.195277.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T22-10-37.195277.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_14T22_10_37.195277", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-14T22-10-37.195277.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-14T22-10-37.195277.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_14T22_10_37.195277", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-14T22-10-37.195277.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-14T22-10-37.195277.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_14T22_10_37.195277", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T22-10-37.195277.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T22-10-37.195277.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_14T22_10_37.195277", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T22-10-37.195277.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T22-10-37.195277.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_14T22_10_37.195277", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-14T22-10-37.195277.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-14T22-10-37.195277.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_14T22_10_37.195277", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T22-10-37.195277.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T22-10-37.195277.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_14T22_10_37.195277", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T22-10-37.195277.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T22-10-37.195277.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_14T22_10_37.195277", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T22-10-37.195277.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T22-10-37.195277.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_14T22_10_37.195277", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T22-10-37.195277.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T22-10-37.195277.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_14T22_10_37.195277", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-14T22-10-37.195277.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-14T22-10-37.195277.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_14T22_10_37.195277", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-14T22-10-37.195277.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-14T22-10-37.195277.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_14T22_10_37.195277", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T22-10-37.195277.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T22-10-37.195277.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_14T22_10_37.195277", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-14T22-10-37.195277.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-14T22-10-37.195277.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_14T22_10_37.195277", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T22-10-37.195277.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T22-10-37.195277.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_14T22_10_37.195277", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T22-10-37.195277.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T22-10-37.195277.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_01_14T22_10_37.195277", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T22-10-37.195277.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T22-10-37.195277.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_14T22_10_37.195277", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-14T22-10-37.195277.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-14T22-10-37.195277.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_14T22_10_37.195277", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T22-10-37.195277.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T22-10-37.195277.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_14T22_10_37.195277", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T22-10-37.195277.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T22-10-37.195277.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_14T22_10_37.195277", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T22-10-37.195277.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T22-10-37.195277.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_14T22_10_37.195277", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T22-10-37.195277.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T22-10-37.195277.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_14T22_10_37.195277", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T22-10-37.195277.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T22-10-37.195277.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_14T22_10_37.195277", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T22-10-37.195277.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T22-10-37.195277.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_14T22_10_37.195277", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T22-10-37.195277.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T22-10-37.195277.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_14T22_10_37.195277", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T22-10-37.195277.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T22-10-37.195277.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_14T22_10_37.195277", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T22-10-37.195277.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T22-10-37.195277.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_14T22_10_37.195277", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T22-10-37.195277.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T22-10-37.195277.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_14T22_10_37.195277", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T22-10-37.195277.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T22-10-37.195277.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_14T22_10_37.195277", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T22-10-37.195277.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T22-10-37.195277.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_14T22_10_37.195277", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T22-10-37.195277.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T22-10-37.195277.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_14T22_10_37.195277", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T22-10-37.195277.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T22-10-37.195277.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_14T22_10_37.195277", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-14T22-10-37.195277.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-14T22-10-37.195277.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_14T22_10_37.195277", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T22-10-37.195277.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T22-10-37.195277.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_14T22_10_37.195277", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-14T22-10-37.195277.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-14T22-10-37.195277.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_14T22_10_37.195277", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T22-10-37.195277.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T22-10-37.195277.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_14T22_10_37.195277", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T22-10-37.195277.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T22-10-37.195277.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_14T22_10_37.195277", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T22-10-37.195277.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T22-10-37.195277.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_01_14T22_10_37.195277", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-14T22-10-37.195277.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-14T22-10-37.195277.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_14T22_10_37.195277", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-14T22-10-37.195277.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-14T22-10-37.195277.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_14T22_10_37.195277", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T22-10-37.195277.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T22-10-37.195277.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_14T22_10_37.195277", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T22-10-37.195277.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T22-10-37.195277.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_14T22_10_37.195277", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T22-10-37.195277.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T22-10-37.195277.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_14T22_10_37.195277", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T22-10-37.195277.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T22-10-37.195277.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_01_14T22_10_37.195277", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-14T22-10-37.195277.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-14T22-10-37.195277.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_14T22_10_37.195277", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-14T22-10-37.195277.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-14T22-10-37.195277.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_14T22_10_37.195277", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-14T22-10-37.195277.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-14T22-10-37.195277.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_14T22_10_37.195277", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T22-10-37.195277.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T22-10-37.195277.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_14T22_10_37.195277", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-14T22-10-37.195277.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2024-01-14T22-10-37.195277.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_14T22_10_37.195277", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T22-10-37.195277.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T22-10-37.195277.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_14T22_10_37.195277", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T22-10-37.195277.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T22-10-37.195277.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_14T22_10_37.195277", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-14T22-10-37.195277.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-14T22-10-37.195277.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_14T22_10_37.195277", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-14T22-10-37.195277.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-14T22-10-37.195277.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_14T22_10_37.195277", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-14T22-10-37.195277.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-14T22-10-37.195277.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_14T22_10_37.195277", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T22-10-37.195277.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T22-10-37.195277.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_14T22_10_37.195277", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-14T22-10-37.195277.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-14T22-10-37.195277.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_14T22_10_37.195277", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-14T22-10-37.195277.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-14T22-10-37.195277.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_14T22_10_37.195277", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-14T22-10-37.195277.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-14T22-10-37.195277.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_14T22_10_37.195277", "path": ["**/details_harness|winogrande|5_2024-01-14T22-10-37.195277.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-14T22-10-37.195277.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_01_14T22_10_37.195277", "path": ["results_2024-01-14T22-10-37.195277.parquet"]}, {"split": "latest", "path": 
["results_2024-01-14T22-10-37.195277.parquet"]}]}]} | 2024-01-14T22:13:22+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of Locutusque/Rhino-Mistral-7B
Dataset automatically created during the evaluation run of model Locutusque/Rhino-Mistral-7B on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
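```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Locutusque__Rhino-Mistral-7B",
	"harness_winogrande_5",
	split="train")
```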
## Latest results
These are the latest results from run 2024-01-14T22:10:37.195277 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
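An excerpt of the aggregated scores follows (the full per-task JSON is stored in this record's metadata):

```python
{
    "all": {
        "acc": 0.48839477081957533,
        "acc_stderr": 0.034627634904041645,
        "acc_norm": 0.49321170014255594,
        "acc_norm_stderr": 0.0353856151916697,
        "mc1": 0.2741738066095471,
        "mc1_stderr": 0.015616518497219367,
        "mc2": 0.4589835712394215,
        "mc2_stderr": 0.014873298625532366
    }
}
```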
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Evaluation run of Locutusque/Rhino-Mistral-7B\n\n\n\nDataset automatically created during the evaluation run of model Locutusque/Rhino-Mistral-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-01-14T22:10:37.195277(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of Locutusque/Rhino-Mistral-7B\n\n\n\nDataset automatically created during the evaluation run of model Locutusque/Rhino-Mistral-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-01-14T22:10:37.195277(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] |
c333c0646b7da531a9eed8f33fd059dce7ec7886 | Distributed under the Creative Commons BY-SA 4.0 license, respecting the ShareAlike terms of the [Spider Dataset](https://yale-lily.github.io/spider).
Code explanations and links to the model checkpoints and datasets are on GitHub: [mRAT-SQL](https://github.com/C4AI/gap-text2sql)
Here is the [Hugging Face collection](https://huggingface.co/collections/Marchanjo/mrat-sql-65a671743bb0e70b416561f6), where you can download the model checkpoints and datasets, but for a full understanding it is better to go to the GitHub repository [mRAT-SQL](https://github.com/C4AI/gap-text2sql).
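For example, the repository files can be fetched locally with `huggingface_hub` (a minimal sketch; only the repo id is taken from this card, the rest is generic usage):

```python
from huggingface_hub import snapshot_download

# Download this dataset repository locally; repo_id is this card's own id.
local_path = snapshot_download(repo_id="Marchanjo/spider-en-pt-es-fr-enr-enb",
                               repo_type="dataset")
print(local_path)
```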
# mRAT-SQL-FIT
## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention
Marcelo Archanjo Jose, Fabio Gagliardi Cozman
Long sequences of text are challenging in the context of transformers, due to quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (as techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of table and column names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy from 0.718 to 0.736 on the validation dataset (Dev). Source code, evaluations, and checkpoints are available at: [mRAT-SQL](https://github.com/C4AI/gap-text2sql).
[Paper published in Springer Nature - International Journal of Information Technology](https://doi.org/10.1007/s41870-023-01342-3); [here is the SharedIt link](https://rdcu.be/dff19), and [here is the pre-print on arXiv](https://arxiv.org/abs/2306.14256).
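To make the schema-pruning idea concrete, here is a minimal sketch (not the authors' implementation; the keyword-overlap heuristic and the toy question/schema are assumptions for illustration):

```python
def prune_schema(question: str, schema: dict) -> dict:
    """Keep only tables/columns whose names overlap with the question tokens.

    Toy heuristic: the real mRAT-SQL pipeline uses a more careful procedure.
    """
    q_tokens = {tok.lower().strip("?,.") for tok in question.split()}
    pruned = {}
    for table, columns in schema.items():
        kept = [col for col in columns
                if any(part in q_tokens for part in col.lower().split("_"))]
        # Keep the table if its name is mentioned or any of its columns survived.
        if table.lower() in q_tokens or kept:
            pruned[table] = kept or columns
    return pruned

question = "List the name and country of each singer"
schema = {"singer": ["singer_id", "name", "country"],
          "concert": ["concert_id", "venue"]}
print(prune_schema(question, schema))
# prints {'singer': ['singer_id', 'name', 'country']}; the 'concert' table is pruned away
```

Shortening the serialized schema in this way is what keeps the concatenated question-plus-schema input under the 512-token limit.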
# mRAT-SQL+GAP
## mRAT-SQL+GAP: A Portuguese Text-to-SQL Transformer
Marcelo Archanjo José, Fabio Gagliardi Cozman
The translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with original and translated training datasets together, even if a single target language is desired. This multilingual BART model fine-tuned with a double-size training dataset (English and Portuguese) achieved 83% of the baseline, making inferences for the Portuguese test dataset. This investigation can help other researchers to produce results in Machine Learning in a language different from English. Our multilingual ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: [mRAT-SQL](https://github.com/C4AI/gap-text2sql).
BRACIS 2021: [paper published in Springer Lecture Notes in Computer Science](https://link.springer.com/chapter/10.1007%2F978-3-030-91699-2_35); [here is the pre-print on arXiv](https://arxiv.org/abs/2110.03546).
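A minimal sketch of the double-size training setup described above (the file names and Spider-style JSON layout are assumptions; the actual training scripts live in the mRAT-SQL repository):

```python
import json

# Hypothetical file names for Spider-style example lists
# (each entry holds a natural-language question and its SQL query).
with open("spider_train_english.json") as f:
    english_examples = json.load(f)
with open("spider_train_portuguese.json") as f:
    portuguese_examples = json.load(f)

# Train on both languages together: the paper reports this outperforms
# training on the translated set alone, even when only Portuguese
# inference is needed.
train_examples = english_examples + portuguese_examples
print(f"training on {len(train_examples)} examples")
```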
Based on RAT-SQL+GAP: [GitHub](https://github.com/awslabs/gap-text2sql). Paper: [AAAI 2021 paper](https://arxiv.org/abs/2012.10309) | Marchanjo/spider-en-pt-es-fr-enr-enb | [
"license:cc-by-sa-4.0",
"arxiv:2306.14256",
"arxiv:2110.03546",
"arxiv:2012.10309",
"region:us"
] | 2024-01-14T22:22:27+00:00 | {"license": "cc-by-sa-4.0"} | 2024-01-16T12:39:30+00:00 | [
"2306.14256",
"2110.03546",
"2012.10309"
] | [] | TAGS
#license-cc-by-sa-4.0 #arxiv-2306.14256 #arxiv-2110.03546 #arxiv-2012.10309 #region-us
| Distributed under the Creative Commons BY-SA 4.0 license, respecting the ShareAlike terms of the Spider Dataset.
Code explanations and links to the model checkpoints and datasets are on GitHub: mRAT-SQL
Here is the Hugging Face collection, where you can download the model checkpoints and datasets, but for a full understanding it is better to go to the GitHub repository mRAT-SQL.
# mRAT-SQL-FIT
## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention
Marcelo Archanjo Jose, Fabio Gagliardi Cozman
Long sequences of text are challenging in the context of transformers, due to quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (as techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of table and column names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy from 0.718 to 0.736 on the validation dataset (Dev). Source code, evaluations, and checkpoints are available at: mRAT-SQL.
Paper published in Springer Nature - International Journal of Information Technology; here is the SharedIt link, and here is the pre-print on arXiv.
# mRAT-SQL+GAP
## mRAT-SQL+GAP: A Portuguese Text-to-SQL Transformer
Marcelo Archanjo José, Fabio Gagliardi Cozman
The translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with original and translated training datasets together, even if a single target language is desired. This multilingual BART model fine-tuned with a double-size training dataset (English and Portuguese) achieved 83% of the baseline, making inferences for the Portuguese test dataset. This investigation can help other researchers to produce results in Machine Learning in a language different from English. Our multilingual ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: mRAT-SQL.
BRACIS 2021: paper published in Springer Lecture Notes in Computer Science; here is the pre-print on arXiv.
Based on RAT-SQL+GAP: GitHub. Paper: AAAI 2021 paper | [
"# mRAT-SQL-FIT",
"## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention\nMarcelo Archanjo Jose, Fabio Gagliardi Cozman\n\nLong sequences of text are challenging in the context of transformers, due to quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (as techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of tables and columns names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy results from 0.718 to 0.736 in a validation dataset (Dev). Source code, evaluations, and checkpoints are available at: mRAT-SQL.\n\npaper published in Springer-Nature - International Journal of Information Technology, here the SharedIt link. here the pre-print in arXiv.",
"# mRAT-SQL+GAP",
"## mRAT-SQL+GAP:A Portuguese Text-to-SQL Transformer\nMarcelo Archanjo José, Fabio Gagliardi Cozman\n\nThe translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with original and translated training datasets together, even if a single target language is desired. This multilingual BART model fine-tuned with a double-size training dataset (English and Portuguese) achieved 83% of the baseline, making inferences for the Portuguese test dataset. This investigation can help other researchers to produce results in Machine Learning in a language different from English. Our multilingual ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: mRAT-SQL.\n\nBRACIS 2021: paper published in Springer Lecture Notes in Computer Science, here the pre-print in arXiv.\n\nBased on: RAT-SQL+GAP: Github. Paper: AAAI 2021 paper"
] | [
"TAGS\n#license-cc-by-sa-4.0 #arxiv-2306.14256 #arxiv-2110.03546 #arxiv-2012.10309 #region-us \n",
"# mRAT-SQL-FIT",
"## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention\nMarcelo Archanjo Jose, Fabio Gagliardi Cozman\n\nLong sequences of text are challenging in the context of transformers, due to quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (as techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of tables and columns names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy results from 0.718 to 0.736 in a validation dataset (Dev). Source code, evaluations, and checkpoints are available at: mRAT-SQL.\n\npaper published in Springer-Nature - International Journal of Information Technology, here the SharedIt link. here the pre-print in arXiv.",
"# mRAT-SQL+GAP",
"## mRAT-SQL+GAP:A Portuguese Text-to-SQL Transformer\nMarcelo Archanjo José, Fabio Gagliardi Cozman\n\nThe translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with original and translated training datasets together, even if a single target language is desired. This multilingual BART model fine-tuned with a double-size training dataset (English and Portuguese) achieved 83% of the baseline, making inferences for the Portuguese test dataset. This investigation can help other researchers to produce results in Machine Learning in a language different from English. Our multilingual ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: mRAT-SQL.\n\nBRACIS 2021: paper published in Springer Lecture Notes in Computer Science, here the pre-print in arXiv.\n\nBased on: RAT-SQL+GAP: Github. Paper: AAAI 2021 paper"
] |
27d2c438a943e4786bc8ac67fa663708b60530c1 |
This is a subset (2000 samples) of [`timdettmers/openassistant-guanaco`](https://huggingface.co/datasets/timdettmers/openassistant-guanaco) dataset, processed to match Mistral-7B-instruct-v0.2's prompt format as described [in this article](https://huggingface.co/blog/llama2#how-to-prompt-llama-2). It was created using the [colab notebook](https://colab.research.google.com/drive/1afeicfJa9Mo8-wEcDoGrjyoVLyFkF9xm?usp=sharing).
Inspired by Maxime Labonne's [llm-course repo](https://github.com/mlabonne/llm-course).
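For readers who want to reproduce the preprocessing, a rough sketch of the transformation is shown below; this is not the exact notebook code, and the turn-splitting regex is an assumption about the source format (openassistant-guanaco stores dialogues as `### Human: ... ### Assistant: ...`), but it illustrates the re-wrapping into Mistral's `<s>[INST] ... [/INST] ...</s>` chat format.

```python
import re
from datasets import load_dataset

def to_mistral_format(example):
    # Split the raw dialogue into alternating human/assistant turns.
    turns = re.split(r"### (?:Human|Assistant): ", example["text"])[1:]
    out = ""
    for human, assistant in zip(turns[0::2], turns[1::2]):
        out += f"<s>[INST] {human.strip()} [/INST] {assistant.strip()}</s>"
    return {"text": out}

ds = load_dataset("timdettmers/openassistant-guanaco", split="train[:2000]")
ds = ds.map(to_mistral_format)
```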
| wenqiglantz/guanaco-llama2-2k | [
"region:us"
] | 2024-01-14T22:27:26+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3211457, "num_examples": 2000}], "download_size": 1887239, "dataset_size": 3211457}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2024-01-16T04:30:43+00:00 | [] | [] | TAGS
#region-us
|
d593cde04b493e89bab6a4fa30be891c05c41c0b | Distributed under the Creative Commons BY-SA 4.0 license, respecting the ShareAlike terms of the [Spider Dataset](https://yale-lily.github.io/spider).
Code explanations and links for the model's checkpoints and datasets are on GitHub: [mRAT-SQL](https://github.com/C4AI/gap-text2sql)
Here is the [Hugging Face collection](https://huggingface.co/collections/Marchanjo/mrat-sql-65a671743bb0e70b416561f6), where you can download the model's checkpoints and datasets; but for a full understanding it is better to go to the GitHub repo [mRAT-SQL](https://github.com/C4AI/gap-text2sql).
# mRAT-SQL-FIT
## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention
Marcelo Archanjo Jose, Fabio Gagliardi Cozman
Long sequences of text are challenging in the context of transformers, due to the quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of table and column names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact-set-match accuracy from 0.718 to 0.736 on the validation set (Dev). Source code, evaluations, and checkpoints are available at: [mRAT-SQL](https://github.com/C4AI/gap-text2sql).
[Paper published in Springer Nature's International Journal of Information Technology](https://doi.org/10.1007/s41870-023-01342-3); [here is the SharedIt link](https://rdcu.be/dff19), and [here is the pre-print on arXiv](https://arxiv.org/abs/2306.14256).
# mRAT-SQL+GAP
## mRAT-SQL+GAP:A Portuguese Text-to-SQL Transformer
Marcelo Archanjo José, Fabio Gagliardi Cozman
The translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with the original and translated training datasets together, even if a single target language is desired. This multilingual BART model, fine-tuned with a double-size training dataset (English and Portuguese), achieved 83% of the baseline when making inferences on the Portuguese test dataset. This investigation can help other researchers produce Machine Learning results in languages other than English. Our multilingual-ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: [mRAT-SQL](https://github.com/C4AI/gap-text2sql).
BRACIS 2021: [paper published in Springer Lecture Notes in Computer Science](https://link.springer.com/chapter/10.1007%2F978-3-030-91699-2_35); [here is the pre-print on arXiv](https://arxiv.org/abs/2110.03546).
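The paper's central practical finding, training on the original and translated data together, amounts to simply concatenating the per-language Spider splits before fine-tuning. Below is a hedged sketch of that step; the file names are placeholders, not the repository's actual paths.

```python
from datasets import load_dataset, concatenate_datasets

# Placeholder file names; the real translated splits ship with the mRAT-SQL repo.
spider_en = load_dataset("json", data_files="spider_train_english.json", split="train")
spider_pt = load_dataset("json", data_files="spider_train_portuguese.json", split="train")

# Double-size training set: original English examples plus their translations.
train_set = concatenate_datasets([spider_en, spider_pt]).shuffle(seed=42)
print(len(train_set))  # twice the size of the English-only training split
```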
Based on: RAT-SQL+GAP: [Github](https://github.com/awslabs/gap-text2sql). Paper: [AAAI 2021 paper](https://arxiv.org/abs/2012.10309) | Marchanjo/spider-en-pt-es-fr-extra-3enr-3ptr-3esr-3frr | [
"license:cc-by-sa-4.0",
"arxiv:2306.14256",
"arxiv:2110.03546",
"arxiv:2012.10309",
"region:us"
] | 2024-01-14T22:31:23+00:00 | {"license": "cc-by-sa-4.0"} | 2024-01-16T12:39:46+00:00 | [
"2306.14256",
"2110.03546",
"2012.10309"
] | [] | TAGS
#license-cc-by-sa-4.0 #arxiv-2306.14256 #arxiv-2110.03546 #arxiv-2012.10309 #region-us
|
d4e3597a54259009bd3ba52c7cd135b1930c4fa0 | # Dataset Card for "autotrain-data-autotrain-tres"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | pedromigurasdev/autotrain-data-autotrain-tres | [
"region:us"
] | 2024-01-14T22:39:23+00:00 | {"dataset_info": {"features": [{"name": "autotrain_text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1145062, "num_examples": 758}, {"name": "validation", "num_bytes": 1145062, "num_examples": 758}], "download_size": 1344524, "dataset_size": 2290124}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}]} | 2024-01-14T22:39:25+00:00 | [] | [] | TAGS
#region-us
|
8e4a5ce0d07a6c5603c3a4181b5fe7e8388bc8eb |
# Dataset Card for Evaluation run of CallComply/SOLAR-10.7B-Instruct-v1.0-128k
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [CallComply/SOLAR-10.7B-Instruct-v1.0-128k](https://huggingface.co/CallComply/SOLAR-10.7B-Instruct-v1.0-128k) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_CallComply__SOLAR-10.7B-Instruct-v1.0-128k",
"harness_winogrande_5",
split="train")
```
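If you are unsure which configuration name to pass, the available ones can be listed first; this uses a standard `datasets` utility and is shown purely as a convenience:

```python
from datasets import get_dataset_config_names

# One configuration per evaluated task, plus the aggregated "results" config.
configs = get_dataset_config_names(
    "open-llm-leaderboard/details_CallComply__SOLAR-10.7B-Instruct-v1.0-128k"
)
print(configs[:5])
```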
## Latest results
These are the [latest results from run 2024-01-14T22:38:12.148949](https://huggingface.co/datasets/open-llm-leaderboard/details_CallComply__SOLAR-10.7B-Instruct-v1.0-128k/blob/main/results_2024-01-14T22-38-12.148949.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each task's results in its own configuration, with a "latest" split for each eval):
```python
{
"all": {
"acc": 0.5736345987046274,
"acc_stderr": 0.033417579618165875,
"acc_norm": 0.5822139213719528,
"acc_norm_stderr": 0.03421698352385503,
"mc1": 0.48592411260709917,
"mc1_stderr": 0.017496563717042793,
"mc2": 0.6542262778057006,
"mc2_stderr": 0.015681013574816827
},
"harness|arc:challenge|25": {
"acc": 0.6262798634812287,
"acc_stderr": 0.014137708601759091,
"acc_norm": 0.659556313993174,
"acc_norm_stderr": 0.013847460518892973
},
"harness|hellaswag|10": {
"acc": 0.6415056761601274,
"acc_stderr": 0.004785781979354868,
"acc_norm": 0.8434574785899224,
"acc_norm_stderr": 0.003626262805442223
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.26,
"acc_stderr": 0.0440844002276808,
"acc_norm": 0.26,
"acc_norm_stderr": 0.0440844002276808
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.562962962962963,
"acc_stderr": 0.04284958639753401,
"acc_norm": 0.562962962962963,
"acc_norm_stderr": 0.04284958639753401
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.625,
"acc_stderr": 0.039397364351956274,
"acc_norm": 0.625,
"acc_norm_stderr": 0.039397364351956274
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.59,
"acc_stderr": 0.04943110704237102,
"acc_norm": 0.59,
"acc_norm_stderr": 0.04943110704237102
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6188679245283019,
"acc_stderr": 0.02989060968628664,
"acc_norm": 0.6188679245283019,
"acc_norm_stderr": 0.02989060968628664
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.6527777777777778,
"acc_stderr": 0.039812405437178615,
"acc_norm": 0.6527777777777778,
"acc_norm_stderr": 0.039812405437178615
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.4,
"acc_stderr": 0.04923659639173309,
"acc_norm": 0.4,
"acc_norm_stderr": 0.04923659639173309
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.48,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.48,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.35,
"acc_stderr": 0.0479372485441102,
"acc_norm": 0.35,
"acc_norm_stderr": 0.0479372485441102
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6184971098265896,
"acc_stderr": 0.03703851193099522,
"acc_norm": 0.6184971098265896,
"acc_norm_stderr": 0.03703851193099522
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.38235294117647056,
"acc_stderr": 0.04835503696107223,
"acc_norm": 0.38235294117647056,
"acc_norm_stderr": 0.04835503696107223
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5659574468085107,
"acc_stderr": 0.03240038086792747,
"acc_norm": 0.5659574468085107,
"acc_norm_stderr": 0.03240038086792747
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.3508771929824561,
"acc_stderr": 0.04489539350270699,
"acc_norm": 0.3508771929824561,
"acc_norm_stderr": 0.04489539350270699
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5655172413793104,
"acc_stderr": 0.04130740879555497,
"acc_norm": 0.5655172413793104,
"acc_norm_stderr": 0.04130740879555497
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.3412698412698413,
"acc_stderr": 0.024419234966819064,
"acc_norm": 0.3412698412698413,
"acc_norm_stderr": 0.024419234966819064
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.35714285714285715,
"acc_stderr": 0.04285714285714281,
"acc_norm": 0.35714285714285715,
"acc_norm_stderr": 0.04285714285714281
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.38,
"acc_stderr": 0.04878317312145633,
"acc_norm": 0.38,
"acc_norm_stderr": 0.04878317312145633
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.6387096774193548,
"acc_stderr": 0.027327548447957543,
"acc_norm": 0.6387096774193548,
"acc_norm_stderr": 0.027327548447957543
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.4088669950738916,
"acc_stderr": 0.034590588158832314,
"acc_norm": 0.4088669950738916,
"acc_norm_stderr": 0.034590588158832314
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.6,
"acc_stderr": 0.049236596391733084,
"acc_norm": 0.6,
"acc_norm_stderr": 0.049236596391733084
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.6424242424242425,
"acc_stderr": 0.03742597043806585,
"acc_norm": 0.6424242424242425,
"acc_norm_stderr": 0.03742597043806585
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7676767676767676,
"acc_stderr": 0.03008862949021749,
"acc_norm": 0.7676767676767676,
"acc_norm_stderr": 0.03008862949021749
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.844559585492228,
"acc_stderr": 0.02614848346915332,
"acc_norm": 0.844559585492228,
"acc_norm_stderr": 0.02614848346915332
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.5769230769230769,
"acc_stderr": 0.025049197876042338,
"acc_norm": 0.5769230769230769,
"acc_norm_stderr": 0.025049197876042338
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.2814814814814815,
"acc_stderr": 0.027420019350945284,
"acc_norm": 0.2814814814814815,
"acc_norm_stderr": 0.027420019350945284
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6302521008403361,
"acc_stderr": 0.03135709599613591,
"acc_norm": 0.6302521008403361,
"acc_norm_stderr": 0.03135709599613591
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3443708609271523,
"acc_stderr": 0.038796870240733264,
"acc_norm": 0.3443708609271523,
"acc_norm_stderr": 0.038796870240733264
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.763302752293578,
"acc_stderr": 0.018224078117299106,
"acc_norm": 0.763302752293578,
"acc_norm_stderr": 0.018224078117299106
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.4212962962962963,
"acc_stderr": 0.03367462138896078,
"acc_norm": 0.4212962962962963,
"acc_norm_stderr": 0.03367462138896078
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7352941176470589,
"acc_stderr": 0.030964517926923403,
"acc_norm": 0.7352941176470589,
"acc_norm_stderr": 0.030964517926923403
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7341772151898734,
"acc_stderr": 0.02875679962965834,
"acc_norm": 0.7341772151898734,
"acc_norm_stderr": 0.02875679962965834
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6367713004484304,
"acc_stderr": 0.032277904428505,
"acc_norm": 0.6367713004484304,
"acc_norm_stderr": 0.032277904428505
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.6946564885496184,
"acc_stderr": 0.040393149787245605,
"acc_norm": 0.6946564885496184,
"acc_norm_stderr": 0.040393149787245605
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8016528925619835,
"acc_stderr": 0.03640118271990948,
"acc_norm": 0.8016528925619835,
"acc_norm_stderr": 0.03640118271990948
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.75,
"acc_stderr": 0.04186091791394607,
"acc_norm": 0.75,
"acc_norm_stderr": 0.04186091791394607
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7177914110429447,
"acc_stderr": 0.03536117886664742,
"acc_norm": 0.7177914110429447,
"acc_norm_stderr": 0.03536117886664742
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.41964285714285715,
"acc_stderr": 0.046840993210771065,
"acc_norm": 0.41964285714285715,
"acc_norm_stderr": 0.046840993210771065
},
"harness|hendrycksTest-management|5": {
"acc": 0.7475728155339806,
"acc_stderr": 0.04301250399690878,
"acc_norm": 0.7475728155339806,
"acc_norm_stderr": 0.04301250399690878
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8205128205128205,
"acc_stderr": 0.025140935950335445,
"acc_norm": 0.8205128205128205,
"acc_norm_stderr": 0.025140935950335445
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.63,
"acc_stderr": 0.04852365870939098,
"acc_norm": 0.63,
"acc_norm_stderr": 0.04852365870939098
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.7547892720306514,
"acc_stderr": 0.015384352284543932,
"acc_norm": 0.7547892720306514,
"acc_norm_stderr": 0.015384352284543932
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.6734104046242775,
"acc_stderr": 0.025248264774242836,
"acc_norm": 0.6734104046242775,
"acc_norm_stderr": 0.025248264774242836
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.31620111731843575,
"acc_stderr": 0.015551673652172544,
"acc_norm": 0.31620111731843575,
"acc_norm_stderr": 0.015551673652172544
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.630718954248366,
"acc_stderr": 0.027634176689602653,
"acc_norm": 0.630718954248366,
"acc_norm_stderr": 0.027634176689602653
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6077170418006431,
"acc_stderr": 0.027731258647011994,
"acc_norm": 0.6077170418006431,
"acc_norm_stderr": 0.027731258647011994
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.6265432098765432,
"acc_stderr": 0.026915003011380154,
"acc_norm": 0.6265432098765432,
"acc_norm_stderr": 0.026915003011380154
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4148936170212766,
"acc_stderr": 0.029392236584612503,
"acc_norm": 0.4148936170212766,
"acc_norm_stderr": 0.029392236584612503
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.423728813559322,
"acc_stderr": 0.012620785155885992,
"acc_norm": 0.423728813559322,
"acc_norm_stderr": 0.012620785155885992
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.5073529411764706,
"acc_stderr": 0.030369552523902173,
"acc_norm": 0.5073529411764706,
"acc_norm_stderr": 0.030369552523902173
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6045751633986928,
"acc_stderr": 0.019780465954777518,
"acc_norm": 0.6045751633986928,
"acc_norm_stderr": 0.019780465954777518
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6181818181818182,
"acc_stderr": 0.04653429807913508,
"acc_norm": 0.6181818181818182,
"acc_norm_stderr": 0.04653429807913508
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7142857142857143,
"acc_stderr": 0.0289205832206756,
"acc_norm": 0.7142857142857143,
"acc_norm_stderr": 0.0289205832206756
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.4975124378109453,
"acc_stderr": 0.03535490150137288,
"acc_norm": 0.4975124378109453,
"acc_norm_stderr": 0.03535490150137288
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.81,
"acc_stderr": 0.03942772444036625,
"acc_norm": 0.81,
"acc_norm_stderr": 0.03942772444036625
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5,
"acc_stderr": 0.03892494720807614,
"acc_norm": 0.5,
"acc_norm_stderr": 0.03892494720807614
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.7485380116959064,
"acc_stderr": 0.033275044238468436,
"acc_norm": 0.7485380116959064,
"acc_norm_stderr": 0.033275044238468436
},
"harness|truthfulqa:mc|0": {
"mc1": 0.48592411260709917,
"mc1_stderr": 0.017496563717042793,
"mc2": 0.6542262778057006,
"mc2_stderr": 0.015681013574816827
},
"harness|winogrande|5": {
"acc": 0.8050513022888713,
"acc_stderr": 0.011134099415938256
},
"harness|gsm8k|5": {
"acc": 0.0712661106899166,
"acc_stderr": 0.0070864621279544985
}
}
```
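As a quick sanity check, the six headline metrics can be pulled out of the results above and averaged, mirroring the leaderboard's published recipe (ARC and HellaSwag use `acc_norm`, TruthfulQA uses `mc2`, the rest use `acc`); the snippet assumes the JSON has already been parsed into a Python dict named `results`:

```python
# Assumes `results` holds the parsed per-task dict shown above.
headline = {
    "ARC":        results["harness|arc:challenge|25"]["acc_norm"],
    "HellaSwag":  results["harness|hellaswag|10"]["acc_norm"],
    "TruthfulQA": results["harness|truthfulqa:mc|0"]["mc2"],
    "Winogrande": results["harness|winogrande|5"]["acc"],
    "GSM8K":      results["harness|gsm8k|5"]["acc"],
}
# MMLU is the mean accuracy over the hendrycksTest subtasks.
mmlu = [v["acc"] for k, v in results.items() if k.startswith("harness|hendrycksTest")]
headline["MMLU"] = sum(mmlu) / len(mmlu)
print({k: round(v, 4) for k, v in headline.items()})
print("average:", round(sum(headline.values()) / len(headline), 4))
```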
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | open-llm-leaderboard/details_CallComply__SOLAR-10.7B-Instruct-v1.0-128k | [
"region:us"
] | 2024-01-14T22:40:30+00:00 | {"pretty_name": "Evaluation run of CallComply/SOLAR-10.7B-Instruct-v1.0-128k", "dataset_summary": "Dataset automatically created during the evaluation run of model [CallComply/SOLAR-10.7B-Instruct-v1.0-128k](https://huggingface.co/CallComply/SOLAR-10.7B-Instruct-v1.0-128k) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_CallComply__SOLAR-10.7B-Instruct-v1.0-128k\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-14T22:38:12.148949](https://huggingface.co/datasets/open-llm-leaderboard/details_CallComply__SOLAR-10.7B-Instruct-v1.0-128k/blob/main/results_2024-01-14T22-38-12.148949.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5736345987046274,\n \"acc_stderr\": 0.033417579618165875,\n \"acc_norm\": 0.5822139213719528,\n \"acc_norm_stderr\": 0.03421698352385503,\n \"mc1\": 0.48592411260709917,\n \"mc1_stderr\": 0.017496563717042793,\n \"mc2\": 0.6542262778057006,\n \"mc2_stderr\": 0.015681013574816827\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6262798634812287,\n \"acc_stderr\": 0.014137708601759091,\n \"acc_norm\": 0.659556313993174,\n \"acc_norm_stderr\": 0.013847460518892973\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6415056761601274,\n \"acc_stderr\": 0.004785781979354868,\n \"acc_norm\": 0.8434574785899224,\n \"acc_norm_stderr\": 0.003626262805442223\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.26,\n \"acc_stderr\": 0.0440844002276808,\n \"acc_norm\": 0.26,\n \"acc_norm_stderr\": 0.0440844002276808\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.562962962962963,\n \"acc_stderr\": 0.04284958639753401,\n \"acc_norm\": 0.562962962962963,\n \"acc_norm_stderr\": 0.04284958639753401\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.625,\n \"acc_stderr\": 0.039397364351956274,\n \"acc_norm\": 0.625,\n \"acc_norm_stderr\": 0.039397364351956274\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.59,\n \"acc_stderr\": 0.04943110704237102,\n \"acc_norm\": 0.59,\n \"acc_norm_stderr\": 0.04943110704237102\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6188679245283019,\n \"acc_stderr\": 0.02989060968628664,\n \"acc_norm\": 0.6188679245283019,\n \"acc_norm_stderr\": 0.02989060968628664\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6527777777777778,\n \"acc_stderr\": 0.039812405437178615,\n \"acc_norm\": 0.6527777777777778,\n \"acc_norm_stderr\": 0.039812405437178615\n },\n 
\"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.4,\n \"acc_stderr\": 0.04923659639173309,\n \"acc_norm\": 0.4,\n \"acc_norm_stderr\": 0.04923659639173309\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.48,\n \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.48,\n \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.35,\n \"acc_stderr\": 0.0479372485441102,\n \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6184971098265896,\n \"acc_stderr\": 0.03703851193099522,\n \"acc_norm\": 0.6184971098265896,\n \"acc_norm_stderr\": 0.03703851193099522\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.38235294117647056,\n \"acc_stderr\": 0.04835503696107223,\n \"acc_norm\": 0.38235294117647056,\n \"acc_norm_stderr\": 0.04835503696107223\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5659574468085107,\n \"acc_stderr\": 0.03240038086792747,\n \"acc_norm\": 0.5659574468085107,\n \"acc_norm_stderr\": 0.03240038086792747\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.3508771929824561,\n \"acc_stderr\": 0.04489539350270699,\n \"acc_norm\": 0.3508771929824561,\n \"acc_norm_stderr\": 0.04489539350270699\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5655172413793104,\n \"acc_stderr\": 0.04130740879555497,\n \"acc_norm\": 0.5655172413793104,\n \"acc_norm_stderr\": 0.04130740879555497\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.3412698412698413,\n \"acc_stderr\": 0.024419234966819064,\n \"acc_norm\": 0.3412698412698413,\n \"acc_norm_stderr\": 0.024419234966819064\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.35714285714285715,\n \"acc_stderr\": 0.04285714285714281,\n \"acc_norm\": 0.35714285714285715,\n \"acc_norm_stderr\": 0.04285714285714281\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.38,\n \"acc_stderr\": 0.04878317312145633,\n \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.04878317312145633\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.6387096774193548,\n \"acc_stderr\": 0.027327548447957543,\n \"acc_norm\": 0.6387096774193548,\n \"acc_norm_stderr\": 0.027327548447957543\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.4088669950738916,\n \"acc_stderr\": 0.034590588158832314,\n \"acc_norm\": 0.4088669950738916,\n \"acc_norm_stderr\": 0.034590588158832314\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.6,\n \"acc_stderr\": 0.049236596391733084,\n \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.049236596391733084\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.6424242424242425,\n \"acc_stderr\": 0.03742597043806585,\n \"acc_norm\": 0.6424242424242425,\n \"acc_norm_stderr\": 0.03742597043806585\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7676767676767676,\n \"acc_stderr\": 0.03008862949021749,\n \"acc_norm\": 0.7676767676767676,\n \"acc_norm_stderr\": 0.03008862949021749\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.844559585492228,\n \"acc_stderr\": 0.02614848346915332,\n \"acc_norm\": 0.844559585492228,\n \"acc_norm_stderr\": 
0.02614848346915332\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.5769230769230769,\n \"acc_stderr\": 0.025049197876042338,\n \"acc_norm\": 0.5769230769230769,\n \"acc_norm_stderr\": 0.025049197876042338\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.2814814814814815,\n \"acc_stderr\": 0.027420019350945284,\n \"acc_norm\": 0.2814814814814815,\n \"acc_norm_stderr\": 0.027420019350945284\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6302521008403361,\n \"acc_stderr\": 0.03135709599613591,\n \"acc_norm\": 0.6302521008403361,\n \"acc_norm_stderr\": 0.03135709599613591\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3443708609271523,\n \"acc_stderr\": 0.038796870240733264,\n \"acc_norm\": 0.3443708609271523,\n \"acc_norm_stderr\": 0.038796870240733264\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.763302752293578,\n \"acc_stderr\": 0.018224078117299106,\n \"acc_norm\": 0.763302752293578,\n \"acc_norm_stderr\": 0.018224078117299106\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.4212962962962963,\n \"acc_stderr\": 0.03367462138896078,\n \"acc_norm\": 0.4212962962962963,\n \"acc_norm_stderr\": 0.03367462138896078\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7352941176470589,\n \"acc_stderr\": 0.030964517926923403,\n \"acc_norm\": 0.7352941176470589,\n \"acc_norm_stderr\": 0.030964517926923403\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7341772151898734,\n \"acc_stderr\": 0.02875679962965834,\n \"acc_norm\": 0.7341772151898734,\n \"acc_norm_stderr\": 0.02875679962965834\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6367713004484304,\n \"acc_stderr\": 0.032277904428505,\n \"acc_norm\": 0.6367713004484304,\n \"acc_norm_stderr\": 0.032277904428505\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.6946564885496184,\n \"acc_stderr\": 0.040393149787245605,\n \"acc_norm\": 0.6946564885496184,\n \"acc_norm_stderr\": 0.040393149787245605\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.8016528925619835,\n \"acc_stderr\": 0.03640118271990948,\n \"acc_norm\": 0.8016528925619835,\n \"acc_norm_stderr\": 0.03640118271990948\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.75,\n \"acc_stderr\": 0.04186091791394607,\n \"acc_norm\": 0.75,\n \"acc_norm_stderr\": 0.04186091791394607\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7177914110429447,\n \"acc_stderr\": 0.03536117886664742,\n \"acc_norm\": 0.7177914110429447,\n \"acc_norm_stderr\": 0.03536117886664742\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.41964285714285715,\n \"acc_stderr\": 0.046840993210771065,\n \"acc_norm\": 0.41964285714285715,\n \"acc_norm_stderr\": 0.046840993210771065\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7475728155339806,\n \"acc_stderr\": 0.04301250399690878,\n \"acc_norm\": 0.7475728155339806,\n \"acc_norm_stderr\": 0.04301250399690878\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8205128205128205,\n \"acc_stderr\": 0.025140935950335445,\n \"acc_norm\": 0.8205128205128205,\n \"acc_norm_stderr\": 0.025140935950335445\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.63,\n \"acc_stderr\": 0.04852365870939098,\n \"acc_norm\": 0.63,\n \"acc_norm_stderr\": 0.04852365870939098\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 
0.7547892720306514,\n \"acc_stderr\": 0.015384352284543932,\n \"acc_norm\": 0.7547892720306514,\n \"acc_norm_stderr\": 0.015384352284543932\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.6734104046242775,\n \"acc_stderr\": 0.025248264774242836,\n \"acc_norm\": 0.6734104046242775,\n \"acc_norm_stderr\": 0.025248264774242836\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.31620111731843575,\n \"acc_stderr\": 0.015551673652172544,\n \"acc_norm\": 0.31620111731843575,\n \"acc_norm_stderr\": 0.015551673652172544\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.630718954248366,\n \"acc_stderr\": 0.027634176689602653,\n \"acc_norm\": 0.630718954248366,\n \"acc_norm_stderr\": 0.027634176689602653\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6077170418006431,\n \"acc_stderr\": 0.027731258647011994,\n \"acc_norm\": 0.6077170418006431,\n \"acc_norm_stderr\": 0.027731258647011994\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.6265432098765432,\n \"acc_stderr\": 0.026915003011380154,\n \"acc_norm\": 0.6265432098765432,\n \"acc_norm_stderr\": 0.026915003011380154\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.4148936170212766,\n \"acc_stderr\": 0.029392236584612503,\n \"acc_norm\": 0.4148936170212766,\n \"acc_norm_stderr\": 0.029392236584612503\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.423728813559322,\n \"acc_stderr\": 0.012620785155885992,\n \"acc_norm\": 0.423728813559322,\n \"acc_norm_stderr\": 0.012620785155885992\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.5073529411764706,\n \"acc_stderr\": 0.030369552523902173,\n \"acc_norm\": 0.5073529411764706,\n \"acc_norm_stderr\": 0.030369552523902173\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6045751633986928,\n \"acc_stderr\": 0.019780465954777518,\n \"acc_norm\": 0.6045751633986928,\n \"acc_norm_stderr\": 0.019780465954777518\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6181818181818182,\n \"acc_stderr\": 0.04653429807913508,\n \"acc_norm\": 0.6181818181818182,\n \"acc_norm_stderr\": 0.04653429807913508\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7142857142857143,\n \"acc_stderr\": 0.0289205832206756,\n \"acc_norm\": 0.7142857142857143,\n \"acc_norm_stderr\": 0.0289205832206756\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.4975124378109453,\n \"acc_stderr\": 0.03535490150137288,\n \"acc_norm\": 0.4975124378109453,\n \"acc_norm_stderr\": 0.03535490150137288\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.81,\n \"acc_stderr\": 0.03942772444036625,\n \"acc_norm\": 0.81,\n \"acc_norm_stderr\": 0.03942772444036625\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.03892494720807614,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.03892494720807614\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.7485380116959064,\n \"acc_stderr\": 0.033275044238468436,\n \"acc_norm\": 0.7485380116959064,\n \"acc_norm_stderr\": 0.033275044238468436\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.48592411260709917,\n \"mc1_stderr\": 0.017496563717042793,\n \"mc2\": 0.6542262778057006,\n \"mc2_stderr\": 0.015681013574816827\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8050513022888713,\n \"acc_stderr\": 0.011134099415938256\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0712661106899166,\n \"acc_stderr\": 0.0070864621279544985\n }\n}\n```", "repo_url": 
"https://huggingface.co/CallComply/SOLAR-10.7B-Instruct-v1.0-128k", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_14T22_38_12.148949", "path": ["**/details_harness|arc:challenge|25_2024-01-14T22-38-12.148949.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-14T22-38-12.148949.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_14T22_38_12.148949", "path": ["**/details_harness|gsm8k|5_2024-01-14T22-38-12.148949.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-14T22-38-12.148949.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_14T22_38_12.148949", "path": ["**/details_harness|hellaswag|10_2024-01-14T22-38-12.148949.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-14T22-38-12.148949.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_14T22_38_12.148949", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T22-38-12.148949.parquet", 
"**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-14T22-38-12.148949.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T22-38-12.148949.parquet", 
"**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-14T22-38-12.148949.parquet", 
"**/details_harness|hendrycksTest-philosophy|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-14T22-38-12.148949.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-14T22-38-12.148949.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_14T22_38_12.148949", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T22-38-12.148949.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T22-38-12.148949.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_14T22_38_12.148949", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-14T22-38-12.148949.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-14T22-38-12.148949.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_14T22_38_12.148949", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-14T22-38-12.148949.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-14T22-38-12.148949.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_14T22_38_12.148949", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T22-38-12.148949.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T22-38-12.148949.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_14T22_38_12.148949", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T22-38-12.148949.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T22-38-12.148949.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_14T22_38_12.148949", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-14T22-38-12.148949.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-14T22-38-12.148949.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_14T22_38_12.148949", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T22-38-12.148949.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T22-38-12.148949.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_14T22_38_12.148949", "path": 
["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T22-38-12.148949.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T22-38-12.148949.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_14T22_38_12.148949", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T22-38-12.148949.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T22-38-12.148949.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_14T22_38_12.148949", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T22-38-12.148949.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T22-38-12.148949.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_14T22_38_12.148949", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-14T22-38-12.148949.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-14T22-38-12.148949.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_14T22_38_12.148949", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-14T22-38-12.148949.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-14T22-38-12.148949.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_14T22_38_12.148949", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T22-38-12.148949.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T22-38-12.148949.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_14T22_38_12.148949", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-14T22-38-12.148949.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-14T22-38-12.148949.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_14T22_38_12.148949", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T22-38-12.148949.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T22-38-12.148949.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_14T22_38_12.148949", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T22-38-12.148949.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T22-38-12.148949.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_01_14T22_38_12.148949", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T22-38-12.148949.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T22-38-12.148949.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_14T22_38_12.148949", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-14T22-38-12.148949.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-global_facts|5_2024-01-14T22-38-12.148949.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_14T22_38_12.148949", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T22-38-12.148949.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T22-38-12.148949.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_14T22_38_12.148949", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T22-38-12.148949.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T22-38-12.148949.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_14T22_38_12.148949", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T22-38-12.148949.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T22-38-12.148949.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_14T22_38_12.148949", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T22-38-12.148949.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T22-38-12.148949.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_14T22_38_12.148949", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T22-38-12.148949.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T22-38-12.148949.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_14T22_38_12.148949", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T22-38-12.148949.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T22-38-12.148949.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_14T22_38_12.148949", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T22-38-12.148949.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T22-38-12.148949.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_14T22_38_12.148949", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T22-38-12.148949.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T22-38-12.148949.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_14T22_38_12.148949", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T22-38-12.148949.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T22-38-12.148949.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_14T22_38_12.148949", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T22-38-12.148949.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T22-38-12.148949.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_14T22_38_12.148949", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T22-38-12.148949.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T22-38-12.148949.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_14T22_38_12.148949", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T22-38-12.148949.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T22-38-12.148949.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_14T22_38_12.148949", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T22-38-12.148949.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T22-38-12.148949.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_14T22_38_12.148949", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T22-38-12.148949.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T22-38-12.148949.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_14T22_38_12.148949", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-14T22-38-12.148949.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-14T22-38-12.148949.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_14T22_38_12.148949", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T22-38-12.148949.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T22-38-12.148949.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_14T22_38_12.148949", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-14T22-38-12.148949.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-14T22-38-12.148949.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_14T22_38_12.148949", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T22-38-12.148949.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T22-38-12.148949.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_14T22_38_12.148949", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T22-38-12.148949.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T22-38-12.148949.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_14T22_38_12.148949", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T22-38-12.148949.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T22-38-12.148949.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_01_14T22_38_12.148949", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-14T22-38-12.148949.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-14T22-38-12.148949.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_14T22_38_12.148949", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-14T22-38-12.148949.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-14T22-38-12.148949.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_14T22_38_12.148949", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T22-38-12.148949.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T22-38-12.148949.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_14T22_38_12.148949", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T22-38-12.148949.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T22-38-12.148949.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_14T22_38_12.148949", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T22-38-12.148949.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T22-38-12.148949.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_14T22_38_12.148949", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T22-38-12.148949.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T22-38-12.148949.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_01_14T22_38_12.148949", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-14T22-38-12.148949.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-14T22-38-12.148949.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_14T22_38_12.148949", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-14T22-38-12.148949.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-14T22-38-12.148949.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_14T22_38_12.148949", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-14T22-38-12.148949.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-14T22-38-12.148949.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_14T22_38_12.148949", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T22-38-12.148949.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T22-38-12.148949.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_14T22_38_12.148949", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2024-01-14T22-38-12.148949.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-14T22-38-12.148949.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_14T22_38_12.148949", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T22-38-12.148949.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T22-38-12.148949.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_14T22_38_12.148949", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T22-38-12.148949.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T22-38-12.148949.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_14T22_38_12.148949", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-14T22-38-12.148949.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-14T22-38-12.148949.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_14T22_38_12.148949", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-14T22-38-12.148949.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-14T22-38-12.148949.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_14T22_38_12.148949", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-14T22-38-12.148949.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-14T22-38-12.148949.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_14T22_38_12.148949", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T22-38-12.148949.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T22-38-12.148949.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_14T22_38_12.148949", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-14T22-38-12.148949.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-14T22-38-12.148949.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_14T22_38_12.148949", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-14T22-38-12.148949.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-14T22-38-12.148949.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_14T22_38_12.148949", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-14T22-38-12.148949.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-14T22-38-12.148949.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_14T22_38_12.148949", "path": ["**/details_harness|winogrande|5_2024-01-14T22-38-12.148949.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-14T22-38-12.148949.parquet"]}]}, {"config_name": "results", "data_files": [{"split": 
"2024_01_14T22_38_12.148949", "path": ["results_2024-01-14T22-38-12.148949.parquet"]}, {"split": "latest", "path": ["results_2024-01-14T22-38-12.148949.parquet"]}]}]} | 2024-01-14T22:40:49+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of CallComply/SOLAR-10.7B-Instruct-v1.0-128k
Dataset automatically created during the evaluation run of model CallComply/SOLAR-10.7B-Instruct-v1.0-128k on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
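A minimal sketch, mirroring the loader snippet used by the other leaderboard detail cards in this dump (the repository id below is inferred from the model name and should be treated as an assumption):

```python
from datasets import load_dataset

# Repository id inferred from the model name; the config name comes from
# this card's own metadata (e.g. "harness_winogrande_5").
data = load_dataset("open-llm-leaderboard/details_CallComply__SOLAR-10.7B-Instruct-v1.0-128k",
	"harness_winogrande_5",
	split="train")
```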
## Latest results
These are the latest results from run 2024-01-14T22:38:12.148949 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split for each eval):
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Evaluation run of CallComply/SOLAR-10.7B-Instruct-v1.0-128k\n\n\n\nDataset automatically created during the evaluation run of model CallComply/SOLAR-10.7B-Instruct-v1.0-128k on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-01-14T22:38:12.148949(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of CallComply/SOLAR-10.7B-Instruct-v1.0-128k\n\n\n\nDataset automatically created during the evaluation run of model CallComply/SOLAR-10.7B-Instruct-v1.0-128k on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-01-14T22:38:12.148949(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] |
c632ea8e45be1687f511a2d02818538073bc985c | Distributed under the Creative Commons BY-SA 4.0 license, respecting the ShareAlike terms of the [Spider Dataset](https://yale-lily.github.io/spider).
Code explanations and links for the model's checkpoints and datasets are on GitHub: [mRAT-SQL](https://github.com/C4AI/gap-text2sql)
Here is the [Hugging Face collection](https://huggingface.co/collections/Marchanjo/mrat-sql-65a671743bb0e70b416561f6), where you can download the model's checkpoints and datasets; to understand them, though, it is better to go to the GitHub repo [mRAT-SQL](https://github.com/C4AI/gap-text2sql).
# mRAT-SQL-FIT
## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention
Marcelo Archanjo Jose, Fabio Gagliardi Cozman
Long sequences of text are challenging in the context of transformers, due to quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (since techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of table and column names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy from 0.718 to 0.736 on the validation dataset (Dev). Source code, evaluations, and checkpoints are available at: [mRAT-SQL](https://github.com/C4AI/gap-text2sql).
[paper published in Springer-Nature - International Journal of Information Technology](https://doi.org/10.1007/s41870-023-01342-3), [here the SharedIt link](https://rdcu.be/dff19). [here the pre-print in arXiv](https://arxiv.org/abs/2306.14256).
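Purely as an illustration of the schema-pruning idea described above (a naive keyword-matching sketch, not the authors' implementation; their code is in the GitHub repo linked above):

```python
# Hypothetical sketch of database schema pruning: keep only the tables and
# columns whose names appear in the question, so the serialized schema that
# gets concatenated with the question fits the transformer's input window.
def prune_schema(question: str, schema: dict) -> dict:
    """schema maps each table name to a list of its column names."""
    q = question.lower()
    pruned = {}
    for table, columns in schema.items():
        # Naive substring matching; real systems use more robust linking.
        kept = [c for c in columns if c.lower().replace("_", " ") in q]
        if table.lower() in q or kept:
            # Keep the table if its name or any of its columns is mentioned.
            pruned[table] = kept or columns
    return pruned

schema = {
    "singer": ["name", "age", "country"],
    "stadium": ["location", "capacity"],
}
print(prune_schema("What is the name and age of each singer?", schema))
# {'singer': ['name', 'age']}  (the unrelated 'stadium' table is dropped)
```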
# mRAT-SQL+GAP
## mRAT-SQL+GAP:A Portuguese Text-to-SQL Transformer
Marcelo Archanjo José, Fabio Gagliardi Cozman
The translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with original and translated training datasets together, even if a single target language is desired. This multilingual BART model fine-tuned with a double-size training dataset (English and Portuguese) achieved 83% of the baseline, making inferences for the Portuguese test dataset. This investigation can help other researchers to produce results in Machine Learning in a language different from English. Our multilingual ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: [mRAT-SQL](https://github.com/C4AI/gap-text2sql).
BRACIS 2021: [paper published in Springer Lecture Notes in Computer Science](https://link.springer.com/chapter/10.1007%2F978-3-030-91699-2_35), [here the pre-print in arXiv](https://arxiv.org/abs/2110.03546).
Based on: RAT-SQL+GAP: [Github](https://github.com/awslabs/gap-text2sql). Paper: [AAAI 2021 paper](https://arxiv.org/abs/2012.10309) | Marchanjo/spider-FIT-en | [
"license:cc-by-sa-4.0",
"arxiv:2306.14256",
"arxiv:2110.03546",
"arxiv:2012.10309",
"region:us"
] | 2024-01-14T22:50:27+00:00 | {"license": "cc-by-sa-4.0"} | 2024-01-16T12:42:58+00:00 | [
"2306.14256",
"2110.03546",
"2012.10309"
] | [] | TAGS
#license-cc-by-sa-4.0 #arxiv-2306.14256 #arxiv-2110.03546 #arxiv-2012.10309 #region-us
| Distributed under the Creative Commons BY-SA 4.0 license, respecting the ShareAlike terms of the Spider Dataset.
Code explanations and links for the model's checkpoints and datasets are on GitHub: mRAT-SQL
Here is the Hugging Face collection, where you can download the model's checkpoints and datasets; to understand them, though, it is better to go to the GitHub repo mRAT-SQL.
# mRAT-SQL-FIT
## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention
Marcelo Archanjo Jose, Fabio Gagliardi Cozman
Long sequences of text are challenging in the context of transformers, due to quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (since techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of table and column names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy from 0.718 to 0.736 on the validation dataset (Dev). Source code, evaluations, and checkpoints are available at: mRAT-SQL.
paper published in Springer-Nature - International Journal of Information Technology, here the SharedIt link. here the pre-print in arXiv.
# mRAT-SQL+GAP
## mRAT-SQL+GAP:A Portuguese Text-to-SQL Transformer
Marcelo Archanjo José, Fabio Gagliardi Cozman
The translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with original and translated training datasets together, even if a single target language is desired. This multilingual BART model fine-tuned with a double-size training dataset (English and Portuguese) achieved 83% of the baseline, making inferences for the Portuguese test dataset. This investigation can help other researchers to produce results in Machine Learning in a language different from English. Our multilingual ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: mRAT-SQL.
BRACIS 2021: paper published in Springer Lecture Notes in Computer Science, here the pre-print in arXiv.
Based on: RAT-SQL+GAP: Github. Paper: AAAI 2021 paper | [
"# mRAT-SQL-FIT",
"## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention\nMarcelo Archanjo Jose, Fabio Gagliardi Cozman\n\nLong sequences of text are challenging in the context of transformers, due to quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (as techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of tables and columns names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy results from 0.718 to 0.736 in a validation dataset (Dev). Source code, evaluations, and checkpoints are available at: mRAT-SQL.\n\npaper published in Springer-Nature - International Journal of Information Technology, here the SharedIt link. here the pre-print in arXiv.",
"# mRAT-SQL+GAP",
"## mRAT-SQL+GAP:A Portuguese Text-to-SQL Transformer\nMarcelo Archanjo José, Fabio Gagliardi Cozman\n\nThe translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with original and translated training datasets together, even if a single target language is desired. This multilingual BART model fine-tuned with a double-size training dataset (English and Portuguese) achieved 83% of the baseline, making inferences for the Portuguese test dataset. This investigation can help other researchers to produce results in Machine Learning in a language different from English. Our multilingual ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: mRAT-SQL.\n\nBRACIS 2021: paper published in Springer Lecture Notes in Computer Science, here the pre-print in arXiv.\n\nBased on: RAT-SQL+GAP: Github. Paper: AAAI 2021 paper"
] | [
"TAGS\n#license-cc-by-sa-4.0 #arxiv-2306.14256 #arxiv-2110.03546 #arxiv-2012.10309 #region-us \n",
"# mRAT-SQL-FIT",
"## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention\nMarcelo Archanjo Jose, Fabio Gagliardi Cozman\n\nLong sequences of text are challenging in the context of transformers, due to quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (as techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of tables and columns names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy results from 0.718 to 0.736 in a validation dataset (Dev). Source code, evaluations, and checkpoints are available at: mRAT-SQL.\n\npaper published in Springer-Nature - International Journal of Information Technology, here the SharedIt link. here the pre-print in arXiv.",
"# mRAT-SQL+GAP",
"## mRAT-SQL+GAP:A Portuguese Text-to-SQL Transformer\nMarcelo Archanjo José, Fabio Gagliardi Cozman\n\nThe translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with original and translated training datasets together, even if a single target language is desired. This multilingual BART model fine-tuned with a double-size training dataset (English and Portuguese) achieved 83% of the baseline, making inferences for the Portuguese test dataset. This investigation can help other researchers to produce results in Machine Learning in a language different from English. Our multilingual ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: mRAT-SQL.\n\nBRACIS 2021: paper published in Springer Lecture Notes in Computer Science, here the pre-print in arXiv.\n\nBased on: RAT-SQL+GAP: Github. Paper: AAAI 2021 paper"
] |
91dba49e507b26408827372c45e6899f7b4fb59a |
# Dataset Card for Evaluation run of CallComply/Starling-LM-11B-alpha
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [CallComply/Starling-LM-11B-alpha](https://huggingface.co/CallComply/Starling-LM-11B-alpha) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_CallComply__Starling-LM-11B-alpha",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2024-01-14T22:50:55.626486](https://huggingface.co/datasets/open-llm-leaderboard/details_CallComply__Starling-LM-11B-alpha/blob/main/results_2024-01-14T22-50-55.626486.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6124497978149351,
"acc_stderr": 0.032857819921299845,
"acc_norm": 0.618390298674969,
"acc_norm_stderr": 0.03352975999467289,
"mc1": 0.25703794369645044,
"mc1_stderr": 0.01529807750948508,
"mc2": 0.4153002055665266,
"mc2_stderr": 0.014702058713161457
},
"harness|arc:challenge|25": {
"acc": 0.5639931740614335,
"acc_stderr": 0.014491225699230916,
"acc_norm": 0.6126279863481229,
"acc_norm_stderr": 0.01423587248790987
},
"harness|hellaswag|10": {
"acc": 0.6105357498506274,
"acc_stderr": 0.0048663222583359665,
"acc_norm": 0.8198566022704641,
"acc_norm_stderr": 0.0038352143402103785
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5703703703703704,
"acc_stderr": 0.042763494943765995,
"acc_norm": 0.5703703703703704,
"acc_norm_stderr": 0.042763494943765995
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6776315789473685,
"acc_stderr": 0.03803510248351585,
"acc_norm": 0.6776315789473685,
"acc_norm_stderr": 0.03803510248351585
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.6,
"acc_stderr": 0.04923659639173309,
"acc_norm": 0.6,
"acc_norm_stderr": 0.04923659639173309
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6830188679245283,
"acc_stderr": 0.028637235639800893,
"acc_norm": 0.6830188679245283,
"acc_norm_stderr": 0.028637235639800893
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.6597222222222222,
"acc_stderr": 0.039621355734862175,
"acc_norm": 0.6597222222222222,
"acc_norm_stderr": 0.039621355734862175
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.46,
"acc_stderr": 0.05009082659620333,
"acc_norm": 0.46,
"acc_norm_stderr": 0.05009082659620333
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.45,
"acc_stderr": 0.05,
"acc_norm": 0.45,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.32,
"acc_stderr": 0.04688261722621505,
"acc_norm": 0.32,
"acc_norm_stderr": 0.04688261722621505
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.653179190751445,
"acc_stderr": 0.036291466701596636,
"acc_norm": 0.653179190751445,
"acc_norm_stderr": 0.036291466701596636
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4117647058823529,
"acc_stderr": 0.04897104952726366,
"acc_norm": 0.4117647058823529,
"acc_norm_stderr": 0.04897104952726366
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.75,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.75,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.502127659574468,
"acc_stderr": 0.032685726586674915,
"acc_norm": 0.502127659574468,
"acc_norm_stderr": 0.032685726586674915
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.41228070175438597,
"acc_stderr": 0.046306532033665956,
"acc_norm": 0.41228070175438597,
"acc_norm_stderr": 0.046306532033665956
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.593103448275862,
"acc_stderr": 0.04093793981266236,
"acc_norm": 0.593103448275862,
"acc_norm_stderr": 0.04093793981266236
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.41005291005291006,
"acc_stderr": 0.025331202438944444,
"acc_norm": 0.41005291005291006,
"acc_norm_stderr": 0.025331202438944444
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4603174603174603,
"acc_stderr": 0.04458029125470973,
"acc_norm": 0.4603174603174603,
"acc_norm_stderr": 0.04458029125470973
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.33,
"acc_stderr": 0.047258156262526045,
"acc_norm": 0.33,
"acc_norm_stderr": 0.047258156262526045
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7580645161290323,
"acc_stderr": 0.024362599693031086,
"acc_norm": 0.7580645161290323,
"acc_norm_stderr": 0.024362599693031086
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.4630541871921182,
"acc_stderr": 0.035083705204426656,
"acc_norm": 0.4630541871921182,
"acc_norm_stderr": 0.035083705204426656
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.64,
"acc_stderr": 0.04824181513244218,
"acc_norm": 0.64,
"acc_norm_stderr": 0.04824181513244218
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7272727272727273,
"acc_stderr": 0.0347769116216366,
"acc_norm": 0.7272727272727273,
"acc_norm_stderr": 0.0347769116216366
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7474747474747475,
"acc_stderr": 0.030954055470365897,
"acc_norm": 0.7474747474747475,
"acc_norm_stderr": 0.030954055470365897
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8756476683937824,
"acc_stderr": 0.023814477086593556,
"acc_norm": 0.8756476683937824,
"acc_norm_stderr": 0.023814477086593556
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6410256410256411,
"acc_stderr": 0.02432173848460235,
"acc_norm": 0.6410256410256411,
"acc_norm_stderr": 0.02432173848460235
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3037037037037037,
"acc_stderr": 0.02803792996911499,
"acc_norm": 0.3037037037037037,
"acc_norm_stderr": 0.02803792996911499
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6428571428571429,
"acc_stderr": 0.031124619309328177,
"acc_norm": 0.6428571428571429,
"acc_norm_stderr": 0.031124619309328177
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3708609271523179,
"acc_stderr": 0.03943966699183629,
"acc_norm": 0.3708609271523179,
"acc_norm_stderr": 0.03943966699183629
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8146788990825689,
"acc_stderr": 0.01665927970029582,
"acc_norm": 0.8146788990825689,
"acc_norm_stderr": 0.01665927970029582
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.4583333333333333,
"acc_stderr": 0.033981108902946366,
"acc_norm": 0.4583333333333333,
"acc_norm_stderr": 0.033981108902946366
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7598039215686274,
"acc_stderr": 0.02998373305591361,
"acc_norm": 0.7598039215686274,
"acc_norm_stderr": 0.02998373305591361
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7848101265822784,
"acc_stderr": 0.02675082699467617,
"acc_norm": 0.7848101265822784,
"acc_norm_stderr": 0.02675082699467617
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6412556053811659,
"acc_stderr": 0.03219079200419996,
"acc_norm": 0.6412556053811659,
"acc_norm_stderr": 0.03219079200419996
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7633587786259542,
"acc_stderr": 0.03727673575596914,
"acc_norm": 0.7633587786259542,
"acc_norm_stderr": 0.03727673575596914
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.768595041322314,
"acc_stderr": 0.038498560987940876,
"acc_norm": 0.768595041322314,
"acc_norm_stderr": 0.038498560987940876
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.6666666666666666,
"acc_stderr": 0.04557239513497751,
"acc_norm": 0.6666666666666666,
"acc_norm_stderr": 0.04557239513497751
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7116564417177914,
"acc_stderr": 0.035590395316173425,
"acc_norm": 0.7116564417177914,
"acc_norm_stderr": 0.035590395316173425
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.4642857142857143,
"acc_stderr": 0.04733667890053756,
"acc_norm": 0.4642857142857143,
"acc_norm_stderr": 0.04733667890053756
},
"harness|hendrycksTest-management|5": {
"acc": 0.8058252427184466,
"acc_stderr": 0.03916667762822584,
"acc_norm": 0.8058252427184466,
"acc_norm_stderr": 0.03916667762822584
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8504273504273504,
"acc_stderr": 0.023365051491753715,
"acc_norm": 0.8504273504273504,
"acc_norm_stderr": 0.023365051491753715
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8020434227330779,
"acc_stderr": 0.014248873549217582,
"acc_norm": 0.8020434227330779,
"acc_norm_stderr": 0.014248873549217582
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.6907514450867052,
"acc_stderr": 0.02488314057007176,
"acc_norm": 0.6907514450867052,
"acc_norm_stderr": 0.02488314057007176
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.4245810055865922,
"acc_stderr": 0.01653117099327889,
"acc_norm": 0.4245810055865922,
"acc_norm_stderr": 0.01653117099327889
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.6830065359477124,
"acc_stderr": 0.026643278474508755,
"acc_norm": 0.6830065359477124,
"acc_norm_stderr": 0.026643278474508755
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6559485530546624,
"acc_stderr": 0.02698147804364804,
"acc_norm": 0.6559485530546624,
"acc_norm_stderr": 0.02698147804364804
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7067901234567902,
"acc_stderr": 0.025329888171900926,
"acc_norm": 0.7067901234567902,
"acc_norm_stderr": 0.025329888171900926
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4397163120567376,
"acc_stderr": 0.02960991207559411,
"acc_norm": 0.4397163120567376,
"acc_norm_stderr": 0.02960991207559411
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4348109517601043,
"acc_stderr": 0.012661233805616295,
"acc_norm": 0.4348109517601043,
"acc_norm_stderr": 0.012661233805616295
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6066176470588235,
"acc_stderr": 0.029674288281311155,
"acc_norm": 0.6066176470588235,
"acc_norm_stderr": 0.029674288281311155
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6356209150326797,
"acc_stderr": 0.019469518221573705,
"acc_norm": 0.6356209150326797,
"acc_norm_stderr": 0.019469518221573705
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6272727272727273,
"acc_stderr": 0.04631381319425465,
"acc_norm": 0.6272727272727273,
"acc_norm_stderr": 0.04631381319425465
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.710204081632653,
"acc_stderr": 0.02904308868330433,
"acc_norm": 0.710204081632653,
"acc_norm_stderr": 0.02904308868330433
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8258706467661692,
"acc_stderr": 0.026814951200421606,
"acc_norm": 0.8258706467661692,
"acc_norm_stderr": 0.026814951200421606
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.86,
"acc_stderr": 0.03487350880197768,
"acc_norm": 0.86,
"acc_norm_stderr": 0.03487350880197768
},
"harness|hendrycksTest-virology|5": {
"acc": 0.4939759036144578,
"acc_stderr": 0.03892212195333045,
"acc_norm": 0.4939759036144578,
"acc_norm_stderr": 0.03892212195333045
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8421052631578947,
"acc_stderr": 0.02796678585916089,
"acc_norm": 0.8421052631578947,
"acc_norm_stderr": 0.02796678585916089
},
"harness|truthfulqa:mc|0": {
"mc1": 0.25703794369645044,
"mc1_stderr": 0.01529807750948508,
"mc2": 0.4153002055665266,
"mc2_stderr": 0.014702058713161457
},
"harness|winogrande|5": {
"acc": 0.7805840568271507,
"acc_stderr": 0.011631268360607778
},
"harness|gsm8k|5": {
"acc": 0.35178165276724793,
"acc_stderr": 0.01315344602353602
}
}
```
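To pull just the aggregated numbers above programmatically, one can load the "results" configuration described earlier; a minimal sketch (the "latest" split always points to the most recent run, per this card's own description):

```python
from datasets import load_dataset

# Aggregated metrics for the run live in the "results" config; "latest"
# always points to the most recent evaluation timestamp.
results = load_dataset("open-llm-leaderboard/details_CallComply__Starling-LM-11B-alpha",
	"results",
	split="latest")
print(results[0])
```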
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | open-llm-leaderboard/details_CallComply__Starling-LM-11B-alpha | [
"region:us"
] | 2024-01-14T22:53:11+00:00 | {"pretty_name": "Evaluation run of CallComply/Starling-LM-11B-alpha", "dataset_summary": "Dataset automatically created during the evaluation run of model [CallComply/Starling-LM-11B-alpha](https://huggingface.co/CallComply/Starling-LM-11B-alpha) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_CallComply__Starling-LM-11B-alpha\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-14T22:50:55.626486](https://huggingface.co/datasets/open-llm-leaderboard/details_CallComply__Starling-LM-11B-alpha/blob/main/results_2024-01-14T22-50-55.626486.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6124497978149351,\n \"acc_stderr\": 0.032857819921299845,\n \"acc_norm\": 0.618390298674969,\n \"acc_norm_stderr\": 0.03352975999467289,\n \"mc1\": 0.25703794369645044,\n \"mc1_stderr\": 0.01529807750948508,\n \"mc2\": 0.4153002055665266,\n \"mc2_stderr\": 0.014702058713161457\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.5639931740614335,\n \"acc_stderr\": 0.014491225699230916,\n \"acc_norm\": 0.6126279863481229,\n \"acc_norm_stderr\": 0.01423587248790987\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6105357498506274,\n \"acc_stderr\": 0.0048663222583359665,\n \"acc_norm\": 0.8198566022704641,\n \"acc_norm_stderr\": 0.0038352143402103785\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5703703703703704,\n \"acc_stderr\": 0.042763494943765995,\n \"acc_norm\": 0.5703703703703704,\n \"acc_norm_stderr\": 0.042763494943765995\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6776315789473685,\n \"acc_stderr\": 0.03803510248351585,\n \"acc_norm\": 0.6776315789473685,\n \"acc_norm_stderr\": 0.03803510248351585\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.6,\n \"acc_stderr\": 0.04923659639173309,\n \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.04923659639173309\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6830188679245283,\n \"acc_stderr\": 0.028637235639800893,\n \"acc_norm\": 0.6830188679245283,\n \"acc_norm_stderr\": 0.028637235639800893\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6597222222222222,\n \"acc_stderr\": 0.039621355734862175,\n \"acc_norm\": 0.6597222222222222,\n \"acc_norm_stderr\": 0.039621355734862175\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n 
\"acc\": 0.46,\n \"acc_stderr\": 0.05009082659620333,\n \"acc_norm\": 0.46,\n \"acc_norm_stderr\": 0.05009082659620333\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.45,\n \"acc_stderr\": 0.05,\n \"acc_norm\": 0.45,\n \"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.32,\n \"acc_stderr\": 0.04688261722621505,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.04688261722621505\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.653179190751445,\n \"acc_stderr\": 0.036291466701596636,\n \"acc_norm\": 0.653179190751445,\n \"acc_norm_stderr\": 0.036291466701596636\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.4117647058823529,\n \"acc_stderr\": 0.04897104952726366,\n \"acc_norm\": 0.4117647058823529,\n \"acc_norm_stderr\": 0.04897104952726366\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.75,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.75,\n \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.502127659574468,\n \"acc_stderr\": 0.032685726586674915,\n \"acc_norm\": 0.502127659574468,\n \"acc_norm_stderr\": 0.032685726586674915\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.41228070175438597,\n \"acc_stderr\": 0.046306532033665956,\n \"acc_norm\": 0.41228070175438597,\n \"acc_norm_stderr\": 0.046306532033665956\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.593103448275862,\n \"acc_stderr\": 0.04093793981266236,\n \"acc_norm\": 0.593103448275862,\n \"acc_norm_stderr\": 0.04093793981266236\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.41005291005291006,\n \"acc_stderr\": 0.025331202438944444,\n \"acc_norm\": 0.41005291005291006,\n \"acc_norm_stderr\": 0.025331202438944444\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4603174603174603,\n \"acc_stderr\": 0.04458029125470973,\n \"acc_norm\": 0.4603174603174603,\n \"acc_norm_stderr\": 0.04458029125470973\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.33,\n \"acc_stderr\": 0.047258156262526045,\n \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.047258156262526045\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7580645161290323,\n \"acc_stderr\": 0.024362599693031086,\n \"acc_norm\": 0.7580645161290323,\n \"acc_norm_stderr\": 0.024362599693031086\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.4630541871921182,\n \"acc_stderr\": 0.035083705204426656,\n \"acc_norm\": 0.4630541871921182,\n \"acc_norm_stderr\": 0.035083705204426656\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.64,\n \"acc_stderr\": 0.04824181513244218,\n \"acc_norm\": 0.64,\n \"acc_norm_stderr\": 0.04824181513244218\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7272727272727273,\n \"acc_stderr\": 0.0347769116216366,\n \"acc_norm\": 0.7272727272727273,\n \"acc_norm_stderr\": 0.0347769116216366\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7474747474747475,\n \"acc_stderr\": 0.030954055470365897,\n \"acc_norm\": 0.7474747474747475,\n \"acc_norm_stderr\": 0.030954055470365897\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8756476683937824,\n \"acc_stderr\": 0.023814477086593556,\n \"acc_norm\": 0.8756476683937824,\n \"acc_norm_stderr\": 0.023814477086593556\n },\n 
\"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6410256410256411,\n \"acc_stderr\": 0.02432173848460235,\n \"acc_norm\": 0.6410256410256411,\n \"acc_norm_stderr\": 0.02432173848460235\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.3037037037037037,\n \"acc_stderr\": 0.02803792996911499,\n \"acc_norm\": 0.3037037037037037,\n \"acc_norm_stderr\": 0.02803792996911499\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6428571428571429,\n \"acc_stderr\": 0.031124619309328177,\n \"acc_norm\": 0.6428571428571429,\n \"acc_norm_stderr\": 0.031124619309328177\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3708609271523179,\n \"acc_stderr\": 0.03943966699183629,\n \"acc_norm\": 0.3708609271523179,\n \"acc_norm_stderr\": 0.03943966699183629\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8146788990825689,\n \"acc_stderr\": 0.01665927970029582,\n \"acc_norm\": 0.8146788990825689,\n \"acc_norm_stderr\": 0.01665927970029582\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.4583333333333333,\n \"acc_stderr\": 0.033981108902946366,\n \"acc_norm\": 0.4583333333333333,\n \"acc_norm_stderr\": 0.033981108902946366\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7598039215686274,\n \"acc_stderr\": 0.02998373305591361,\n \"acc_norm\": 0.7598039215686274,\n \"acc_norm_stderr\": 0.02998373305591361\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7848101265822784,\n \"acc_stderr\": 0.02675082699467617,\n \"acc_norm\": 0.7848101265822784,\n \"acc_norm_stderr\": 0.02675082699467617\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6412556053811659,\n \"acc_stderr\": 0.03219079200419996,\n \"acc_norm\": 0.6412556053811659,\n \"acc_norm_stderr\": 0.03219079200419996\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7633587786259542,\n \"acc_stderr\": 0.03727673575596914,\n \"acc_norm\": 0.7633587786259542,\n \"acc_norm_stderr\": 0.03727673575596914\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.768595041322314,\n \"acc_stderr\": 0.038498560987940876,\n \"acc_norm\": 0.768595041322314,\n \"acc_norm_stderr\": 0.038498560987940876\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.6666666666666666,\n \"acc_stderr\": 0.04557239513497751,\n \"acc_norm\": 0.6666666666666666,\n \"acc_norm_stderr\": 0.04557239513497751\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7116564417177914,\n \"acc_stderr\": 0.035590395316173425,\n \"acc_norm\": 0.7116564417177914,\n \"acc_norm_stderr\": 0.035590395316173425\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4642857142857143,\n \"acc_stderr\": 0.04733667890053756,\n \"acc_norm\": 0.4642857142857143,\n \"acc_norm_stderr\": 0.04733667890053756\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.8058252427184466,\n \"acc_stderr\": 0.03916667762822584,\n \"acc_norm\": 0.8058252427184466,\n \"acc_norm_stderr\": 0.03916667762822584\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8504273504273504,\n \"acc_stderr\": 0.023365051491753715,\n \"acc_norm\": 0.8504273504273504,\n \"acc_norm_stderr\": 0.023365051491753715\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 
0.8020434227330779,\n \"acc_stderr\": 0.014248873549217582,\n \"acc_norm\": 0.8020434227330779,\n \"acc_norm_stderr\": 0.014248873549217582\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.6907514450867052,\n \"acc_stderr\": 0.02488314057007176,\n \"acc_norm\": 0.6907514450867052,\n \"acc_norm_stderr\": 0.02488314057007176\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.4245810055865922,\n \"acc_stderr\": 0.01653117099327889,\n \"acc_norm\": 0.4245810055865922,\n \"acc_norm_stderr\": 0.01653117099327889\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.6830065359477124,\n \"acc_stderr\": 0.026643278474508755,\n \"acc_norm\": 0.6830065359477124,\n \"acc_norm_stderr\": 0.026643278474508755\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6559485530546624,\n \"acc_stderr\": 0.02698147804364804,\n \"acc_norm\": 0.6559485530546624,\n \"acc_norm_stderr\": 0.02698147804364804\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7067901234567902,\n \"acc_stderr\": 0.025329888171900926,\n \"acc_norm\": 0.7067901234567902,\n \"acc_norm_stderr\": 0.025329888171900926\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.4397163120567376,\n \"acc_stderr\": 0.02960991207559411,\n \"acc_norm\": 0.4397163120567376,\n \"acc_norm_stderr\": 0.02960991207559411\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4348109517601043,\n \"acc_stderr\": 0.012661233805616295,\n \"acc_norm\": 0.4348109517601043,\n \"acc_norm_stderr\": 0.012661233805616295\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6066176470588235,\n \"acc_stderr\": 0.029674288281311155,\n \"acc_norm\": 0.6066176470588235,\n \"acc_norm_stderr\": 0.029674288281311155\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6356209150326797,\n \"acc_stderr\": 0.019469518221573705,\n \"acc_norm\": 0.6356209150326797,\n \"acc_norm_stderr\": 0.019469518221573705\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6272727272727273,\n \"acc_stderr\": 0.04631381319425465,\n \"acc_norm\": 0.6272727272727273,\n \"acc_norm_stderr\": 0.04631381319425465\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.710204081632653,\n \"acc_stderr\": 0.02904308868330433,\n \"acc_norm\": 0.710204081632653,\n \"acc_norm_stderr\": 0.02904308868330433\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8258706467661692,\n \"acc_stderr\": 0.026814951200421606,\n \"acc_norm\": 0.8258706467661692,\n \"acc_norm_stderr\": 0.026814951200421606\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.86,\n \"acc_stderr\": 0.03487350880197768,\n \"acc_norm\": 0.86,\n \"acc_norm_stderr\": 0.03487350880197768\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.4939759036144578,\n \"acc_stderr\": 0.03892212195333045,\n \"acc_norm\": 0.4939759036144578,\n \"acc_norm_stderr\": 0.03892212195333045\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8421052631578947,\n \"acc_stderr\": 0.02796678585916089,\n \"acc_norm\": 0.8421052631578947,\n \"acc_norm_stderr\": 0.02796678585916089\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.25703794369645044,\n \"mc1_stderr\": 0.01529807750948508,\n \"mc2\": 0.4153002055665266,\n \"mc2_stderr\": 0.014702058713161457\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7805840568271507,\n \"acc_stderr\": 0.011631268360607778\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.35178165276724793,\n \"acc_stderr\": 0.01315344602353602\n 
}\n}\n```", "repo_url": "https://huggingface.co/CallComply/Starling-LM-11B-alpha", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_14T22_50_55.626486", "path": ["**/details_harness|arc:challenge|25_2024-01-14T22-50-55.626486.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-14T22-50-55.626486.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_14T22_50_55.626486", "path": ["**/details_harness|gsm8k|5_2024-01-14T22-50-55.626486.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-14T22-50-55.626486.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_14T22_50_55.626486", "path": ["**/details_harness|hellaswag|10_2024-01-14T22-50-55.626486.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-14T22-50-55.626486.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_14T22_50_55.626486", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T22-50-55.626486.parquet", 
"**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-14T22-50-55.626486.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T22-50-55.626486.parquet", 
"**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-14T22-50-55.626486.parquet", 
"**/details_harness|hendrycksTest-philosophy|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-14T22-50-55.626486.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-14T22-50-55.626486.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_14T22_50_55.626486", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T22-50-55.626486.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T22-50-55.626486.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_14T22_50_55.626486", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-14T22-50-55.626486.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-14T22-50-55.626486.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_14T22_50_55.626486", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-14T22-50-55.626486.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-14T22-50-55.626486.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_14T22_50_55.626486", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T22-50-55.626486.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T22-50-55.626486.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_14T22_50_55.626486", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T22-50-55.626486.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T22-50-55.626486.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_14T22_50_55.626486", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-14T22-50-55.626486.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-14T22-50-55.626486.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_14T22_50_55.626486", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T22-50-55.626486.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T22-50-55.626486.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_14T22_50_55.626486", "path": 
["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T22-50-55.626486.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T22-50-55.626486.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_14T22_50_55.626486", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T22-50-55.626486.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T22-50-55.626486.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_14T22_50_55.626486", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T22-50-55.626486.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T22-50-55.626486.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_14T22_50_55.626486", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-14T22-50-55.626486.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-14T22-50-55.626486.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_14T22_50_55.626486", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-14T22-50-55.626486.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-14T22-50-55.626486.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_14T22_50_55.626486", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T22-50-55.626486.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T22-50-55.626486.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_14T22_50_55.626486", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-14T22-50-55.626486.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-14T22-50-55.626486.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_14T22_50_55.626486", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T22-50-55.626486.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T22-50-55.626486.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_14T22_50_55.626486", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T22-50-55.626486.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T22-50-55.626486.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_01_14T22_50_55.626486", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T22-50-55.626486.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T22-50-55.626486.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_14T22_50_55.626486", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-14T22-50-55.626486.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-global_facts|5_2024-01-14T22-50-55.626486.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_14T22_50_55.626486", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T22-50-55.626486.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T22-50-55.626486.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_14T22_50_55.626486", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T22-50-55.626486.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T22-50-55.626486.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_14T22_50_55.626486", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T22-50-55.626486.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T22-50-55.626486.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_14T22_50_55.626486", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T22-50-55.626486.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T22-50-55.626486.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_14T22_50_55.626486", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T22-50-55.626486.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T22-50-55.626486.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_14T22_50_55.626486", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T22-50-55.626486.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T22-50-55.626486.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_14T22_50_55.626486", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T22-50-55.626486.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T22-50-55.626486.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_14T22_50_55.626486", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T22-50-55.626486.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T22-50-55.626486.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_14T22_50_55.626486", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T22-50-55.626486.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T22-50-55.626486.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_14T22_50_55.626486", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T22-50-55.626486.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T22-50-55.626486.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_14T22_50_55.626486", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T22-50-55.626486.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T22-50-55.626486.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_14T22_50_55.626486", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T22-50-55.626486.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T22-50-55.626486.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_14T22_50_55.626486", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T22-50-55.626486.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T22-50-55.626486.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_14T22_50_55.626486", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T22-50-55.626486.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T22-50-55.626486.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_14T22_50_55.626486", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-14T22-50-55.626486.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-14T22-50-55.626486.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_14T22_50_55.626486", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T22-50-55.626486.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T22-50-55.626486.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_14T22_50_55.626486", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-14T22-50-55.626486.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-14T22-50-55.626486.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_14T22_50_55.626486", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T22-50-55.626486.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T22-50-55.626486.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_14T22_50_55.626486", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T22-50-55.626486.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T22-50-55.626486.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_14T22_50_55.626486", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T22-50-55.626486.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T22-50-55.626486.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_01_14T22_50_55.626486", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-14T22-50-55.626486.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-14T22-50-55.626486.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_14T22_50_55.626486", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-14T22-50-55.626486.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-14T22-50-55.626486.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_14T22_50_55.626486", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T22-50-55.626486.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T22-50-55.626486.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_14T22_50_55.626486", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T22-50-55.626486.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T22-50-55.626486.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_14T22_50_55.626486", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T22-50-55.626486.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T22-50-55.626486.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_14T22_50_55.626486", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T22-50-55.626486.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T22-50-55.626486.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_01_14T22_50_55.626486", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-14T22-50-55.626486.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-14T22-50-55.626486.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_14T22_50_55.626486", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-14T22-50-55.626486.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-14T22-50-55.626486.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_14T22_50_55.626486", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-14T22-50-55.626486.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-14T22-50-55.626486.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_14T22_50_55.626486", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T22-50-55.626486.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T22-50-55.626486.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_14T22_50_55.626486", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2024-01-14T22-50-55.626486.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-14T22-50-55.626486.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_14T22_50_55.626486", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T22-50-55.626486.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T22-50-55.626486.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_14T22_50_55.626486", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T22-50-55.626486.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T22-50-55.626486.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_14T22_50_55.626486", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-14T22-50-55.626486.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-14T22-50-55.626486.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_14T22_50_55.626486", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-14T22-50-55.626486.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-14T22-50-55.626486.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_14T22_50_55.626486", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-14T22-50-55.626486.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-14T22-50-55.626486.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_14T22_50_55.626486", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T22-50-55.626486.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T22-50-55.626486.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_14T22_50_55.626486", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-14T22-50-55.626486.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-14T22-50-55.626486.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_14T22_50_55.626486", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-14T22-50-55.626486.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-14T22-50-55.626486.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_14T22_50_55.626486", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-14T22-50-55.626486.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-14T22-50-55.626486.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_14T22_50_55.626486", "path": ["**/details_harness|winogrande|5_2024-01-14T22-50-55.626486.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-14T22-50-55.626486.parquet"]}]}, {"config_name": "results", "data_files": [{"split": 
"2024_01_14T22_50_55.626486", "path": ["results_2024-01-14T22-50-55.626486.parquet"]}, {"split": "latest", "path": ["results_2024-01-14T22-50-55.626486.parquet"]}]}]} | 2024-01-14T22:53:32+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of CallComply/Starling-LM-11B-alpha
Dataset automatically created during the evaluation run of model CallComply/Starling-LM-11B-alpha on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
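```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_CallComply__Starling-LM-11B-alpha",
	"harness_winogrande_5",
	split="train")
```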
## Latest results
These are the latest results from run 2024-01-14T22:50:55.626486 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
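To pull those aggregated scores directly, a minimal sketch (assuming the "results" configuration and its "latest" split declared in this dataset's metadata) is:

```python
from datasets import load_dataset

# "results" holds the aggregated scores; the "latest" split points at the
# most recent run (2024-01-14T22:50:55.626486 for this dataset).
results = load_dataset("open-llm-leaderboard/details_CallComply__Starling-LM-11B-alpha",
	"results",
	split="latest")
```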
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Evaluation run of CallComply/Starling-LM-11B-alpha\n\n\n\nDataset automatically created during the evaluation run of model CallComply/Starling-LM-11B-alpha on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-01-14T22:50:55.626486(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of CallComply/Starling-LM-11B-alpha\n\n\n\nDataset automatically created during the evaluation run of model CallComply/Starling-LM-11B-alpha on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-01-14T22:50:55.626486(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] |
a76e08334dc92d55dfaeae38e73b81010ee78ae2 |
# Dataset Card for Evaluation run of cloudyu/Yi-34Bx3-MoE-90B
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [cloudyu/Yi-34Bx3-MoE-90B](https://huggingface.co/cloudyu/Yi-34Bx3-MoE-90B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_cloudyu__Yi-34Bx3-MoE-90B",
"harness_winogrande_5",
split="train")
```
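You can also enumerate the available configurations and pull the aggregated metrics directly. The sketch below is a minimal example: `get_dataset_config_names` is a standard `datasets` helper, and the `"results"` configuration and `"latest"` split names come from the description above.

```python
from datasets import get_dataset_config_names, load_dataset

# List every configuration available in this details repo
# (per-task configs such as "harness_winogrande_5", plus "results").
configs = get_dataset_config_names("open-llm-leaderboard/details_cloudyu__Yi-34Bx3-MoE-90B")
print(configs)

# The "results" configuration stores the aggregated metrics; the
# "latest" split always points at the most recent run.
results = load_dataset(
    "open-llm-leaderboard/details_cloudyu__Yi-34Bx3-MoE-90B",
    "results",
    split="latest",
)
print(results[0])
```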
## Latest results
These are the [latest results from run 2024-01-14T23:01:35.520046](https://huggingface.co/datasets/open-llm-leaderboard/details_cloudyu__Yi-34Bx3-MoE-90B/blob/main/results_2024-01-14T23-01-35.520046.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.770922119161067,
"acc_stderr": 0.027863740601296195,
"acc_norm": 0.774340723628372,
"acc_norm_stderr": 0.02839947094621756,
"mc1": 0.49326805385556916,
"mc1_stderr": 0.017501914492655386,
"mc2": 0.6631117489702718,
"mc2_stderr": 0.01453284217897903
},
"harness|arc:challenge|25": {
"acc": 0.6723549488054608,
"acc_stderr": 0.01371584794071934,
"acc_norm": 0.7090443686006825,
"acc_norm_stderr": 0.01327307786590759
},
"harness|hellaswag|10": {
"acc": 0.6586337382991436,
"acc_stderr": 0.004731989816563666,
"acc_norm": 0.8533160724955188,
"acc_norm_stderr": 0.003530675014892315
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.48,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.48,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.7407407407407407,
"acc_stderr": 0.03785714465066653,
"acc_norm": 0.7407407407407407,
"acc_norm_stderr": 0.03785714465066653
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.9013157894736842,
"acc_stderr": 0.02427022773752271,
"acc_norm": 0.9013157894736842,
"acc_norm_stderr": 0.02427022773752271
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.76,
"acc_stderr": 0.04292346959909283,
"acc_norm": 0.76,
"acc_norm_stderr": 0.04292346959909283
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.8037735849056604,
"acc_stderr": 0.024442388131100806,
"acc_norm": 0.8037735849056604,
"acc_norm_stderr": 0.024442388131100806
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.8888888888888888,
"acc_stderr": 0.026280550932848087,
"acc_norm": 0.8888888888888888,
"acc_norm_stderr": 0.026280550932848087
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.48,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.48,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.65,
"acc_stderr": 0.04793724854411019,
"acc_norm": 0.65,
"acc_norm_stderr": 0.04793724854411019
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.45,
"acc_stderr": 0.04999999999999999,
"acc_norm": 0.45,
"acc_norm_stderr": 0.04999999999999999
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.7456647398843931,
"acc_stderr": 0.0332055644308557,
"acc_norm": 0.7456647398843931,
"acc_norm_stderr": 0.0332055644308557
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.5980392156862745,
"acc_stderr": 0.04878608714466996,
"acc_norm": 0.5980392156862745,
"acc_norm_stderr": 0.04878608714466996
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.81,
"acc_stderr": 0.03942772444036624,
"acc_norm": 0.81,
"acc_norm_stderr": 0.03942772444036624
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.8,
"acc_stderr": 0.0261488180184245,
"acc_norm": 0.8,
"acc_norm_stderr": 0.0261488180184245
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.6052631578947368,
"acc_stderr": 0.045981880578165414,
"acc_norm": 0.6052631578947368,
"acc_norm_stderr": 0.045981880578165414
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.7793103448275862,
"acc_stderr": 0.03455930201924814,
"acc_norm": 0.7793103448275862,
"acc_norm_stderr": 0.03455930201924814
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.7195767195767195,
"acc_stderr": 0.023135287974325618,
"acc_norm": 0.7195767195767195,
"acc_norm_stderr": 0.023135287974325618
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.5873015873015873,
"acc_stderr": 0.04403438954768176,
"acc_norm": 0.5873015873015873,
"acc_norm_stderr": 0.04403438954768176
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.57,
"acc_stderr": 0.049756985195624284,
"acc_norm": 0.57,
"acc_norm_stderr": 0.049756985195624284
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.9,
"acc_stderr": 0.017066403719657255,
"acc_norm": 0.9,
"acc_norm_stderr": 0.017066403719657255
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.6502463054187192,
"acc_stderr": 0.03355400904969567,
"acc_norm": 0.6502463054187192,
"acc_norm_stderr": 0.03355400904969567
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.81,
"acc_stderr": 0.03942772444036625,
"acc_norm": 0.81,
"acc_norm_stderr": 0.03942772444036625
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.8727272727272727,
"acc_stderr": 0.026024657651656187,
"acc_norm": 0.8727272727272727,
"acc_norm_stderr": 0.026024657651656187
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.9393939393939394,
"acc_stderr": 0.01699999492742161,
"acc_norm": 0.9393939393939394,
"acc_norm_stderr": 0.01699999492742161
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9689119170984456,
"acc_stderr": 0.012525310625527033,
"acc_norm": 0.9689119170984456,
"acc_norm_stderr": 0.012525310625527033
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.823076923076923,
"acc_stderr": 0.01934807017439699,
"acc_norm": 0.823076923076923,
"acc_norm_stderr": 0.01934807017439699
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.45555555555555555,
"acc_stderr": 0.03036486250482443,
"acc_norm": 0.45555555555555555,
"acc_norm_stderr": 0.03036486250482443
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.8529411764705882,
"acc_stderr": 0.023005459446673957,
"acc_norm": 0.8529411764705882,
"acc_norm_stderr": 0.023005459446673957
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.5099337748344371,
"acc_stderr": 0.04081677107248436,
"acc_norm": 0.5099337748344371,
"acc_norm_stderr": 0.04081677107248436
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.9174311926605505,
"acc_stderr": 0.011800361363016581,
"acc_norm": 0.9174311926605505,
"acc_norm_stderr": 0.011800361363016581
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.6712962962962963,
"acc_stderr": 0.032036140846700596,
"acc_norm": 0.6712962962962963,
"acc_norm_stderr": 0.032036140846700596
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.9264705882352942,
"acc_stderr": 0.018318855850089674,
"acc_norm": 0.9264705882352942,
"acc_norm_stderr": 0.018318855850089674
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8987341772151899,
"acc_stderr": 0.019637720526065515,
"acc_norm": 0.8987341772151899,
"acc_norm_stderr": 0.019637720526065515
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.7892376681614349,
"acc_stderr": 0.027373095500540193,
"acc_norm": 0.7892376681614349,
"acc_norm_stderr": 0.027373095500540193
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.9007633587786259,
"acc_stderr": 0.02622223517147737,
"acc_norm": 0.9007633587786259,
"acc_norm_stderr": 0.02622223517147737
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8925619834710744,
"acc_stderr": 0.028268812192540627,
"acc_norm": 0.8925619834710744,
"acc_norm_stderr": 0.028268812192540627
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.8796296296296297,
"acc_stderr": 0.031457038543062504,
"acc_norm": 0.8796296296296297,
"acc_norm_stderr": 0.031457038543062504
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.8834355828220859,
"acc_stderr": 0.025212327210507094,
"acc_norm": 0.8834355828220859,
"acc_norm_stderr": 0.025212327210507094
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.6339285714285714,
"acc_stderr": 0.04572372358737431,
"acc_norm": 0.6339285714285714,
"acc_norm_stderr": 0.04572372358737431
},
"harness|hendrycksTest-management|5": {
"acc": 0.912621359223301,
"acc_stderr": 0.027960689125970654,
"acc_norm": 0.912621359223301,
"acc_norm_stderr": 0.027960689125970654
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.9401709401709402,
"acc_stderr": 0.015537514263253876,
"acc_norm": 0.9401709401709402,
"acc_norm_stderr": 0.015537514263253876
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.88,
"acc_stderr": 0.03265986323710906,
"acc_norm": 0.88,
"acc_norm_stderr": 0.03265986323710906
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.9067688378033205,
"acc_stderr": 0.010397417087292849,
"acc_norm": 0.9067688378033205,
"acc_norm_stderr": 0.010397417087292849
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.8236994219653179,
"acc_stderr": 0.020516425672490714,
"acc_norm": 0.8236994219653179,
"acc_norm_stderr": 0.020516425672490714
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.8011173184357542,
"acc_stderr": 0.013349892983092521,
"acc_norm": 0.8011173184357542,
"acc_norm_stderr": 0.013349892983092521
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.8562091503267973,
"acc_stderr": 0.020091188936043693,
"acc_norm": 0.8562091503267973,
"acc_norm_stderr": 0.020091188936043693
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.8295819935691319,
"acc_stderr": 0.02135534302826405,
"acc_norm": 0.8295819935691319,
"acc_norm_stderr": 0.02135534302826405
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.8858024691358025,
"acc_stderr": 0.017696832447213897,
"acc_norm": 0.8858024691358025,
"acc_norm_stderr": 0.017696832447213897
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.6382978723404256,
"acc_stderr": 0.028663820147199485,
"acc_norm": 0.6382978723404256,
"acc_norm_stderr": 0.028663820147199485
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.6029986962190352,
"acc_stderr": 0.012496346982909554,
"acc_norm": 0.6029986962190352,
"acc_norm_stderr": 0.012496346982909554
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.8419117647058824,
"acc_stderr": 0.02216146260806852,
"acc_norm": 0.8419117647058824,
"acc_norm_stderr": 0.02216146260806852
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.8251633986928104,
"acc_stderr": 0.015366167064780644,
"acc_norm": 0.8251633986928104,
"acc_norm_stderr": 0.015366167064780644
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.7181818181818181,
"acc_stderr": 0.04309118709946458,
"acc_norm": 0.7181818181818181,
"acc_norm_stderr": 0.04309118709946458
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.8408163265306122,
"acc_stderr": 0.02342097206916635,
"acc_norm": 0.8408163265306122,
"acc_norm_stderr": 0.02342097206916635
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.9054726368159204,
"acc_stderr": 0.020687186951534108,
"acc_norm": 0.9054726368159204,
"acc_norm_stderr": 0.020687186951534108
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.92,
"acc_stderr": 0.0272659924344291,
"acc_norm": 0.92,
"acc_norm_stderr": 0.0272659924344291
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5481927710843374,
"acc_stderr": 0.03874371556587952,
"acc_norm": 0.5481927710843374,
"acc_norm_stderr": 0.03874371556587952
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.9005847953216374,
"acc_stderr": 0.022949025579355044,
"acc_norm": 0.9005847953216374,
"acc_norm_stderr": 0.022949025579355044
},
"harness|truthfulqa:mc|0": {
"mc1": 0.49326805385556916,
"mc1_stderr": 0.017501914492655386,
"mc2": 0.6631117489702718,
"mc2_stderr": 0.01453284217897903
},
"harness|winogrande|5": {
"acc": 0.8429360694554064,
"acc_stderr": 0.010226303949598484
},
"harness|gsm8k|5": {
"acc": 0.7285822592873389,
"acc_stderr": 0.01224900202615058
}
}
```
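If you prefer to work with the raw results file linked above rather than the `datasets` API, a minimal sketch follows. It assumes the filename from the URL above and that the file's top level matches the structure shown in the snippet (the `data.get("results", data)` line also covers dumps that nest the metrics under a `"results"` key).

```python
import json
from huggingface_hub import hf_hub_download

# Download the raw results JSON for this run (filename taken from the
# link above; adjust if a newer run has been added since).
path = hf_hub_download(
    repo_id="open-llm-leaderboard/details_cloudyu__Yi-34Bx3-MoE-90B",
    filename="results_2024-01-14T23-01-35.520046.json",
    repo_type="dataset",
)

with open(path) as f:
    data = json.load(f)
results = data.get("results", data)  # some dumps nest metrics under a "results" key

# Average the 5-shot accuracy over the MMLU (hendrycksTest) subtasks.
mmlu = [v["acc"] for k, v in results.items() if k.startswith("harness|hendrycksTest-")]
print(f"MMLU subtasks: {len(mmlu)}, mean acc: {sum(mmlu) / len(mmlu):.4f}")
```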
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | open-llm-leaderboard/details_cloudyu__Yi-34Bx3-MoE-90B | [
"region:us"
] | 2024-01-14T23:03:51+00:00 | {"pretty_name": "Evaluation run of cloudyu/Yi-34Bx3-MoE-90B", "dataset_summary": "Dataset automatically created during the evaluation run of model [cloudyu/Yi-34Bx3-MoE-90B](https://huggingface.co/cloudyu/Yi-34Bx3-MoE-90B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_cloudyu__Yi-34Bx3-MoE-90B\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-14T23:01:35.520046](https://huggingface.co/datasets/open-llm-leaderboard/details_cloudyu__Yi-34Bx3-MoE-90B/blob/main/results_2024-01-14T23-01-35.520046.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.770922119161067,\n \"acc_stderr\": 0.027863740601296195,\n \"acc_norm\": 0.774340723628372,\n \"acc_norm_stderr\": 0.02839947094621756,\n \"mc1\": 0.49326805385556916,\n \"mc1_stderr\": 0.017501914492655386,\n \"mc2\": 0.6631117489702718,\n \"mc2_stderr\": 0.01453284217897903\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6723549488054608,\n \"acc_stderr\": 0.01371584794071934,\n \"acc_norm\": 0.7090443686006825,\n \"acc_norm_stderr\": 0.01327307786590759\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6586337382991436,\n \"acc_stderr\": 0.004731989816563666,\n \"acc_norm\": 0.8533160724955188,\n \"acc_norm_stderr\": 0.003530675014892315\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.48,\n \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.48,\n \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.7407407407407407,\n \"acc_stderr\": 0.03785714465066653,\n \"acc_norm\": 0.7407407407407407,\n \"acc_norm_stderr\": 0.03785714465066653\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.9013157894736842,\n \"acc_stderr\": 0.02427022773752271,\n \"acc_norm\": 0.9013157894736842,\n \"acc_norm_stderr\": 0.02427022773752271\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.76,\n \"acc_stderr\": 0.04292346959909283,\n \"acc_norm\": 0.76,\n \"acc_norm_stderr\": 0.04292346959909283\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.8037735849056604,\n \"acc_stderr\": 0.024442388131100806,\n \"acc_norm\": 0.8037735849056604,\n \"acc_norm_stderr\": 0.024442388131100806\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.8888888888888888,\n \"acc_stderr\": 0.026280550932848087,\n \"acc_norm\": 0.8888888888888888,\n \"acc_norm_stderr\": 0.026280550932848087\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.48,\n \"acc_stderr\": 
0.050211673156867795,\n \"acc_norm\": 0.48,\n \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.65,\n \"acc_stderr\": 0.04793724854411019,\n \"acc_norm\": 0.65,\n \"acc_norm_stderr\": 0.04793724854411019\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.45,\n \"acc_stderr\": 0.04999999999999999,\n \"acc_norm\": 0.45,\n \"acc_norm_stderr\": 0.04999999999999999\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.7456647398843931,\n \"acc_stderr\": 0.0332055644308557,\n \"acc_norm\": 0.7456647398843931,\n \"acc_norm_stderr\": 0.0332055644308557\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.5980392156862745,\n \"acc_stderr\": 0.04878608714466996,\n \"acc_norm\": 0.5980392156862745,\n \"acc_norm_stderr\": 0.04878608714466996\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.81,\n \"acc_stderr\": 0.03942772444036624,\n \"acc_norm\": 0.81,\n \"acc_norm_stderr\": 0.03942772444036624\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.8,\n \"acc_stderr\": 0.0261488180184245,\n \"acc_norm\": 0.8,\n \"acc_norm_stderr\": 0.0261488180184245\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.6052631578947368,\n \"acc_stderr\": 0.045981880578165414,\n \"acc_norm\": 0.6052631578947368,\n \"acc_norm_stderr\": 0.045981880578165414\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.7793103448275862,\n \"acc_stderr\": 0.03455930201924814,\n \"acc_norm\": 0.7793103448275862,\n \"acc_norm_stderr\": 0.03455930201924814\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.7195767195767195,\n \"acc_stderr\": 0.023135287974325618,\n \"acc_norm\": 0.7195767195767195,\n \"acc_norm_stderr\": 0.023135287974325618\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.5873015873015873,\n \"acc_stderr\": 0.04403438954768176,\n \"acc_norm\": 0.5873015873015873,\n \"acc_norm_stderr\": 0.04403438954768176\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.57,\n \"acc_stderr\": 0.049756985195624284,\n \"acc_norm\": 0.57,\n \"acc_norm_stderr\": 0.049756985195624284\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.9,\n \"acc_stderr\": 0.017066403719657255,\n \"acc_norm\": 0.9,\n \"acc_norm_stderr\": 0.017066403719657255\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.6502463054187192,\n \"acc_stderr\": 0.03355400904969567,\n \"acc_norm\": 0.6502463054187192,\n \"acc_norm_stderr\": 0.03355400904969567\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.81,\n \"acc_stderr\": 0.03942772444036625,\n \"acc_norm\": 0.81,\n \"acc_norm_stderr\": 0.03942772444036625\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.8727272727272727,\n \"acc_stderr\": 0.026024657651656187,\n \"acc_norm\": 0.8727272727272727,\n \"acc_norm_stderr\": 0.026024657651656187\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.9393939393939394,\n \"acc_stderr\": 0.01699999492742161,\n \"acc_norm\": 0.9393939393939394,\n \"acc_norm_stderr\": 0.01699999492742161\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.9689119170984456,\n \"acc_stderr\": 0.012525310625527033,\n \"acc_norm\": 0.9689119170984456,\n \"acc_norm_stderr\": 0.012525310625527033\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.823076923076923,\n \"acc_stderr\": 
0.01934807017439699,\n \"acc_norm\": 0.823076923076923,\n \"acc_norm_stderr\": 0.01934807017439699\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.45555555555555555,\n \"acc_stderr\": 0.03036486250482443,\n \"acc_norm\": 0.45555555555555555,\n \"acc_norm_stderr\": 0.03036486250482443\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.8529411764705882,\n \"acc_stderr\": 0.023005459446673957,\n \"acc_norm\": 0.8529411764705882,\n \"acc_norm_stderr\": 0.023005459446673957\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.5099337748344371,\n \"acc_stderr\": 0.04081677107248436,\n \"acc_norm\": 0.5099337748344371,\n \"acc_norm_stderr\": 0.04081677107248436\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.9174311926605505,\n \"acc_stderr\": 0.011800361363016581,\n \"acc_norm\": 0.9174311926605505,\n \"acc_norm_stderr\": 0.011800361363016581\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.6712962962962963,\n \"acc_stderr\": 0.032036140846700596,\n \"acc_norm\": 0.6712962962962963,\n \"acc_norm_stderr\": 0.032036140846700596\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.9264705882352942,\n \"acc_stderr\": 0.018318855850089674,\n \"acc_norm\": 0.9264705882352942,\n \"acc_norm_stderr\": 0.018318855850089674\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.8987341772151899,\n \"acc_stderr\": 0.019637720526065515,\n \"acc_norm\": 0.8987341772151899,\n \"acc_norm_stderr\": 0.019637720526065515\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.7892376681614349,\n \"acc_stderr\": 0.027373095500540193,\n \"acc_norm\": 0.7892376681614349,\n \"acc_norm_stderr\": 0.027373095500540193\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.9007633587786259,\n \"acc_stderr\": 0.02622223517147737,\n \"acc_norm\": 0.9007633587786259,\n \"acc_norm_stderr\": 0.02622223517147737\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.8925619834710744,\n \"acc_stderr\": 0.028268812192540627,\n \"acc_norm\": 0.8925619834710744,\n \"acc_norm_stderr\": 0.028268812192540627\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8796296296296297,\n \"acc_stderr\": 0.031457038543062504,\n \"acc_norm\": 0.8796296296296297,\n \"acc_norm_stderr\": 0.031457038543062504\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.8834355828220859,\n \"acc_stderr\": 0.025212327210507094,\n \"acc_norm\": 0.8834355828220859,\n \"acc_norm_stderr\": 0.025212327210507094\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.6339285714285714,\n \"acc_stderr\": 0.04572372358737431,\n \"acc_norm\": 0.6339285714285714,\n \"acc_norm_stderr\": 0.04572372358737431\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.912621359223301,\n \"acc_stderr\": 0.027960689125970654,\n \"acc_norm\": 0.912621359223301,\n \"acc_norm_stderr\": 0.027960689125970654\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.9401709401709402,\n \"acc_stderr\": 0.015537514263253876,\n \"acc_norm\": 0.9401709401709402,\n \"acc_norm_stderr\": 0.015537514263253876\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.88,\n \"acc_stderr\": 0.03265986323710906,\n \"acc_norm\": 0.88,\n \"acc_norm_stderr\": 0.03265986323710906\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.9067688378033205,\n \"acc_stderr\": 0.010397417087292849,\n \"acc_norm\": 0.9067688378033205,\n 
\"acc_norm_stderr\": 0.010397417087292849\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.8236994219653179,\n \"acc_stderr\": 0.020516425672490714,\n \"acc_norm\": 0.8236994219653179,\n \"acc_norm_stderr\": 0.020516425672490714\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.8011173184357542,\n \"acc_stderr\": 0.013349892983092521,\n \"acc_norm\": 0.8011173184357542,\n \"acc_norm_stderr\": 0.013349892983092521\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.8562091503267973,\n \"acc_stderr\": 0.020091188936043693,\n \"acc_norm\": 0.8562091503267973,\n \"acc_norm_stderr\": 0.020091188936043693\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.8295819935691319,\n \"acc_stderr\": 0.02135534302826405,\n \"acc_norm\": 0.8295819935691319,\n \"acc_norm_stderr\": 0.02135534302826405\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.8858024691358025,\n \"acc_stderr\": 0.017696832447213897,\n \"acc_norm\": 0.8858024691358025,\n \"acc_norm_stderr\": 0.017696832447213897\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.6382978723404256,\n \"acc_stderr\": 0.028663820147199485,\n \"acc_norm\": 0.6382978723404256,\n \"acc_norm_stderr\": 0.028663820147199485\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.6029986962190352,\n \"acc_stderr\": 0.012496346982909554,\n \"acc_norm\": 0.6029986962190352,\n \"acc_norm_stderr\": 0.012496346982909554\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.8419117647058824,\n \"acc_stderr\": 0.02216146260806852,\n \"acc_norm\": 0.8419117647058824,\n \"acc_norm_stderr\": 0.02216146260806852\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.8251633986928104,\n \"acc_stderr\": 0.015366167064780644,\n \"acc_norm\": 0.8251633986928104,\n \"acc_norm_stderr\": 0.015366167064780644\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7181818181818181,\n \"acc_stderr\": 0.04309118709946458,\n \"acc_norm\": 0.7181818181818181,\n \"acc_norm_stderr\": 0.04309118709946458\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.8408163265306122,\n \"acc_stderr\": 0.02342097206916635,\n \"acc_norm\": 0.8408163265306122,\n \"acc_norm_stderr\": 0.02342097206916635\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.9054726368159204,\n \"acc_stderr\": 0.020687186951534108,\n \"acc_norm\": 0.9054726368159204,\n \"acc_norm_stderr\": 0.020687186951534108\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.92,\n \"acc_stderr\": 0.0272659924344291,\n \"acc_norm\": 0.92,\n \"acc_norm_stderr\": 0.0272659924344291\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5481927710843374,\n \"acc_stderr\": 0.03874371556587952,\n \"acc_norm\": 0.5481927710843374,\n \"acc_norm_stderr\": 0.03874371556587952\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.9005847953216374,\n \"acc_stderr\": 0.022949025579355044,\n \"acc_norm\": 0.9005847953216374,\n \"acc_norm_stderr\": 0.022949025579355044\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.49326805385556916,\n \"mc1_stderr\": 0.017501914492655386,\n \"mc2\": 0.6631117489702718,\n \"mc2_stderr\": 0.01453284217897903\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8429360694554064,\n \"acc_stderr\": 0.010226303949598484\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.7285822592873389,\n \"acc_stderr\": 0.01224900202615058\n }\n}\n```", "repo_url": "https://huggingface.co/cloudyu/Yi-34Bx3-MoE-90B", "leaderboard_url": 
"https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_14T23_01_35.520046", "path": ["**/details_harness|arc:challenge|25_2024-01-14T23-01-35.520046.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-14T23-01-35.520046.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_14T23_01_35.520046", "path": ["**/details_harness|gsm8k|5_2024-01-14T23-01-35.520046.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-14T23-01-35.520046.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_14T23_01_35.520046", "path": ["**/details_harness|hellaswag|10_2024-01-14T23-01-35.520046.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-14T23-01-35.520046.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_14T23_01_35.520046", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T23-01-35.520046.parquet", 
"**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-14T23-01-35.520046.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T23-01-35.520046.parquet", 
"**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-14T23-01-35.520046.parquet", 
"**/details_harness|hendrycksTest-prehistory|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-14T23-01-35.520046.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-14T23-01-35.520046.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_14T23_01_35.520046", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T23-01-35.520046.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-14T23-01-35.520046.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_14T23_01_35.520046", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-14T23-01-35.520046.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-14T23-01-35.520046.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_14T23_01_35.520046", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-14T23-01-35.520046.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-14T23-01-35.520046.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_14T23_01_35.520046", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T23-01-35.520046.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-14T23-01-35.520046.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_14T23_01_35.520046", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T23-01-35.520046.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-14T23-01-35.520046.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_14T23_01_35.520046", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-14T23-01-35.520046.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-14T23-01-35.520046.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_14T23_01_35.520046", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T23-01-35.520046.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-14T23-01-35.520046.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_14T23_01_35.520046", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T23-01-35.520046.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-14T23-01-35.520046.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_14T23_01_35.520046", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T23-01-35.520046.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-14T23-01-35.520046.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_14T23_01_35.520046", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T23-01-35.520046.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-14T23-01-35.520046.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_14T23_01_35.520046", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-14T23-01-35.520046.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-14T23-01-35.520046.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_14T23_01_35.520046", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-14T23-01-35.520046.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-14T23-01-35.520046.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_14T23_01_35.520046", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T23-01-35.520046.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-14T23-01-35.520046.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_14T23_01_35.520046", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-14T23-01-35.520046.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-14T23-01-35.520046.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_14T23_01_35.520046", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T23-01-35.520046.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-14T23-01-35.520046.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_14T23_01_35.520046", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T23-01-35.520046.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-14T23-01-35.520046.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_01_14T23_01_35.520046", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T23-01-35.520046.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-14T23-01-35.520046.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_14T23_01_35.520046", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-14T23-01-35.520046.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-14T23-01-35.520046.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_14T23_01_35.520046", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T23-01-35.520046.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-14T23-01-35.520046.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_14T23_01_35.520046", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T23-01-35.520046.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-14T23-01-35.520046.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_14T23_01_35.520046", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T23-01-35.520046.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-14T23-01-35.520046.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_14T23_01_35.520046", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T23-01-35.520046.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-14T23-01-35.520046.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_14T23_01_35.520046", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T23-01-35.520046.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-14T23-01-35.520046.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_14T23_01_35.520046", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T23-01-35.520046.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-14T23-01-35.520046.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_14T23_01_35.520046", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T23-01-35.520046.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-14T23-01-35.520046.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_14T23_01_35.520046", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T23-01-35.520046.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-14T23-01-35.520046.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_14T23_01_35.520046", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T23-01-35.520046.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-14T23-01-35.520046.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_14T23_01_35.520046", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T23-01-35.520046.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-14T23-01-35.520046.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_14T23_01_35.520046", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T23-01-35.520046.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-14T23-01-35.520046.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_14T23_01_35.520046", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T23-01-35.520046.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-14T23-01-35.520046.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_14T23_01_35.520046", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T23-01-35.520046.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-14T23-01-35.520046.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_14T23_01_35.520046", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T23-01-35.520046.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-14T23-01-35.520046.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_14T23_01_35.520046", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-14T23-01-35.520046.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-14T23-01-35.520046.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_14T23_01_35.520046", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T23-01-35.520046.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-14T23-01-35.520046.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_14T23_01_35.520046", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-14T23-01-35.520046.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-14T23-01-35.520046.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_14T23_01_35.520046", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T23-01-35.520046.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-14T23-01-35.520046.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_14T23_01_35.520046", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T23-01-35.520046.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-14T23-01-35.520046.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_14T23_01_35.520046", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T23-01-35.520046.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-14T23-01-35.520046.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_01_14T23_01_35.520046", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-14T23-01-35.520046.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-14T23-01-35.520046.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_14T23_01_35.520046", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-14T23-01-35.520046.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-14T23-01-35.520046.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_14T23_01_35.520046", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T23-01-35.520046.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-14T23-01-35.520046.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_14T23_01_35.520046", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T23-01-35.520046.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-14T23-01-35.520046.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_14T23_01_35.520046", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T23-01-35.520046.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-14T23-01-35.520046.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_14T23_01_35.520046", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T23-01-35.520046.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-14T23-01-35.520046.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_01_14T23_01_35.520046", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-14T23-01-35.520046.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-14T23-01-35.520046.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_14T23_01_35.520046", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-14T23-01-35.520046.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-14T23-01-35.520046.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_14T23_01_35.520046", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-14T23-01-35.520046.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-14T23-01-35.520046.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_14T23_01_35.520046", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T23-01-35.520046.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-14T23-01-35.520046.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_14T23_01_35.520046", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-14T23-01-35.520046.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2024-01-14T23-01-35.520046.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_14T23_01_35.520046", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T23-01-35.520046.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-14T23-01-35.520046.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_14T23_01_35.520046", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T23-01-35.520046.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-14T23-01-35.520046.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_14T23_01_35.520046", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-14T23-01-35.520046.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-14T23-01-35.520046.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_14T23_01_35.520046", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-14T23-01-35.520046.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-14T23-01-35.520046.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_14T23_01_35.520046", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-14T23-01-35.520046.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-14T23-01-35.520046.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_14T23_01_35.520046", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T23-01-35.520046.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-14T23-01-35.520046.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_14T23_01_35.520046", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-14T23-01-35.520046.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-14T23-01-35.520046.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_14T23_01_35.520046", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-14T23-01-35.520046.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-14T23-01-35.520046.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_14T23_01_35.520046", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-14T23-01-35.520046.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-14T23-01-35.520046.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_14T23_01_35.520046", "path": ["**/details_harness|winogrande|5_2024-01-14T23-01-35.520046.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-14T23-01-35.520046.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_01_14T23_01_35.520046", "path": ["results_2024-01-14T23-01-35.520046.parquet"]}, {"split": "latest", "path": 
["results_2024-01-14T23-01-35.520046.parquet"]}]}]} | 2024-01-14T23:04:13+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of cloudyu/Yi-34Bx3-MoE-90B
Dataset automatically created during the evaluation run of model cloudyu/Yi-34Bx3-MoE-90B on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
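A minimal sketch (the repo id below follows the leaderboard's usual `details_<org>__<model>` naming, and the config and split names are taken from this card's metadata):

```python
from datasets import load_dataset

# Pick any of the 63 task configs; the "latest" split always points at the newest run.
data = load_dataset(
    "open-llm-leaderboard/details_cloudyu__Yi-34Bx3-MoE-90B",
    "harness_winogrande_5",
    split="latest",
)
```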
## Latest results
These are the latest results from run 2024-01-14T23:01:35.520046 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Evaluation run of cloudyu/Yi-34Bx3-MoE-90B\n\n\n\nDataset automatically created during the evaluation run of model cloudyu/Yi-34Bx3-MoE-90B on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-01-14T23:01:35.520046(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of cloudyu/Yi-34Bx3-MoE-90B\n\n\n\nDataset automatically created during the evaluation run of model cloudyu/Yi-34Bx3-MoE-90B on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-01-14T23:01:35.520046(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] |
a8e719bd0bb9c28f5718f98a536b9e412f2d07ae |
# open-english-wordnet-synset-2023
Open English WordNet (2023)
## Dataset Details
### Dataset Description
Open English WordNet is a lexical network of the English language grouping words into synsets and linking them according to relationships such as hypernymy, antonymy and meronymy. It is intended to be used in natural language processing applications and provides deep lexical information about the English language as a graph.
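A hedged sketch of loading the release through the `datasets` library (the split name and record fields are assumptions about the JSONL export, not documented guarantees):

```python
from datasets import load_dataset

# The "default" config maps to open_english_wordnet_2023.jsonl (see the repo metadata).
wordnet = load_dataset("jon-tow/open-english-wordnet-synset-2023", split="train")
print(wordnet[0])  # one synset record; exact field names depend on the export
```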
### Dataset Sources
- **Repository:** https://github.com/globalwordnet/english-wordnet
- **Paper:** John P. McCrae, Alexandre Rademaker, Francis Bond, Ewa Rudnicka and Christiane Fellbaum (2019) [English WordNet 2019 – An Open-Source WordNet for English](https://aclanthology.org/2019.gwc-1.31/). In Proceedings of the 10th Global WordNet Conference – GWC 2019, Wrocław
## Citation
```bibtex
@inproceedings{mccrae-etal-2019-english,
title = "{E}nglish {W}ord{N}et 2019 {--} An Open-Source {W}ord{N}et for {E}nglish",
author = "McCrae, John P. and
Rademaker, Alexandre and
Bond, Francis and
Rudnicka, Ewa and
Fellbaum, Christiane",
editor = "Vossen, Piek and
Fellbaum, Christiane",
booktitle = "Proceedings of the 10th Global Wordnet Conference",
month = jul,
year = "2019",
address = "Wroclaw, Poland",
publisher = "Global Wordnet Association",
url = "https://aclanthology.org/2019.gwc-1.31",
pages = "245--252",
abstract = "We describe the release of a new wordnet for English based on the Princeton WordNet, but now developed under an open-source model. In particular, this version of WordNet, which we call English WordNet 2019, which has been developed by multiple people around the world through GitHub, fixes many errors in previous wordnets for English. We give some details of the changes that have been made in this version and give some perspectives about likely future changes that will be made as this project continues to evolve.",
}
``` | jon-tow/open-english-wordnet-synset-2023 | [
"license:cc-by-4.0",
"region:us"
] | 2024-01-14T23:07:28+00:00 | {"license": "cc-by-4.0", "configs": [{"config_name": "default", "data_files": "open_english_wordnet_2023.jsonl"}]} | 2024-01-15T04:12:09+00:00 | [] | [] | TAGS
#license-cc-by-4.0 #region-us
|
# open-english-wordnet-synset-2023
Open English WordNet (2023)
## Dataset Details
### Dataset Description
Open English WordNet is a lexical network of the English language grouping words into synsets and linking them according to relationships such as hypernymy, antonymy and meronymy. It is intended to be used in natural language processing applications and provides deep lexical information about the English language as a graph.
### Dataset Sources
- Repository: URL
- Paper: John P. McCrae, Alexandre Rademaker, Francis Bond, Ewa Rudnicka and Christiane Fellbaum (2019) English WordNet 2019 – An Open-Source WordNet for English. In Proceedings of the 10th Global WordNet Conference – GWC 2019, Wrocław
| [
"# open-english-wordnet-synset-2023\n\nOpen English WordNet (2023)",
"## Dataset Details",
"### Dataset Description\n\nOpen English WordNet is a lexical network of the English language grouping words into synsets and linking them according to relationships such as hypernymy, antonymy and meronymy. It is intended to be used in natural language processing applications and provides deep lexical information about the English language as a graph.",
"### Dataset Sources\n\n\n\n- Repository: URL\n- Paper: John P. McCrae, Alexandre Rademaker, Francis Bond, Ewa Rudnicka and Christiane Fellbaum (2019) English WordNet 2019 – An Open-Source WordNet for English. In Proceedings of the 10th Global WordNet Conference – GWC 2019, Wrocław"
] | [
"TAGS\n#license-cc-by-4.0 #region-us \n",
"# open-english-wordnet-synset-2023\n\nOpen English WordNet (2023)",
"## Dataset Details",
"### Dataset Description\n\nOpen English WordNet is a lexical network of the English language grouping words into synsets and linking them according to relationships such as hypernymy, antonymy and meronymy. It is intended to be used in natural language processing applications and provides deep lexical information about the English language as a graph.",
"### Dataset Sources\n\n\n\n- Repository: URL\n- Paper: John P. McCrae, Alexandre Rademaker, Francis Bond, Ewa Rudnicka and Christiane Fellbaum (2019) English WordNet 2019 – An Open-Source WordNet for English. In Proceedings of the 10th Global WordNet Conference – GWC 2019, Wrocław"
] |
cb4b3afe726b52f57490629bb43606760a987c36 | Distributed under the Creative Commons-by-sa-4.0 respecting the ShareAlike of the [Spider Dataset](https://yale-lily.github.io/spider).
Code explanations and links for the model's checkpoints and datasets are on Github [mRAT-SQL](https://github.com/C4AI/gap-text2sql)
Here is the [Hugging Face collection](https://huggingface.co/collections/Marchanjo/mrat-sql-65a671743bb0e70b416561f6): you can download the model's checkpoints and datasets from it, but for a full explanation it is better to go to the Github repo [mRAT-SQL](https://github.com/C4AI/gap-text2sql).
# mRAT-SQL-FIT
## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention
Marcelo Archanjo Jose, Fabio Gagliardi Cozman
Long sequences of text are challenging in the context of transformers, due to quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (as techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of table and column names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy results from 0.718 to 0.736 in a validation dataset (Dev). Source code, evaluations, and checkpoints are available at: [mRAT-SQL](https://github.com/C4AI/gap-text2sql).
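As a rough illustration of the schema-pruning idea (a toy sketch under simplifying assumptions, not the authors' implementation; the table and column names are made up):

```python
import re

def prune_schema(question, schema):
    """Keep only tables/columns whose name tokens overlap with the question,
    so that the serialized "question + schema" input fits the 512-token budget.
    Toy illustration of the pruning idea, not the paper's actual code."""
    q_tokens = set(re.findall(r"\w+", question.lower()))
    pruned = {}
    for table, columns in schema.items():
        kept = [col for col in columns if set(col.lower().split("_")) & q_tokens]
        if kept or set(table.lower().split("_")) & q_tokens:
            # keep the table; fall back to all of its columns if only the name matched
            pruned[table] = kept or columns
    return pruned

schema = {"concert": ["concert_id", "theme", "year"],
          "stadium": ["stadium_id", "name", "capacity"]}
print(prune_schema("What is the maximum capacity of each stadium?", schema))
# -> {'stadium': ['stadium_id', 'capacity']}
```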
[Paper published in Springer-Nature - International Journal of Information Technology](https://doi.org/10.1007/s41870-023-01342-3), [here the SharedIt link](https://rdcu.be/dff19) and [here the pre-print in arXiv](https://arxiv.org/abs/2306.14256).
# mRAT-SQL+GAP
## mRAT-SQL+GAP: A Portuguese Text-to-SQL Transformer
Marcelo Archanjo José, Fabio Gagliardi Cozman
The translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with original and translated training datasets together, even if a single target language is desired. This multilingual BART model fine-tuned with a double-size training dataset (English and Portuguese) achieved 83% of the baseline, making inferences for the Portuguese test dataset. This investigation can help other researchers to produce results in Machine Learning in a language different from English. Our multilingual ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: [mRAT-SQL](https://github.com/C4AI/gap-text2sql).
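The finding that training on original and translated data together helps can be sketched with the `datasets` library (illustrative only: the repo ids below are the FIT variants hosted in this collection, and their split and field layout should be checked on each card before use):

```python
from datasets import load_dataset, concatenate_datasets

# Double-size training set: English plus Portuguese questions over the same schemas.
en = load_dataset("Marchanjo/spider-FIT-en-extra-3enr-1enb", split="train")
pt = load_dataset("Marchanjo/spider-FIT-pt", split="train")
train = concatenate_datasets([en, pt])
```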
BRACIS 2021: [paper published in Springer Lecture Notes in Computer Science](https://link.springer.com/chapter/10.1007%2F978-3-030-91699-2_35), [here the pre-print in arXiv](https://arxiv.org/abs/2110.03546).
Based on: RAT-SQL+GAP: [Github](https://github.com/awslabs/gap-text2sql). Paper: [AAAI 2021 paper](https://arxiv.org/abs/2012.10309)
| Marchanjo/spider-FIT-en-extra-3enr-1enb | [
"license:cc-by-sa-4.0",
"arxiv:2306.14256",
"arxiv:2110.03546",
"arxiv:2012.10309",
"region:us"
] | 2024-01-14T23:15:12+00:00 | {"license": "cc-by-sa-4.0"} | 2024-01-16T12:42:35+00:00 | [
"2306.14256",
"2110.03546",
"2012.10309"
] | [] | TAGS
#license-cc-by-sa-4.0 #arxiv-2306.14256 #arxiv-2110.03546 #arxiv-2012.10309 #region-us
| Distributed under the Creative Commons-by-sa-4.0 respecting the ShareAlike of the Spider Dataset.
Code explanations and links for the model's checkpoints and datasets are on Github mRAT-SQL
Here is the Hugging Face collection: you can download the model's checkpoints and datasets from it, but for a full explanation it is better to go to the Github repo mRAT-SQL.
# mRAT-SQL-FIT
## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention
Marcelo Archanjo Jose, Fabio Gagliardi Cozman
Long sequences of text are challenging in the context of transformers, due to quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (as techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of table and column names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy results from 0.718 to 0.736 in a validation dataset (Dev). Source code, evaluations, and checkpoints are available at: mRAT-SQL.
Paper published in Springer-Nature - International Journal of Information Technology, here the SharedIt link and here the pre-print in arXiv.
# mRAT-SQL+GAP
## mRAT-SQL+GAP: A Portuguese Text-to-SQL Transformer
Marcelo Archanjo José, Fabio Gagliardi Cozman
The translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with original and translated training datasets together, even if a single target language is desired. This multilingual BART model fine-tuned with a double-size training dataset (English and Portuguese) achieved 83% of the baseline, making inferences for the Portuguese test dataset. This investigation can help other researchers to produce results in Machine Learning in a language different from English. Our multilingual ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: mRAT-SQL.
BRACIS 2021: paper published in Springer Lecture Notes in Computer Science, here the pre-print in arXiv.
Based on: RAT-SQL+GAP: Github. Paper: AAAI 2021 paper
| [
"# mRAT-SQL-FIT",
"## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention\nMarcelo Archanjo Jose, Fabio Gagliardi Cozman\n\nLong sequences of text are challenging in the context of transformers, due to quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (as techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of tables and columns names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy results from 0.718 to 0.736 in a validation dataset (Dev). Source code, evaluations, and checkpoints are available at: mRAT-SQL.\n\npaper published in Springer-Nature - International Journal of Information Technology, here the SharedIt link. here the pre-print in arXiv.",
"# mRAT-SQL+GAP",
"## mRAT-SQL+GAP:A Portuguese Text-to-SQL Transformer\nMarcelo Archanjo José, Fabio Gagliardi Cozman\n\nThe translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with original and translated training datasets together, even if a single target language is desired. This multilingual BART model fine-tuned with a double-size training dataset (English and Portuguese) achieved 83% of the baseline, making inferences for the Portuguese test dataset. This investigation can help other researchers to produce results in Machine Learning in a language different from English. Our multilingual ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: mRAT-SQL.\n\nBRACIS 2021: paper published in Springer Lecture Notes in Computer Science, here the pre-print in arXiv.\n\nBased on: RAT-SQL+GAP: Github. Paper: AAAI 2021 paper"
] | [
"TAGS\n#license-cc-by-sa-4.0 #arxiv-2306.14256 #arxiv-2110.03546 #arxiv-2012.10309 #region-us \n",
"# mRAT-SQL-FIT",
"## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention\nMarcelo Archanjo Jose, Fabio Gagliardi Cozman\n\nLong sequences of text are challenging in the context of transformers, due to quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (as techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of tables and columns names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy results from 0.718 to 0.736 in a validation dataset (Dev). Source code, evaluations, and checkpoints are available at: mRAT-SQL.\n\npaper published in Springer-Nature - International Journal of Information Technology, here the SharedIt link. here the pre-print in arXiv.",
"# mRAT-SQL+GAP",
"## mRAT-SQL+GAP:A Portuguese Text-to-SQL Transformer\nMarcelo Archanjo José, Fabio Gagliardi Cozman\n\nThe translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with original and translated training datasets together, even if a single target language is desired. This multilingual BART model fine-tuned with a double-size training dataset (English and Portuguese) achieved 83% of the baseline, making inferences for the Portuguese test dataset. This investigation can help other researchers to produce results in Machine Learning in a language different from English. Our multilingual ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: mRAT-SQL.\n\nBRACIS 2021: paper published in Springer Lecture Notes in Computer Science, here the pre-print in arXiv.\n\nBased on: RAT-SQL+GAP: Github. Paper: AAAI 2021 paper"
] |
38760e8b8b09404f15ddfc3b26ab6b8101dc9e1c | Distributed under the Creative Commons-by-sa-4.0 respecting the ShareAlike of the [Spider Dataset](https://yale-lily.github.io/spider).
Code explanations and links for the model's checkpoints and datasets are on Github [mRAT-SQL](https://github.com/C4AI/gap-text2sql)
Here is the [Hugging Face collection](https://huggingface.co/collections/Marchanjo/mrat-sql-65a671743bb0e70b416561f6): you can download the model's checkpoints and datasets from it, but for a full explanation it is better to go to the Github repo [mRAT-SQL](https://github.com/C4AI/gap-text2sql).
# mRAT-SQL-FIT
## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention
Marcelo Archanjo Jose, Fabio Gagliardi Cozman
Long sequences of text are challenging in the context of transformers, due to quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (as techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of table and column names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy results from 0.718 to 0.736 in a validation dataset (Dev). Source code, evaluations, and checkpoints are available at: [mRAT-SQL](https://github.com/C4AI/gap-text2sql).
[Paper published in Springer-Nature - International Journal of Information Technology](https://doi.org/10.1007/s41870-023-01342-3), [here the SharedIt link](https://rdcu.be/dff19) and [here the pre-print in arXiv](https://arxiv.org/abs/2306.14256).
# mRAT-SQL+GAP
## mRAT-SQL+GAP: A Portuguese Text-to-SQL Transformer
Marcelo Archanjo José, Fabio Gagliardi Cozman
The translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with original and translated training datasets together, even if a single target language is desired. This multilingual BART model fine-tuned with a double-size training dataset (English and Portuguese) achieved 83% of the baseline, making inferences for the Portuguese test dataset. This investigation can help other researchers to produce results in Machine Learning in a language different from English. Our multilingual ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: [mRAT-SQL](https://github.com/C4AI/gap-text2sql).
BRACIS 2021: [paper published in Springer Lecture Notes in Computer Science](https://link.springer.com/chapter/10.1007%2F978-3-030-91699-2_35), [here the pre-print in arXiv](https://arxiv.org/abs/2110.03546).
Based on: RAT-SQL+GAP: [Github](https://github.com/awslabs/gap-text2sql). Paper: [AAAI 2021 paper](https://arxiv.org/abs/2012.10309) | Marchanjo/spider-FIT-pt | [
"license:cc-by-sa-4.0",
"arxiv:2306.14256",
"arxiv:2110.03546",
"arxiv:2012.10309",
"region:us"
] | 2024-01-14T23:19:56+00:00 | {"license": "cc-by-sa-4.0"} | 2024-01-16T12:42:01+00:00 | [
"2306.14256",
"2110.03546",
"2012.10309"
] | [] | TAGS
#license-cc-by-sa-4.0 #arxiv-2306.14256 #arxiv-2110.03546 #arxiv-2012.10309 #region-us
| Distributed under the Creative Commons-by-sa-4.0 respecting the ShareAlike of the Spider Dataset.
Code explanations and links for the model's checkpoints and datasets are on Github mRAT-SQL
Here is the Hugging Face collection: you can download the model's checkpoints and datasets from it, but for a full explanation it is better to go to the Github repo mRAT-SQL.
# mRAT-SQL-FIT
## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention
Marcelo Archanjo Jose, Fabio Gagliardi Cozman
Long sequences of text are challenging in the context of transformers, due to quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (as techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of table and column names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy results from 0.718 to 0.736 in a validation dataset (Dev). Source code, evaluations, and checkpoints are available at: mRAT-SQL.
Paper published in Springer-Nature - International Journal of Information Technology, here the SharedIt link and here the pre-print in arXiv.
# mRAT-SQL+GAP
## mRAT-SQL+GAP: A Portuguese Text-to-SQL Transformer
Marcelo Archanjo José, Fabio Gagliardi Cozman
The translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with original and translated training datasets together, even if a single target language is desired. This multilingual BART model fine-tuned with a double-size training dataset (English and Portuguese) achieved 83% of the baseline, making inferences for the Portuguese test dataset. This investigation can help other researchers to produce results in Machine Learning in a language different from English. Our multilingual ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: mRAT-SQL.
BRACIS 2021: paper published in Springer Lecture Notes in Computer Science, here the pre-print in arXiv.
Based on: RAT-SQL+GAP: Github. Paper: AAAI 2021 paper | [
"# mRAT-SQL-FIT",
"## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention\nMarcelo Archanjo Jose, Fabio Gagliardi Cozman\n\nLong sequences of text are challenging in the context of transformers, due to quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (as techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of tables and columns names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy results from 0.718 to 0.736 in a validation dataset (Dev). Source code, evaluations, and checkpoints are available at: mRAT-SQL.\n\npaper published in Springer-Nature - International Journal of Information Technology, here the SharedIt link. here the pre-print in arXiv.",
"# mRAT-SQL+GAP",
"## mRAT-SQL+GAP:A Portuguese Text-to-SQL Transformer\nMarcelo Archanjo José, Fabio Gagliardi Cozman\n\nThe translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with original and translated training datasets together, even if a single target language is desired. This multilingual BART model fine-tuned with a double-size training dataset (English and Portuguese) achieved 83% of the baseline, making inferences for the Portuguese test dataset. This investigation can help other researchers to produce results in Machine Learning in a language different from English. Our multilingual ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: mRAT-SQL.\n\nBRACIS 2021: paper published in Springer Lecture Notes in Computer Science, here the pre-print in arXiv.\n\nBased on: RAT-SQL+GAP: Github. Paper: AAAI 2021 paper"
] | [
"TAGS\n#license-cc-by-sa-4.0 #arxiv-2306.14256 #arxiv-2110.03546 #arxiv-2012.10309 #region-us \n",
"# mRAT-SQL-FIT",
"## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention\nMarcelo Archanjo Jose, Fabio Gagliardi Cozman\n\nLong sequences of text are challenging in the context of transformers, due to quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (as techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of tables and columns names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy results from 0.718 to 0.736 in a validation dataset (Dev). Source code, evaluations, and checkpoints are available at: mRAT-SQL.\n\npaper published in Springer-Nature - International Journal of Information Technology, here the SharedIt link. here the pre-print in arXiv.",
"# mRAT-SQL+GAP",
"## mRAT-SQL+GAP:A Portuguese Text-to-SQL Transformer\nMarcelo Archanjo José, Fabio Gagliardi Cozman\n\nThe translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with original and translated training datasets together, even if a single target language is desired. This multilingual BART model fine-tuned with a double-size training dataset (English and Portuguese) achieved 83% of the baseline, making inferences for the Portuguese test dataset. This investigation can help other researchers to produce results in Machine Learning in a language different from English. Our multilingual ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: mRAT-SQL.\n\nBRACIS 2021: paper published in Springer Lecture Notes in Computer Science, here the pre-print in arXiv.\n\nBased on: RAT-SQL+GAP: Github. Paper: AAAI 2021 paper"
] |
649d7aa3b22a008a23ee9eada18bfcb9696773f7 | Distributed under the Creative Commons-by-sa-4.0 respecting the ShareAlike of the [Spider Dataset](https://yale-lily.github.io/spider).
Code explanations and links for the model's checkpoints and datasets are on Github [mRAT-SQL](https://github.com/C4AI/gap-text2sql)
Here is the [Hugging Face collection](https://huggingface.co/collections/Marchanjo/mrat-sql-65a671743bb0e70b416561f6): you can download the model's checkpoints and datasets from it, but for a full explanation it is better to go to the Github repo [mRAT-SQL](https://github.com/C4AI/gap-text2sql).
# mRAT-SQL-FIT
## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention
Marcelo Archanjo Jose, Fabio Gagliardi Cozman
Long sequences of text are challenging in the context of transformers, due to quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (as techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of table and column names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy results from 0.718 to 0.736 in a validation dataset (Dev). Source code, evaluations, and checkpoints are available at: [mRAT-SQL](https://github.com/C4AI/gap-text2sql).
[Paper published in Springer-Nature - International Journal of Information Technology](https://doi.org/10.1007/s41870-023-01342-3), [here the SharedIt link](https://rdcu.be/dff19) and [here the pre-print in arXiv](https://arxiv.org/abs/2306.14256).
# mRAT-SQL+GAP
## mRAT-SQL+GAP: A Portuguese Text-to-SQL Transformer
Marcelo Archanjo José, Fabio Gagliardi Cozman
The translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with original and translated training datasets together, even if a single target language is desired. This multilingual BART model fine-tuned with a double-size training dataset (English and Portuguese) achieved 83% of the baseline, making inferences for the Portuguese test dataset. This investigation can help other researchers to produce results in Machine Learning in a language different from English. Our multilingual ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: [mRAT-SQL](https://github.com/C4AI/gap-text2sql).
BRACIS 2021: [paper published in Springer Lecture Notes in Computer Science](https://link.springer.com/chapter/10.1007%2F978-3-030-91699-2_35), [here the pre-print in arXiv](https://arxiv.org/abs/2110.03546).
Based on: RAT-SQL+GAP: [Github](https://github.com/awslabs/gap-text2sql). Paper: [AAAI 2021 paper](https://arxiv.org/abs/2012.10309) | Marchanjo/spider-FIT-es | [
"license:cc-by-sa-4.0",
"arxiv:2306.14256",
"arxiv:2110.03546",
"arxiv:2012.10309",
"region:us"
] | 2024-01-14T23:23:57+00:00 | {"license": "cc-by-sa-4.0"} | 2024-01-16T12:40:08+00:00 | [
"2306.14256",
"2110.03546",
"2012.10309"
] | [] | TAGS
#license-cc-by-sa-4.0 #arxiv-2306.14256 #arxiv-2110.03546 #arxiv-2012.10309 #region-us
| Distributed under the Creative Commons-by-sa-4.0 respecting the ShareAlike of the Spider Dataset.
Code explanations and links for the model's checkpoints and datasets are on Github mRAT-SQL
Here is the Hugging Face collection: you can download the model's checkpoints and datasets from it, but for a full explanation it is better to go to the Github repo mRAT-SQL.
# mRAT-SQL-FIT
## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention
Marcelo Archanjo Jose, Fabio Gagliardi Cozman
Long sequences of text are challenging in the context of transformers, due to quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (as techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of table and column names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy results from 0.718 to 0.736 in a validation dataset (Dev). Source code, evaluations, and checkpoints are available at: mRAT-SQL.
Paper published in Springer-Nature - International Journal of Information Technology, here the SharedIt link and here the pre-print in arXiv.
# mRAT-SQL+GAP
## mRAT-SQL+GAP: A Portuguese Text-to-SQL Transformer
Marcelo Archanjo José, Fabio Gagliardi Cozman
The translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with original and translated training datasets together, even if a single target language is desired. This multilingual BART model fine-tuned with a double-size training dataset (English and Portuguese) achieved 83% of the baseline, making inferences for the Portuguese test dataset. This investigation can help other researchers to produce results in Machine Learning in a language different from English. Our multilingual ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: mRAT-SQL.
BRACIS 2021: paper published in Springer Lecture Notes in Computer Science, here the pre-print in arXiv.
Based on: RAT-SQL+GAP: Github. Paper: AAAI 2021 paper | [
"# mRAT-SQL-FIT",
"## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention\nMarcelo Archanjo Jose, Fabio Gagliardi Cozman\n\nLong sequences of text are challenging in the context of transformers, due to quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (as techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of tables and columns names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy results from 0.718 to 0.736 in a validation dataset (Dev). Source code, evaluations, and checkpoints are available at: mRAT-SQL.\n\npaper published in Springer-Nature - International Journal of Information Technology, here the SharedIt link. here the pre-print in arXiv.",
"# mRAT-SQL+GAP",
"## mRAT-SQL+GAP:A Portuguese Text-to-SQL Transformer\nMarcelo Archanjo José, Fabio Gagliardi Cozman\n\nThe translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with original and translated training datasets together, even if a single target language is desired. This multilingual BART model fine-tuned with a double-size training dataset (English and Portuguese) achieved 83% of the baseline, making inferences for the Portuguese test dataset. This investigation can help other researchers to produce results in Machine Learning in a language different from English. Our multilingual ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: mRAT-SQL.\n\nBRACIS 2021: paper published in Springer Lecture Notes in Computer Science, here the pre-print in arXiv.\n\nBased on: RAT-SQL+GAP: Github. Paper: AAAI 2021 paper"
] | [
"TAGS\n#license-cc-by-sa-4.0 #arxiv-2306.14256 #arxiv-2110.03546 #arxiv-2012.10309 #region-us \n",
"# mRAT-SQL-FIT",
"## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention\nMarcelo Archanjo Jose, Fabio Gagliardi Cozman\n\nLong sequences of text are challenging in the context of transformers, due to quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (as techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of tables and columns names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy results from 0.718 to 0.736 in a validation dataset (Dev). Source code, evaluations, and checkpoints are available at: mRAT-SQL.\n\npaper published in Springer-Nature - International Journal of Information Technology, here the SharedIt link. here the pre-print in arXiv.",
"# mRAT-SQL+GAP",
"## mRAT-SQL+GAP:A Portuguese Text-to-SQL Transformer\nMarcelo Archanjo José, Fabio Gagliardi Cozman\n\nThe translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with original and translated training datasets together, even if a single target language is desired. This multilingual BART model fine-tuned with a double-size training dataset (English and Portuguese) achieved 83% of the baseline, making inferences for the Portuguese test dataset. This investigation can help other researchers to produce results in Machine Learning in a language different from English. Our multilingual ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: mRAT-SQL.\n\nBRACIS 2021: paper published in Springer Lecture Notes in Computer Science, here the pre-print in arXiv.\n\nBased on: RAT-SQL+GAP: Github. Paper: AAAI 2021 paper"
] |
63ea2584bc2e0ad19e962927a4c866399b00263a | Distributed under the Creative Commons-by-sa-4.0 respecting the ShareAlike of the [Spider Dataset](https://yale-lily.github.io/spider).
Code explanations and links for the model's checkpoints and datasets are on Github [mRAT-SQL](https://github.com/C4AI/gap-text2sql)
Here is the [Hugging Face collection](https://huggingface.co/collections/Marchanjo/mrat-sql-65a671743bb0e70b416561f6): you can download the model's checkpoints and datasets from it, but for a full explanation it is better to go to the Github repo [mRAT-SQL](https://github.com/C4AI/gap-text2sql).
# mRAT-SQL-FIT
## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention
Marcelo Archanjo Jose, Fabio Gagliardi Cozman
Long sequences of text are challenging in the context of transformers, due to quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (as techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of table and column names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy results from 0.718 to 0.736 in a validation dataset (Dev). Source code, evaluations, and checkpoints are available at: [mRAT-SQL](https://github.com/C4AI/gap-text2sql).
[Paper published in Springer-Nature - International Journal of Information Technology](https://doi.org/10.1007/s41870-023-01342-3), [here the SharedIt link](https://rdcu.be/dff19) and [here the pre-print in arXiv](https://arxiv.org/abs/2306.14256).
# mRAT-SQL+GAP
## mRAT-SQL+GAP: A Portuguese Text-to-SQL Transformer
Marcelo Archanjo José, Fabio Gagliardi Cozman
The translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with original and translated training datasets together, even if a single target language is desired. This multilingual BART model fine-tuned with a double-size training dataset (English and Portuguese) achieved 83% of the baseline, making inferences for the Portuguese test dataset. This investigation can help other researchers to produce results in Machine Learning in a language different from English. Our multilingual ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: [mRAT-SQL](https://github.com/C4AI/gap-text2sql).
BRACIS 2021: [paper published in Springer Lecture Notes in Computer Science](https://link.springer.com/chapter/10.1007%2F978-3-030-91699-2_35), [here the pre-print in arXiv](https://arxiv.org/abs/2110.03546).
Based on: RAT-SQL+GAP: [Github](https://github.com/awslabs/gap-text2sql). Paper: [AAAI 2021 paper](https://arxiv.org/abs/2012.10309) | Marchanjo/spider-FIT-fr | [
"license:cc-by-sa-4.0",
"arxiv:2306.14256",
"arxiv:2110.03546",
"arxiv:2012.10309",
"region:us"
] | 2024-01-14T23:26:40+00:00 | {"license": "cc-by-sa-4.0"} | 2024-01-16T12:40:24+00:00 | [
"2306.14256",
"2110.03546",
"2012.10309"
] | [] | TAGS
#license-cc-by-sa-4.0 #arxiv-2306.14256 #arxiv-2110.03546 #arxiv-2012.10309 #region-us
| Distributed under Creative Commons CC-BY-SA-4.0, respecting the ShareAlike terms of the Spider Dataset.
Code explanations and links for the model's checkpoints and datasets are on Github: mRAT-SQL.
Here is the Hugging Face collection; you can download the model's checkpoints and datasets there, but for a full understanding it is better to go to the Github repository mRAT-SQL.
# mRAT-SQL-FIT
## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention
Marcelo Archanjo Jose, Fabio Gagliardi Cozman
Long sequences of text are challenging in the context of transformers due to the quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (as techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of table and column names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy results from 0.718 to 0.736 on a validation dataset (Dev). Source code, evaluations, and checkpoints are available at: mRAT-SQL.
Paper published in Springer-Nature - International Journal of Information Technology; here is the SharedIt link, and here is the pre-print on arXiv.
# mRAT-SQL+GAP
## mRAT-SQL+GAP: A Portuguese Text-to-SQL Transformer
Marcelo Archanjo José, Fabio Gagliardi Cozman
The translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with original and translated training datasets together, even if a single target language is desired. This multilingual BART model fine-tuned with a double-size training dataset (English and Portuguese) achieved 83% of the baseline when making inferences on the Portuguese test dataset. This investigation can help other researchers produce Machine Learning results in languages other than English. Our multilingual-ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: mRAT-SQL.
BRACIS 2021: paper published in Springer Lecture Notes in Computer Science; here is the pre-print on arXiv.
Based on: RAT-SQL+GAP: Github. Paper: AAAI 2021 paper | [
"# mRAT-SQL-FIT",
"## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention\nMarcelo Archanjo Jose, Fabio Gagliardi Cozman\n\nLong sequences of text are challenging in the context of transformers, due to quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (as techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of tables and columns names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy results from 0.718 to 0.736 in a validation dataset (Dev). Source code, evaluations, and checkpoints are available at: mRAT-SQL.\n\npaper published in Springer-Nature - International Journal of Information Technology, here the SharedIt link. here the pre-print in arXiv.",
"# mRAT-SQL+GAP",
"## mRAT-SQL+GAP:A Portuguese Text-to-SQL Transformer\nMarcelo Archanjo José, Fabio Gagliardi Cozman\n\nThe translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with original and translated training datasets together, even if a single target language is desired. This multilingual BART model fine-tuned with a double-size training dataset (English and Portuguese) achieved 83% of the baseline, making inferences for the Portuguese test dataset. This investigation can help other researchers to produce results in Machine Learning in a language different from English. Our multilingual ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: mRAT-SQL.\n\nBRACIS 2021: paper published in Springer Lecture Notes in Computer Science, here the pre-print in arXiv.\n\nBased on: RAT-SQL+GAP: Github. Paper: AAAI 2021 paper"
] | [
"TAGS\n#license-cc-by-sa-4.0 #arxiv-2306.14256 #arxiv-2110.03546 #arxiv-2012.10309 #region-us \n",
"# mRAT-SQL-FIT",
"## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention\nMarcelo Archanjo Jose, Fabio Gagliardi Cozman\n\nLong sequences of text are challenging in the context of transformers, due to quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (as techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of tables and columns names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy results from 0.718 to 0.736 in a validation dataset (Dev). Source code, evaluations, and checkpoints are available at: mRAT-SQL.\n\npaper published in Springer-Nature - International Journal of Information Technology, here the SharedIt link. here the pre-print in arXiv.",
"# mRAT-SQL+GAP",
"## mRAT-SQL+GAP:A Portuguese Text-to-SQL Transformer\nMarcelo Archanjo José, Fabio Gagliardi Cozman\n\nThe translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with original and translated training datasets together, even if a single target language is desired. This multilingual BART model fine-tuned with a double-size training dataset (English and Portuguese) achieved 83% of the baseline, making inferences for the Portuguese test dataset. This investigation can help other researchers to produce results in Machine Learning in a language different from English. Our multilingual ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: mRAT-SQL.\n\nBRACIS 2021: paper published in Springer Lecture Notes in Computer Science, here the pre-print in arXiv.\n\nBased on: RAT-SQL+GAP: Github. Paper: AAAI 2021 paper"
] |
fd425d27b2792e7766b856a248584475fafceb96 | Distributed under Creative Commons CC-BY-SA-4.0, respecting the ShareAlike terms of the [Spider Dataset](https://yale-lily.github.io/spider).
Code explanations and links for the model's checkpoints and datasets are on Github: [mRAT-SQL](https://github.com/C4AI/gap-text2sql).
Here is the [Hugging Face collection](https://huggingface.co/collections/Marchanjo/mrat-sql-65a671743bb0e70b416561f6); you can download the model's checkpoints and datasets there, but for a full understanding it is better to go to the Github repository [mRAT-SQL](https://github.com/C4AI/gap-text2sql).
# mRAT-SQL-FIT
## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention
Marcelo Archanjo Jose, Fabio Gagliardi Cozman
Long sequences of text are challenging in the context of transformers due to the quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (as techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of table and column names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy results from 0.718 to 0.736 on a validation dataset (Dev). Source code, evaluations, and checkpoints are available at: [mRAT-SQL](https://github.com/C4AI/gap-text2sql).
[Paper published in Springer-Nature - International Journal of Information Technology](https://doi.org/10.1007/s41870-023-01342-3); [here is the SharedIt link](https://rdcu.be/dff19) and [here is the pre-print on arXiv](https://arxiv.org/abs/2306.14256).
# mRAT-SQL+GAP
## mRAT-SQL+GAP: A Portuguese Text-to-SQL Transformer
Marcelo Archanjo José, Fabio Gagliardi Cozman
The translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with original and translated training datasets together, even if a single target language is desired. This multilingual BART model fine-tuned with a double-size training dataset (English and Portuguese) achieved 83% of the baseline when making inferences on the Portuguese test dataset. This investigation can help other researchers produce Machine Learning results in languages other than English. Our multilingual-ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: [mRAT-SQL](https://github.com/C4AI/gap-text2sql).
BRACIS 2021: [paper published in Springer Lecture Notes in Computer Science](https://link.springer.com/chapter/10.1007%2F978-3-030-91699-2_35); [here is the pre-print on arXiv](https://arxiv.org/abs/2110.03546).
Based on: RAT-SQL+GAP: [Github](https://github.com/awslabs/gap-text2sql). Paper: [AAAI 2021 paper](https://arxiv.org/abs/2012.10309) | Marchanjo/spider-FIT-en-pt-es-fr | [
"license:cc-by-sa-4.0",
"arxiv:2306.14256",
"arxiv:2110.03546",
"arxiv:2012.10309",
"region:us"
] | 2024-01-14T23:29:44+00:00 | {"license": "cc-by-sa-4.0"} | 2024-01-16T12:40:40+00:00 | [
"2306.14256",
"2110.03546",
"2012.10309"
] | [] | TAGS
#license-cc-by-sa-4.0 #arxiv-2306.14256 #arxiv-2110.03546 #arxiv-2012.10309 #region-us
| Distributed under Creative Commons CC-BY-SA-4.0, respecting the ShareAlike terms of the Spider Dataset.
Code explanations and links for the model's checkpoints and datasets are on Github: mRAT-SQL.
Here is the Hugging Face collection; you can download the model's checkpoints and datasets there, but for a full understanding it is better to go to the Github repository mRAT-SQL.
# mRAT-SQL-FIT
## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention
Marcelo Archanjo Jose, Fabio Gagliardi Cozman
Long sequences of text are challenging in the context of transformers due to the quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (as techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of table and column names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy results from 0.718 to 0.736 on a validation dataset (Dev). Source code, evaluations, and checkpoints are available at: mRAT-SQL.
Paper published in Springer-Nature - International Journal of Information Technology; here is the SharedIt link, and here is the pre-print on arXiv.
# mRAT-SQL+GAP
## mRAT-SQL+GAP: A Portuguese Text-to-SQL Transformer
Marcelo Archanjo José, Fabio Gagliardi Cozman
The translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with original and translated training datasets together, even if a single target language is desired. This multilingual BART model fine-tuned with a double-size training dataset (English and Portuguese) achieved 83% of the baseline when making inferences on the Portuguese test dataset. This investigation can help other researchers produce Machine Learning results in languages other than English. Our multilingual-ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: mRAT-SQL.
BRACIS 2021: paper published in Springer Lecture Notes in Computer Science; here is the pre-print on arXiv.
Based on: RAT-SQL+GAP: Github. Paper: AAAI 2021 paper | [
"# mRAT-SQL-FIT",
"## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention\nMarcelo Archanjo Jose, Fabio Gagliardi Cozman\n\nLong sequences of text are challenging in the context of transformers, due to quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (as techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of tables and columns names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy results from 0.718 to 0.736 in a validation dataset (Dev). Source code, evaluations, and checkpoints are available at: mRAT-SQL.\n\npaper published in Springer-Nature - International Journal of Information Technology, here the SharedIt link. here the pre-print in arXiv.",
"# mRAT-SQL+GAP",
"## mRAT-SQL+GAP:A Portuguese Text-to-SQL Transformer\nMarcelo Archanjo José, Fabio Gagliardi Cozman\n\nThe translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with original and translated training datasets together, even if a single target language is desired. This multilingual BART model fine-tuned with a double-size training dataset (English and Portuguese) achieved 83% of the baseline, making inferences for the Portuguese test dataset. This investigation can help other researchers to produce results in Machine Learning in a language different from English. Our multilingual ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: mRAT-SQL.\n\nBRACIS 2021: paper published in Springer Lecture Notes in Computer Science, here the pre-print in arXiv.\n\nBased on: RAT-SQL+GAP: Github. Paper: AAAI 2021 paper"
] | [
"TAGS\n#license-cc-by-sa-4.0 #arxiv-2306.14256 #arxiv-2110.03546 #arxiv-2012.10309 #region-us \n",
"# mRAT-SQL-FIT",
"## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention\nMarcelo Archanjo Jose, Fabio Gagliardi Cozman\n\nLong sequences of text are challenging in the context of transformers, due to quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (as techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of tables and columns names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy results from 0.718 to 0.736 in a validation dataset (Dev). Source code, evaluations, and checkpoints are available at: mRAT-SQL.\n\npaper published in Springer-Nature - International Journal of Information Technology, here the SharedIt link. here the pre-print in arXiv.",
"# mRAT-SQL+GAP",
"## mRAT-SQL+GAP:A Portuguese Text-to-SQL Transformer\nMarcelo Archanjo José, Fabio Gagliardi Cozman\n\nThe translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with original and translated training datasets together, even if a single target language is desired. This multilingual BART model fine-tuned with a double-size training dataset (English and Portuguese) achieved 83% of the baseline, making inferences for the Portuguese test dataset. This investigation can help other researchers to produce results in Machine Learning in a language different from English. Our multilingual ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: mRAT-SQL.\n\nBRACIS 2021: paper published in Springer Lecture Notes in Computer Science, here the pre-print in arXiv.\n\nBased on: RAT-SQL+GAP: Github. Paper: AAAI 2021 paper"
] |
0bb43ab6c704d5468568a112933e496c1a4ef8cc | Distributed under Creative Commons CC-BY-SA-4.0, respecting the ShareAlike terms of the [Spider Dataset](https://yale-lily.github.io/spider).
Code explanations and links for the model's checkpoints and datasets are on Github: [mRAT-SQL](https://github.com/C4AI/gap-text2sql).
Here is the [Hugging Face collection](https://huggingface.co/collections/Marchanjo/mrat-sql-65a671743bb0e70b416561f6); you can download the model's checkpoints and datasets there, but for a full understanding it is better to go to the Github repository [mRAT-SQL](https://github.com/C4AI/gap-text2sql).
# mRAT-SQL-FIT
## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention
Marcelo Archanjo Jose, Fabio Gagliardi Cozman
Long sequences of text are challenging in the context of transformers due to the quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (as techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of table and column names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy results from 0.718 to 0.736 on a validation dataset (Dev). Source code, evaluations, and checkpoints are available at: [mRAT-SQL](https://github.com/C4AI/gap-text2sql).
[Paper published in Springer-Nature - International Journal of Information Technology](https://doi.org/10.1007/s41870-023-01342-3); [here is the SharedIt link](https://rdcu.be/dff19) and [here is the pre-print on arXiv](https://arxiv.org/abs/2306.14256).
# mRAT-SQL+GAP
## mRAT-SQL+GAP: A Portuguese Text-to-SQL Transformer
Marcelo Archanjo José, Fabio Gagliardi Cozman
The translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with original and translated training datasets together, even if a single target language is desired. This multilingual BART model fine-tuned with a double-size training dataset (English and Portuguese) achieved 83% of the baseline when making inferences on the Portuguese test dataset. This investigation can help other researchers produce Machine Learning results in languages other than English. Our multilingual-ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: [mRAT-SQL](https://github.com/C4AI/gap-text2sql).
BRACIS 2021: [paper published in Springer Lecture Notes in Computer Science](https://link.springer.com/chapter/10.1007%2F978-3-030-91699-2_35); [here is the pre-print on arXiv](https://arxiv.org/abs/2110.03546).
Based on: RAT-SQL+GAP: [Github](https://github.com/awslabs/gap-text2sql). Paper: [AAAI 2021 paper](https://arxiv.org/abs/2012.10309) | Marchanjo/spider-FIT-en-pt-es-fr-extra-3enr-3ptr-3esr-3frr | [
"license:cc-by-sa-4.0",
"arxiv:2306.14256",
"arxiv:2110.03546",
"arxiv:2012.10309",
"region:us"
] | 2024-01-14T23:33:15+00:00 | {"license": "cc-by-sa-4.0"} | 2024-01-16T12:41:16+00:00 | [
"2306.14256",
"2110.03546",
"2012.10309"
] | [] | TAGS
#license-cc-by-sa-4.0 #arxiv-2306.14256 #arxiv-2110.03546 #arxiv-2012.10309 #region-us
| Distributed under Creative Commons CC-BY-SA-4.0, respecting the ShareAlike terms of the Spider Dataset.
Code explanations and links for the model's checkpoints and datasets are on Github: mRAT-SQL.
Here is the Hugging Face collection; you can download the model's checkpoints and datasets there, but for a full understanding it is better to go to the Github repository mRAT-SQL.
# mRAT-SQL-FIT
## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention
Marcelo Archanjo Jose, Fabio Gagliardi Cozman
Long sequences of text are challenging in the context of transformers due to the quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (as techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of table and column names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy results from 0.718 to 0.736 on a validation dataset (Dev). Source code, evaluations, and checkpoints are available at: mRAT-SQL.
Paper published in Springer-Nature - International Journal of Information Technology; here is the SharedIt link, and here is the pre-print on arXiv.
# mRAT-SQL+GAP
## mRAT-SQL+GAP: A Portuguese Text-to-SQL Transformer
Marcelo Archanjo José, Fabio Gagliardi Cozman
The translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with original and translated training datasets together, even if a single target language is desired. This multilingual BART model fine-tuned with a double-size training dataset (English and Portuguese) achieved 83% of the baseline when making inferences on the Portuguese test dataset. This investigation can help other researchers produce Machine Learning results in languages other than English. Our multilingual-ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: mRAT-SQL.
BRACIS 2021: paper published in Springer Lecture Notes in Computer Science; here is the pre-print on arXiv.
Based on: RAT-SQL+GAP: Github. Paper: AAAI 2021 paper | [
"# mRAT-SQL-FIT",
"## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention\nMarcelo Archanjo Jose, Fabio Gagliardi Cozman\n\nLong sequences of text are challenging in the context of transformers, due to quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (as techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of tables and columns names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy results from 0.718 to 0.736 in a validation dataset (Dev). Source code, evaluations, and checkpoints are available at: mRAT-SQL.\n\npaper published in Springer-Nature - International Journal of Information Technology, here the SharedIt link. here the pre-print in arXiv.",
"# mRAT-SQL+GAP",
"## mRAT-SQL+GAP:A Portuguese Text-to-SQL Transformer\nMarcelo Archanjo José, Fabio Gagliardi Cozman\n\nThe translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with original and translated training datasets together, even if a single target language is desired. This multilingual BART model fine-tuned with a double-size training dataset (English and Portuguese) achieved 83% of the baseline, making inferences for the Portuguese test dataset. This investigation can help other researchers to produce results in Machine Learning in a language different from English. Our multilingual ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: mRAT-SQL.\n\nBRACIS 2021: paper published in Springer Lecture Notes in Computer Science, here the pre-print in arXiv.\n\nBased on: RAT-SQL+GAP: Github. Paper: AAAI 2021 paper"
] | [
"TAGS\n#license-cc-by-sa-4.0 #arxiv-2306.14256 #arxiv-2110.03546 #arxiv-2012.10309 #region-us \n",
"# mRAT-SQL-FIT",
"## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention\nMarcelo Archanjo Jose, Fabio Gagliardi Cozman\n\nLong sequences of text are challenging in the context of transformers, due to quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (as techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of tables and columns names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy results from 0.718 to 0.736 in a validation dataset (Dev). Source code, evaluations, and checkpoints are available at: mRAT-SQL.\n\npaper published in Springer-Nature - International Journal of Information Technology, here the SharedIt link. here the pre-print in arXiv.",
"# mRAT-SQL+GAP",
"## mRAT-SQL+GAP:A Portuguese Text-to-SQL Transformer\nMarcelo Archanjo José, Fabio Gagliardi Cozman\n\nThe translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with original and translated training datasets together, even if a single target language is desired. This multilingual BART model fine-tuned with a double-size training dataset (English and Portuguese) achieved 83% of the baseline, making inferences for the Portuguese test dataset. This investigation can help other researchers to produce results in Machine Learning in a language different from English. Our multilingual ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: mRAT-SQL.\n\nBRACIS 2021: paper published in Springer Lecture Notes in Computer Science, here the pre-print in arXiv.\n\nBased on: RAT-SQL+GAP: Github. Paper: AAAI 2021 paper"
] |
4296e705777ea41a4fb2fb943458ab97ee7cc324 | Distributed under Creative Commons CC-BY-SA-4.0, respecting the ShareAlike terms of the [Spider Dataset](https://yale-lily.github.io/spider).
Code explanations and links for the model's checkpoints and datasets are on Github: [mRAT-SQL](https://github.com/C4AI/gap-text2sql).
Here is the [Hugging Face collection](https://huggingface.co/collections/Marchanjo/mrat-sql-65a671743bb0e70b416561f6); you can download the model's checkpoints and datasets there, but for a full understanding it is better to go to the Github repository [mRAT-SQL](https://github.com/C4AI/gap-text2sql).
# mRAT-SQL-FIT
## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention
Marcelo Archanjo Jose, Fabio Gagliardi Cozman
Long sequences of text are challenging in the context of transformers due to the quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (as techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of table and column names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy results from 0.718 to 0.736 on a validation dataset (Dev). Source code, evaluations, and checkpoints are available at: [mRAT-SQL](https://github.com/C4AI/gap-text2sql).
[Paper published in Springer-Nature - International Journal of Information Technology](https://doi.org/10.1007/s41870-023-01342-3); [here is the SharedIt link](https://rdcu.be/dff19) and [here is the pre-print on arXiv](https://arxiv.org/abs/2306.14256).
# mRAT-SQL+GAP
## mRAT-SQL+GAP: A Portuguese Text-to-SQL Transformer
Marcelo Archanjo José, Fabio Gagliardi Cozman
The translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with original and translated training datasets together, even if a single target language is desired. This multilingual BART model fine-tuned with a double-size training dataset (English and Portuguese) achieved 83% of the baseline when making inferences on the Portuguese test dataset. This investigation can help other researchers produce Machine Learning results in languages other than English. Our multilingual-ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: [mRAT-SQL](https://github.com/C4AI/gap-text2sql).
BRACIS 2021: [paper published in Springer Lecture Notes in Computer Science](https://link.springer.com/chapter/10.1007%2F978-3-030-91699-2_35); [here is the pre-print on arXiv](https://arxiv.org/abs/2110.03546).
Based on: RAT-SQL+GAP: [Github](https://github.com/awslabs/gap-text2sql). Paper: [AAAI 2021 paper](https://arxiv.org/abs/2012.10309) | Marchanjo/spider-FIT-en-pt-es-fr-enr-enb | [
"license:cc-by-sa-4.0",
"arxiv:2306.14256",
"arxiv:2110.03546",
"arxiv:2012.10309",
"region:us"
] | 2024-01-14T23:37:44+00:00 | {"license": "cc-by-sa-4.0"} | 2024-01-16T12:41:32+00:00 | [
"2306.14256",
"2110.03546",
"2012.10309"
] | [] | TAGS
#license-cc-by-sa-4.0 #arxiv-2306.14256 #arxiv-2110.03546 #arxiv-2012.10309 #region-us
| Distributed under Creative Commons CC-BY-SA-4.0, respecting the ShareAlike terms of the Spider Dataset.
Code explanations and links for the model's checkpoints and datasets are on Github: mRAT-SQL.
Here is the Hugging Face collection; you can download the model's checkpoints and datasets there, but for a full understanding it is better to go to the Github repository mRAT-SQL.
# mRAT-SQL-FIT
## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention
Marcelo Archanjo Jose, Fabio Gagliardi Cozman
Long sequences of text are challenging in the context of transformers due to the quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (as techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of table and column names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy results from 0.718 to 0.736 on a validation dataset (Dev). Source code, evaluations, and checkpoints are available at: mRAT-SQL.
Paper published in Springer-Nature - International Journal of Information Technology; here is the SharedIt link, and here is the pre-print on arXiv.
# mRAT-SQL+GAP
## mRAT-SQL+GAP: A Portuguese Text-to-SQL Transformer
Marcelo Archanjo José, Fabio Gagliardi Cozman
The translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with original and translated training datasets together, even if a single target language is desired. This multilingual BART model fine-tuned with a double-size training dataset (English and Portuguese) achieved 83% of the baseline when making inferences on the Portuguese test dataset. This investigation can help other researchers produce Machine Learning results in languages other than English. Our multilingual-ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: mRAT-SQL.
BRACIS 2021: paper published in Springer Lecture Notes in Computer Science; here is the pre-print on arXiv.
Based on: RAT-SQL+GAP: Github. Paper: AAAI 2021 paper | [
"# mRAT-SQL-FIT",
"## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention\nMarcelo Archanjo Jose, Fabio Gagliardi Cozman\n\nLong sequences of text are challenging in the context of transformers, due to quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (as techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of tables and columns names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy results from 0.718 to 0.736 in a validation dataset (Dev). Source code, evaluations, and checkpoints are available at: mRAT-SQL.\n\npaper published in Springer-Nature - International Journal of Information Technology, here the SharedIt link. here the pre-print in arXiv.",
"# mRAT-SQL+GAP",
"## mRAT-SQL+GAP:A Portuguese Text-to-SQL Transformer\nMarcelo Archanjo José, Fabio Gagliardi Cozman\n\nThe translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with original and translated training datasets together, even if a single target language is desired. This multilingual BART model fine-tuned with a double-size training dataset (English and Portuguese) achieved 83% of the baseline, making inferences for the Portuguese test dataset. This investigation can help other researchers to produce results in Machine Learning in a language different from English. Our multilingual ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: mRAT-SQL.\n\nBRACIS 2021: paper published in Springer Lecture Notes in Computer Science, here the pre-print in arXiv.\n\nBased on: RAT-SQL+GAP: Github. Paper: AAAI 2021 paper"
] | [
"TAGS\n#license-cc-by-sa-4.0 #arxiv-2306.14256 #arxiv-2110.03546 #arxiv-2012.10309 #region-us \n",
"# mRAT-SQL-FIT",
"## A Multilingual Translator to SQL with Database Schema Pruning to Improve Self-Attention\nMarcelo Archanjo Jose, Fabio Gagliardi Cozman\n\nLong sequences of text are challenging in the context of transformers, due to quadratic memory increase in the self-attention mechanism. As this issue directly affects the translation from natural language to SQL queries (as techniques usually take as input a concatenated text with the question and the database schema), we present techniques that allow long text sequences to be handled by transformers with up to 512 input tokens. We propose a training process with database schema pruning (removal of tables and columns names that are useless for the query of interest). In addition, we used a multilingual approach with the mT5-large model fine-tuned with a data-augmented Spider dataset in four languages simultaneously: English, Portuguese, Spanish, and French. Our proposed technique used the Spider dataset and increased the exact set match accuracy results from 0.718 to 0.736 in a validation dataset (Dev). Source code, evaluations, and checkpoints are available at: mRAT-SQL.\n\npaper published in Springer-Nature - International Journal of Information Technology, here the SharedIt link. here the pre-print in arXiv.",
"# mRAT-SQL+GAP",
"## mRAT-SQL+GAP:A Portuguese Text-to-SQL Transformer\nMarcelo Archanjo José, Fabio Gagliardi Cozman\n\nThe translation of natural language questions to SQL queries has attracted growing attention, in particular in connection with transformers and similar language models. A large number of techniques are geared towards the English language; in this work, we thus investigated translation to SQL when input questions are given in the Portuguese language. To do so, we properly adapted state-of-the-art tools and resources. We changed the RAT-SQL+GAP system by relying on a multilingual BART model (we report tests with other language models), and we produced a translated version of the Spider dataset. Our experiments expose interesting phenomena that arise when non-English languages are targeted; in particular, it is better to train with original and translated training datasets together, even if a single target language is desired. This multilingual BART model fine-tuned with a double-size training dataset (English and Portuguese) achieved 83% of the baseline, making inferences for the Portuguese test dataset. This investigation can help other researchers to produce results in Machine Learning in a language different from English. Our multilingual ready version of RAT-SQL+GAP and the data are available, open-sourced as mRAT-SQL+GAP at: mRAT-SQL.\n\nBRACIS 2021: paper published in Springer Lecture Notes in Computer Science, here the pre-print in arXiv.\n\nBased on: RAT-SQL+GAP: Github. Paper: AAAI 2021 paper"
] |
ae1700982243110f59f761a63bacf19cce5a7fc3 |
# DATACLYSM PATCH 0.0.2: ARXIV
## USE THE NOTEBOOK TO GET STARTED!
https://github.com/somewheresystems/dataclysm
![image/png](https://cdn-uploads.huggingface.co/production/uploads/62a4a59791cfdc7b365ff5da/VwuifFrxpATEAPGOvYOHe.png)
# somewheresystems/dataclysm-arxiv
This dataset comprises 3,360,984 English-language arXiv papers from the Cornell/arXiv dataset, with two new columns added: title-embeddings and abstract-embeddings. These additional columns were generated using the bge-small-en-v1.5 embeddings model. The dataset was sourced from the Cornell/arXiv GCP bucket's JSON manifest for arXiv metadata, as of January 14th, 2024: [gs://arxiv-dataset/metadata-v5/arxiv-metadata-oai.json](gs://arxiv-dataset/metadata-v5/arxiv-metadata-oai.json)
# Embeddings Model
We used https://huggingface.co/BAAI/bge-small-en-v1.5 to embed the `title` and `abstract` fields.
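As a hedged sketch (not necessarily the exact pipeline used to build this dataset), the two embedding columns can be reproduced with the `sentence-transformers` library roughly as follows; the sample row, batching, and normalization settings are illustrative assumptions.

```python
# Illustrative sketch: embedding title/abstract with BAAI/bge-small-en-v1.5.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("BAAI/bge-small-en-v1.5")

rows = [
    {"title": "Attention Is All You Need",
     "abstract": "The dominant sequence transduction models are based on..."},
]  # stand-in for the real arXiv metadata records

# normalize_embeddings=True yields unit vectors, convenient for cosine search.
title_emb = model.encode([r["title"] for r in rows], normalize_embeddings=True)
abstract_emb = model.encode([r["abstract"] for r in rows], normalize_embeddings=True)
print(title_emb.shape)  # (n_rows, 384) for bge-small
```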
## Contact
Please contact [email protected] for inquiries. | somewheresystems/dataclysm-arxiv | [
"size_categories:1M<n<10M",
"language:en",
"license:cc0-1.0",
"arxiv",
"science",
"region:us"
] | 2024-01-14T23:51:58+00:00 | {"language": ["en"], "license": "cc0-1.0", "size_categories": ["1M<n<10M"], "pretty_name": "dataclysm-arxiv", "tags": ["arxiv", "science"]} | 2024-02-11T22:30:09+00:00 | [] | [
"en"
] | TAGS
#size_categories-1M<n<10M #language-English #license-cc0-1.0 #arxiv #science #region-us
|
# DATACLYSM PATCH 0.0.2: ARXIV
## USE THE NOTEBOOK TO GET STARTED!
URL
!image/png
# somewheresystems/dataclysm-arxiv
This dataset comprises 3,360,984 English-language arXiv papers from the Cornell/arXiv dataset, with two new columns added: title-embeddings and abstract-embeddings. These additional columns were generated using the bge-small-en-v1.5 embeddings model. The dataset was sourced from the Cornell/arXiv GCP bucket's JSON manifest for arXiv metadata, as of January 14th, 2024: gs://arxiv-dataset/metadata-v5/URL
# Embeddings Model
We used URL to embed the 'title' and 'abstract' fields.
## Contact
Please contact hi@URL for inquiries. | [
"# DATACLYSM PATCH 0.0.2: ARXIV",
"## USE THE NOTEBOOK TO GET STARTED!\nURL\n!image/png",
"# somewheresystems/dataclysm-wikipedia-titles\n\nThis dataset comprises of 3,360,984 English language arXiv papers from the Cornell/arXiv dataset, with two new columns added: title-embeddings and abstract-embeddings. These additional columns were generated using the bge-small-en-v1.5 embeddings model. The dataset was sourced from the Cornell/arXiv GCP bucket's json manifest for arXiv metadata, as of January 14th, 2024 gs://arxiv-dataset/metadata-v5/URL",
"# Embeddings Model\n\nWe used URL to embed the 'title' and 'abstract' fields.",
"## Contact\n\nPlease contact hi@URL for inquiries."
] | [
"TAGS\n#size_categories-1M<n<10M #language-English #license-cc0-1.0 #arxiv #science #region-us \n",
"# DATACLYSM PATCH 0.0.2: ARXIV",
"## USE THE NOTEBOOK TO GET STARTED!\nURL\n!image/png",
"# somewheresystems/dataclysm-wikipedia-titles\n\nThis dataset comprises of 3,360,984 English language arXiv papers from the Cornell/arXiv dataset, with two new columns added: title-embeddings and abstract-embeddings. These additional columns were generated using the bge-small-en-v1.5 embeddings model. The dataset was sourced from the Cornell/arXiv GCP bucket's json manifest for arXiv metadata, as of January 14th, 2024 gs://arxiv-dataset/metadata-v5/URL",
"# Embeddings Model\n\nWe used URL to embed the 'title' and 'abstract' fields.",
"## Contact\n\nPlease contact hi@URL for inquiries."
] |
a3d826cf3c0f0fe71519a3760dd74647273f3fbd |
![Thumbnail](AI-GENERATED.jpg)
# End-To-End TEXT-2-ASMR with Transformers
This repository contains pretrained text2asmr model files, audio files and training+inference notebooks.
## Dataset Details
This unique dataset is tailored for training and deploying text-to-speech (TTS) systems specifically focused on ASMR (Autonomous Sensory Meridian Response) content. It includes a comprehensive collection of pretrained model files, audio files and training code suitable for TTS applications.
### Dataset Description
Inside this dataset, you will find the following zipped folders:
1. **wavs_original:** original wav files as they were converted from the original videos
2. **wavs:** original wav files broken into 1-minute chunks
3. **transcripts_original:** transcribed scripts of the original wav files
4. **transcripts:** transcribed scripts of the files in the wavs folder
5. **models:** text-to-spectrogram model trained with Glow-TTS
6. **ljspeech:** alignment files and respective checkpoint models (text-to-phoneme)
7. **transformer_tts_data.ljspeech:** trained checkpoint models and other files
And the following files:
1. **Glow-TTS.ipynb:** Training and inference code for GlowTTS models
2. **TransformerTTS.ipynb:** Training and inference code for Transformer models
3. **VITS_TTS.ipynb:** Optional code for training VITS models; follows the same format as GlowTTS
4. **metadata_original.csv:** LJSpeech-formatted transcriptions of the wavs_original folder, ready for TTS training
5. **metadata.csv:** LJSpeech-formatted transcriptions of the wavs folder, ready for TTS training (see the loading sketch below)
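For reference, here is a minimal sketch of reading such an LJSpeech-style metadata file. LJSpeech metadata is pipe-delimited (`file_id|transcription[|normalized_text]`); whether these files carry two or three columns is an assumption to verify against the data itself.

```python
# Minimal sketch: pair each wav path with its transcription from an
# LJSpeech-style, pipe-delimited metadata.csv. Column count is assumed.
pairs = []
with open("metadata.csv", encoding="utf-8") as f:
    for line in f:
        parts = line.rstrip("\n").split("|")
        file_id, text = parts[0], parts[-1]  # last column = (normalized) text
        pairs.append((f"wavs/{file_id}.wav", text))

print(len(pairs), pairs[0])
```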
- **Curated by:** Alosh Denny, Anish S
- **Language(s) (NLP):** English
- **License:** MIT
### Dataset Sources
**Youtube:** Rebeccas ASMR, Nanou ASMR, Gibi ASMR, Cherie Lorraine ASMR, etc.
## Uses
The dataset can be used to train text2spec2mel, text2wav, and/or other end-to-end text-to-speech models.
### Direct Use
Pretrained models can be tested out with the TransformerTTS notebook and the Glow-TTS notebook.
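As a rough, hedged example of such a test outside the notebooks (API details vary across Coqui TTS versions, and the checkpoint/config paths below are placeholders, not the actual file names in this repository):

```python
# Hedged sketch: loading a trained Glow-TTS checkpoint with Coqui TTS
# for inference. Paths are placeholders; the bundled notebooks remain
# the authoritative reference for how these models were trained and used.
from TTS.utils.synthesizer import Synthesizer

synth = Synthesizer(
    tts_checkpoint="models/glow_tts/best_model.pth",  # placeholder path
    tts_config_path="models/glow_tts/config.json",    # placeholder path
)
wav = synth.tts("Hello, and welcome to this relaxing session.")
synth.save_wav(wav, "sample_asmr.wav")
```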
## Dataset Card Authors
Alosh Denny, Anish S
## Dataset Card Contact
[email protected] | aoxo/text2asmr-uncensored | [
"task_categories:text-to-speech",
"task_categories:text-to-audio",
"size_categories:1K<n<10K",
"language:en",
"license:mit",
"code",
"music",
"doi:10.57967/hf/1610",
"region:us"
] | 2024-01-15T00:12:07+00:00 | {"language": ["en"], "license": "mit", "size_categories": ["1K<n<10K"], "task_categories": ["text-to-speech", "text-to-audio"], "pretty_name": "Text-to-ASMR", "image": ["https://ibb.co/ZzFkfWZ"], "tags": ["code", "music"]} | 2024-02-13T13:32:33+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-to-speech #task_categories-text-to-audio #size_categories-1K<n<10K #language-English #license-mit #code #music #doi-10.57967/hf/1610 #region-us
|
!Thumbnail
# End-To-End TEXT-2-ASMR with Transformers
This repository contains pretrained text2asmr model files, audio files and training+inference notebooks.
## Dataset Details
This unique dataset is tailored for training and deploying text-to-speech (TTS) systems specifically focused on ASMR (Autonomous Sensory Meridian Response) content. It includes a comprehensive collection of pretrained model files, audio files and training code suitable for TTS applications.
### Dataset Description
Inside this dataset, you will find the following zipped folders:

1. wavs_original: original wav files as they were converted from the original videos
2. wavs: original wav files broken into 1-minute chunks
3. transcripts_original: transcribed scripts of the original wav files
4. transcripts: transcribed scripts of the files in the wavs folder
5. models: text-to-spectrogram model trained with Glow-TTS
6. ljspeech: alignment files and respective checkpoint models (text-to-phoneme)
7. transformer_tts_data.ljspeech: trained checkpoint models and other files
And the following files:
1. URL: Training and inference code for GlowTTS models
2. URL: Training and inference code for Transformer models
3. VITS_TTS.ipynb: Optional code for training VITS models; follows the same format as GlowTTS
4. metadata_original.csv: LJSpeech-formatted transcriptions of the wavs_original folder, ready for TTS training
5. URL: LJSpeech-formatted transcriptions of the wavs folder, ready for TTS training
- Curated by: Alosh Denny, Anish S
- Language(s) (NLP): English
- License: MIT
### Dataset Sources
Youtube: Rebeccas ASMR, Nanou ASMR, Gibi ASMR, Cherie Lorraine ASMR, etc.
## Uses
The dataset can be used to train text2spec2mel, text2wav, and/or other end-to-end text-to-speech models.
### Direct Use
Pretrained models can be tested out with the TransformerTTS notebook and the Glow-TTS notebook.
## Dataset Card Authors
Alosh Denny, Anish S
## Dataset Card Contact
aloshdenny@URL | [
"# End-To-End TEXT-2-ASMR with Transformers\n\nThis repository contains pretrained text2asmr model files, audio files and training+inference notebooks.",
"## Dataset Details\n\nThis unique dataset is tailored for training and deploying text-to-speech (TTS) systems specifically focused on ASMR (Autonomous Sensory Meridian Response) content. It includes a comprehensive collection of pretrained model files, audio files and training code suitable for TTS applications.",
"### Dataset Description\n\n\nInside this dataset, you shall find zipped folders as is follows:\n\n1. wavs_original: original wav files as it was converted from the original video\n2. wavs: original wav files broken into 1 minute chunks\n3. transcripts_original: transribed scripts of the original wav files\n4. transcripts: transribed scripts of the files in wav folder\n5. models: text to spectrogram model trained on Glow-TTS\n6. ljspeech: alignment files and respective checkpoint models (text to phoneme)\n7. transformer_tts_data.ljspeech: trained checkpoint models and other files\n\nAnd the following files:\n\n1. URL: Training and inference code for GlowTTS models\n2. URL: Training and inference code for Transformer models\n3. VITS_TTS.ipynb: Optional code for training VITS models; follows the same format as GlowTTS\n4. metadata_original.csv: ljspeech formatted transcriptions of wav_original folder; ready for TTS training\n5. URL: ljspeech formatted transcriptions of wav folder; ready for TTS training\n\n- Curated by: Alosh Denny, Anish S\n- Language(s) (NLP): English\n- License: MIT",
"### Dataset Sources\n\nYoutube: Rebeccas ASMR, Nanou ASMR, Gibi ASMR, Cherie Lorraine ASMR, etc.",
"## Uses\n\nThe dataset can be used to train text2spec2mel, text2wav, and/or other end-to-end text-to-speech models.",
"### Direct Use\n\nPretrained models can be tested out with the TransformerTTS notebook and the Glow-TTS notebook.",
"## Dataset Card Authors\n\nAlosh Denny, Anish S",
"## Dataset Card Contact\n\naloshdenny@URL"
] | [
"TAGS\n#task_categories-text-to-speech #task_categories-text-to-audio #size_categories-1K<n<10K #language-English #license-mit #code #music #doi-10.57967/hf/1610 #region-us \n",
"# End-To-End TEXT-2-ASMR with Transformers\n\nThis repository contains pretrained text2asmr model files, audio files and training+inference notebooks.",
"## Dataset Details\n\nThis unique dataset is tailored for training and deploying text-to-speech (TTS) systems specifically focused on ASMR (Autonomous Sensory Meridian Response) content. It includes a comprehensive collection of pretrained model files, audio files and training code suitable for TTS applications.",
"### Dataset Description\n\n\nInside this dataset, you shall find zipped folders as is follows:\n\n1. wavs_original: original wav files as it was converted from the original video\n2. wavs: original wav files broken into 1 minute chunks\n3. transcripts_original: transribed scripts of the original wav files\n4. transcripts: transribed scripts of the files in wav folder\n5. models: text to spectrogram model trained on Glow-TTS\n6. ljspeech: alignment files and respective checkpoint models (text to phoneme)\n7. transformer_tts_data.ljspeech: trained checkpoint models and other files\n\nAnd the following files:\n\n1. URL: Training and inference code for GlowTTS models\n2. URL: Training and inference code for Transformer models\n3. VITS_TTS.ipynb: Optional code for training VITS models; follows the same format as GlowTTS\n4. metadata_original.csv: ljspeech formatted transcriptions of wav_original folder; ready for TTS training\n5. URL: ljspeech formatted transcriptions of wav folder; ready for TTS training\n\n- Curated by: Alosh Denny, Anish S\n- Language(s) (NLP): English\n- License: MIT",
"### Dataset Sources\n\nYoutube: Rebeccas ASMR, Nanou ASMR, Gibi ASMR, Cherie Lorraine ASMR, etc.",
"## Uses\n\nThe dataset can be used to train text2spec2mel, text2wav, and/or other end-to-end text-to-speech models.",
"### Direct Use\n\nPretrained models can be tested out with the TransformerTTS notebook and the Glow-TTS notebook.",
"## Dataset Card Authors\n\nAlosh Denny, Anish S",
"## Dataset Card Contact\n\naloshdenny@URL"
] |
76d2f604d27a27784ef087493f04faadcceb80eb |
# Dataset of yatadera_narumi/矢田寺成美 (Touhou)
This is the dataset of yatadera_narumi/矢田寺成美 (Touhou), containing 11 images and their tags.
The core tags of this character are `black_hair, braid, hat, long_hair, twin_braids, bangs, red_eyes, breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 11 | 15.33 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yatadera_narumi_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 11 | 9.50 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yatadera_narumi_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 27 | 18.92 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yatadera_narumi_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 11 | 13.26 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yatadera_narumi_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 27 | 23.99 MiB | [Download](https://huggingface.co/datasets/CyberHarem/yatadera_narumi_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/yatadera_narumi_touhou',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
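As a small follow-up (a hypothetical usage example, not from the original card), the loaded items can be filtered by the tags already exposed in `item.meta['tags']`:
```python
from waifuc.source import LocalSource

# Hypothetical usage example: keep only the images tagged "solo",
# reusing the dataset_dir extracted by the snippet above.
dataset_dir = 'dataset_dir'
solo_items = [item for item in LocalSource(dataset_dir)
              if 'solo' in item.meta['tags']]
print(f"{len(solo_items)} images tagged 'solo'")
```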
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 11 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, ajirogasa, grey_dress, long_sleeves, solo, red_capelet, buttons, looking_at_viewer, clothes_writing, smile, long_earlobes, own_hands_together, snowing, blush, open_mouth, closed_mouth, upper_body |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | ajirogasa | grey_dress | long_sleeves | solo | red_capelet | buttons | looking_at_viewer | clothes_writing | smile | long_earlobes | own_hands_together | snowing | blush | open_mouth | closed_mouth | upper_body |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:------------|:-------------|:---------------|:-------|:--------------|:----------|:--------------------|:------------------|:--------|:----------------|:---------------------|:----------|:--------|:-------------|:---------------|:-------------|
| 0 | 11 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
| CyberHarem/yatadera_narumi_touhou | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2024-01-15T01:04:57+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2024-01-15T01:09:00+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of yatadera\_narumi/矢田寺成美 (Touhou)
==========================================
This is the dataset of yatadera\_narumi/矢田寺成美 (Touhou), containing 11 images and their tags.
The core tags of this character are 'black\_hair, braid, hat, long\_hair, twin\_braids, bangs, red\_eyes, breasts', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code:
List of Clusters
----------------
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
### Table Version
| [
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] | [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
be6a661c7effb976d9a8042e5d55236132049883 |
# Dataset of reisen/レイセン (Touhou)
This is the dataset of reisen/レイセン (Touhou), containing 227 images and their tags.
The core tags of this character are `animal_ears, rabbit_ears, short_hair, red_eyes, blue_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 227 | 183.03 MiB | [Download](https://huggingface.co/datasets/CyberHarem/reisen_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 227 | 125.25 MiB | [Download](https://huggingface.co/datasets/CyberHarem/reisen_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 463 | 249.41 MiB | [Download](https://huggingface.co/datasets/CyberHarem/reisen_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 227 | 167.26 MiB | [Download](https://huggingface.co/datasets/CyberHarem/reisen_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 463 | 321.99 MiB | [Download](https://huggingface.co/datasets/CyberHarem/reisen_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/reisen_touhou',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 19 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, long_sleeves, red_necktie, solo, collared_shirt, white_shirt, black_jacket, rifle, pleated_skirt, looking_at_viewer, standing, bangs, pink_skirt, blazer, holding_gun, crescent_pin, open_mouth, smile, blush, hair_between_eyes, buttons, one-hour_drawing_challenge, simple_background |
| 1 | 10 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, collared_shirt, long_sleeves, red_necktie, solo, white_shirt, blazer, pleated_skirt, looking_at_viewer, white_background, simple_background, cowboy_shot, open_mouth, rabbit_girl, rabbit_tail, black_jacket, crescent_pin, pink_skirt, bangs, closed_mouth, floppy_ears |
| 2 | 8 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, blazer, necktie, purple_hair, skirt, solo, rabbit_tail, open_mouth |
| 3 | 18 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, solo, blazer, necktie, skirt, black_thighhighs, smile, zettai_ryouiki, open_mouth |
| 4 | 7 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | 1girl, solo, bat_wings, dress, looking_at_viewer, short_sleeves, smile, wrist_cuffs, mob_cap, multiple_girls, open_mouth, puffy_sleeves |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | long_sleeves | red_necktie | solo | collared_shirt | white_shirt | black_jacket | rifle | pleated_skirt | looking_at_viewer | standing | bangs | pink_skirt | blazer | holding_gun | crescent_pin | open_mouth | smile | blush | hair_between_eyes | buttons | one-hour_drawing_challenge | simple_background | white_background | cowboy_shot | rabbit_girl | rabbit_tail | closed_mouth | floppy_ears | necktie | purple_hair | skirt | black_thighhighs | zettai_ryouiki | bat_wings | dress | short_sleeves | wrist_cuffs | mob_cap | multiple_girls | puffy_sleeves |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------------|:--------------|:-------|:-----------------|:--------------|:---------------|:--------|:----------------|:--------------------|:-----------|:--------|:-------------|:---------|:--------------|:---------------|:-------------|:--------|:--------|:--------------------|:----------|:-----------------------------|:--------------------|:-------------------|:--------------|:--------------|:--------------|:---------------|:--------------|:----------|:--------------|:--------|:-------------------|:-----------------|:------------|:--------|:----------------|:--------------|:----------|:-----------------|:----------------|
| 0 | 19 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | |
| 1 | 10 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | X | X | X | X | | X | X | | X | X | X | | X | X | | | | | | X | X | X | X | X | X | X | | | | | | | | | | | | |
| 2 | 8 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | | | X | | | | | | | | | | X | | | X | | | | | | | | | | X | | | X | X | X | | | | | | | | | |
| 3 | 18 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | | | X | | | | | | | | | | X | | | X | X | | | | | | | | | | | | X | | X | X | X | | | | | | | |
| 4 | 7 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | X | | | X | | | | | | X | | | | | | | X | X | | | | | | | | | | | | | | | | | X | X | X | X | X | X | X |
| CyberHarem/reisen_touhou | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2024-01-15T01:04:59+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2024-01-15T02:34:41+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of reisen/レイセン (Touhou)
===============================
This is the dataset of reisen/レイセン (Touhou), containing 227 images and their tags.
The core tags of this character are 'animal\_ears, rabbit\_ears, short\_hair, red\_eyes, blue\_hair', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code:
List of Clusters
----------------
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
### Table Version
| [
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] | [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
f4c88694e2f4541bd571b6130c3e354b55d9da52 |
# Dataset of teireida_mai/丁礼田舞 (Touhou)
This is the dataset of teireida_mai/丁礼田舞 (Touhou), containing 328 images and their tags.
The core tags of this character are `green_hair, green_eyes, hat, black_headwear, bow, sidelocks, bangs, yellow_bow`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:---------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 328 | 271.31 MiB | [Download](https://huggingface.co/datasets/CyberHarem/teireida_mai_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 328 | 193.56 MiB | [Download](https://huggingface.co/datasets/CyberHarem/teireida_mai_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 653 | 357.67 MiB | [Download](https://huggingface.co/datasets/CyberHarem/teireida_mai_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 328 | 252.53 MiB | [Download](https://huggingface.co/datasets/CyberHarem/teireida_mai_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 653 | 457.21 MiB | [Download](https://huggingface.co/datasets/CyberHarem/teireida_mai_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/teireida_mai_touhou',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 31 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | green_dress, short_hair_with_long_locks, 1girl, solo, waist_apron, black_socks, looking_at_viewer, tate_eboshi, full_body, green_footwear, bamboo, holding, frills, white_background, open_mouth, kneehighs, mary_janes, puffy_short_sleeves, simple_background, :d, white_apron, yellow_ribbon |
| 1 | 5 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, bamboo, green_dress, looking_at_viewer, open_mouth, puffy_short_sleeves, short_hair_with_long_locks, solo, tate_eboshi, waist_apron, :d, frills, holding, simple_background, white_apron, green_background |
| 2 | 5 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 2girls, brown_hair, frills, green_dress, puffy_short_sleeves, short_hair_with_long_locks, tate_eboshi, waist_apron, bamboo, pink_dress, holding, white_apron, grin, looking_at_viewer, solo_focus, star_(symbol) |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | green_dress | short_hair_with_long_locks | 1girl | solo | waist_apron | black_socks | looking_at_viewer | tate_eboshi | full_body | green_footwear | bamboo | holding | frills | white_background | open_mouth | kneehighs | mary_janes | puffy_short_sleeves | simple_background | :d | white_apron | yellow_ribbon | green_background | 2girls | brown_hair | pink_dress | grin | solo_focus | star_(symbol) |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------|:-----------------------------|:--------|:-------|:--------------|:--------------|:--------------------|:--------------|:------------|:-----------------|:---------|:----------|:---------|:-------------------|:-------------|:------------|:-------------|:----------------------|:--------------------|:-----|:--------------|:----------------|:-------------------|:---------|:-------------|:-------------|:-------|:-------------|:----------------|
| 0 | 31 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | |
| 1 | 5 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | X | X | | X | X | | | X | X | X | | X | | | X | X | X | X | | X | | | | | | |
| 2 | 5 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | X | | | X | | X | X | | | X | X | X | | | | | X | | | X | | | X | X | X | X | X | X |
| CyberHarem/teireida_mai_touhou | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2024-01-15T01:05:05+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2024-01-15T02:20:44+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of teireida\_mai/丁礼田舞 (Touhou)
======================================
This is the dataset of teireida\_mai/丁礼田舞 (Touhou), containing 328 images and their tags.
The core tags of this character are 'green\_hair, green\_eyes, hat, black\_headwear, bow, sidelocks, bangs, yellow\_bow', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code:
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
| [
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] | [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
3a9df689d5b5c4e8a623719c31f19a48b2ac2399 | This dataset accompanies the following publication; please cite it if you use this dataset:
Fischer, T. and Milford, M., 2020. Event-Based Visual Place Recognition With Ensembles of Temporal Windows. IEEE Robotics and Automation Letters, 5(4), pp.6924-6931.
```bibtex
@article{fischer2020event,
title={Event-Based Visual Place Recognition With Ensembles of Temporal Windows},
author={Fischer, Tobias and Milford, Michael},
journal={IEEE Robotics and Automation Letters},
volume={5},
number={4},
pages={6924--6931},
year={2020}
}
```
The dataset contains five sequences of recordings. For each recording, a denoised `parquet` file is made available.
The source files for these `parquet` files can be found on [Zenodo](https://zenodo.org/records/4302805).
We also provide the associated GPS information (`*.nmea` files) recorded using the consumer camera.
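As a rough sketch (not from the original description), each denoised recording should load directly with pandas; the filename and the event-column names (`t`, `x`, `y`, `p`, the usual event-camera convention) are assumptions:
```python
import pandas as pd

# Minimal sketch: load one denoised sequence (hypothetical filename).
events = pd.read_parquet("path/to/sequence.parquet")
print(events.head())            # expected columns (assumption): t, x, y, p
print(len(events), "events")
```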
Please see the [associated code repository](https://github.com/Tobias-Fischer/sparse-event-vpr) for more information. | TobiasRobotics/brisbane-event-vpr | [
"license:cc-by-nc-sa-4.0",
"computer vision",
"robotics",
"event cameras",
"region:us"
] | 2024-01-15T01:11:21+00:00 | {"license": "cc-by-nc-sa-4.0", "pretty_name": "Brisbane Event VPR", "tags": ["computer vision", "robotics", "event cameras"], "arxiv": 2006.02826} | 2024-01-15T01:29:19+00:00 | [] | [] | TAGS
#license-cc-by-nc-sa-4.0 #computer vision #robotics #event cameras #region-us
| This dataset accompanies the following publication; please cite it if you use this dataset:
Fischer, T. and Milford, M., 2020. Event-Based Visual Place Recognition With Ensembles of Temporal Windows. IEEE Robotics and Automation Letters, 5(4), pp.6924-6931.
The dataset contains five sequences of recordings. For each recording, a denoised 'parquet' file is made available.
The source files for these 'parquet' files can be found on Zenodo.
We also provide the associated GPS information ('*.nmea' files) recorded using the consumer camera.
Please see the associated code repository for more information. | [] | [
"TAGS\n#license-cc-by-nc-sa-4.0 #computer vision #robotics #event cameras #region-us \n"
] |
1f74aabd0dbf73179f914e77a26b2389516955bb |
# Dataset Card for Evaluation run of NeuralNovel/Gecko-7B-v0.1
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [NeuralNovel/Gecko-7B-v0.1](https://huggingface.co/NeuralNovel/Gecko-7B-v0.1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_NeuralNovel__Gecko-7B-v0.1",
"harness_winogrande_5",
split="train")
```
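As a hypothetical follow-up (not part of the original card), the loaded split can be inspected directly:
```python
# `data` is the datasets.Dataset loaded above.
print(data.column_names)  # list the detail fields available for this task
print(data[0])            # show the first evaluated example
```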
## Latest results
These are the [latest results from run 2024-01-16T16:13:12.225780](https://huggingface.co/datasets/open-llm-leaderboard/details_NeuralNovel__Gecko-7B-v0.1/blob/main/results_2024-01-16T16-13-12.225780.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each one in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6099096028262384,
"acc_stderr": 0.03317410149444282,
"acc_norm": 0.6143554464489048,
"acc_norm_stderr": 0.03384780111199933,
"mc1": 0.4638922888616891,
"mc1_stderr": 0.017457800422268622,
"mc2": 0.6260121840084173,
"mc2_stderr": 0.015381860069987416
},
"harness|arc:challenge|25": {
"acc": 0.5656996587030717,
"acc_stderr": 0.014484703048857359,
"acc_norm": 0.613481228668942,
"acc_norm_stderr": 0.014230084761910478
},
"harness|hellaswag|10": {
"acc": 0.6475801633140809,
"acc_stderr": 0.004767475366689761,
"acc_norm": 0.8335988846843259,
"acc_norm_stderr": 0.0037167914663914794
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5851851851851851,
"acc_stderr": 0.04256193767901408,
"acc_norm": 0.5851851851851851,
"acc_norm_stderr": 0.04256193767901408
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6052631578947368,
"acc_stderr": 0.039777499346220734,
"acc_norm": 0.6052631578947368,
"acc_norm_stderr": 0.039777499346220734
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.6,
"acc_stderr": 0.049236596391733084,
"acc_norm": 0.6,
"acc_norm_stderr": 0.049236596391733084
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6566037735849056,
"acc_stderr": 0.02922452646912479,
"acc_norm": 0.6566037735849056,
"acc_norm_stderr": 0.02922452646912479
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.6736111111111112,
"acc_stderr": 0.03921067198982266,
"acc_norm": 0.6736111111111112,
"acc_norm_stderr": 0.03921067198982266
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.46,
"acc_stderr": 0.05009082659620333,
"acc_norm": 0.46,
"acc_norm_stderr": 0.05009082659620333
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.53,
"acc_stderr": 0.050161355804659205,
"acc_norm": 0.53,
"acc_norm_stderr": 0.050161355804659205
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.38,
"acc_stderr": 0.04878317312145632,
"acc_norm": 0.38,
"acc_norm_stderr": 0.04878317312145632
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.5895953757225434,
"acc_stderr": 0.03750757044895537,
"acc_norm": 0.5895953757225434,
"acc_norm_stderr": 0.03750757044895537
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4411764705882353,
"acc_stderr": 0.049406356306056595,
"acc_norm": 0.4411764705882353,
"acc_norm_stderr": 0.049406356306056595
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.71,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.71,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.49361702127659574,
"acc_stderr": 0.032683358999363366,
"acc_norm": 0.49361702127659574,
"acc_norm_stderr": 0.032683358999363366
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.45614035087719296,
"acc_stderr": 0.04685473041907789,
"acc_norm": 0.45614035087719296,
"acc_norm_stderr": 0.04685473041907789
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.593103448275862,
"acc_stderr": 0.04093793981266236,
"acc_norm": 0.593103448275862,
"acc_norm_stderr": 0.04093793981266236
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.37037037037037035,
"acc_stderr": 0.024870815251057093,
"acc_norm": 0.37037037037037035,
"acc_norm_stderr": 0.024870815251057093
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.3968253968253968,
"acc_stderr": 0.043758884927270605,
"acc_norm": 0.3968253968253968,
"acc_norm_stderr": 0.043758884927270605
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.38,
"acc_stderr": 0.04878317312145632,
"acc_norm": 0.38,
"acc_norm_stderr": 0.04878317312145632
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.6612903225806451,
"acc_stderr": 0.026923446059302844,
"acc_norm": 0.6612903225806451,
"acc_norm_stderr": 0.026923446059302844
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.47783251231527096,
"acc_stderr": 0.03514528562175007,
"acc_norm": 0.47783251231527096,
"acc_norm_stderr": 0.03514528562175007
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.63,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.63,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7515151515151515,
"acc_stderr": 0.03374402644139404,
"acc_norm": 0.7515151515151515,
"acc_norm_stderr": 0.03374402644139404
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7727272727272727,
"acc_stderr": 0.029857515673386417,
"acc_norm": 0.7727272727272727,
"acc_norm_stderr": 0.029857515673386417
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8652849740932642,
"acc_stderr": 0.024639789097709437,
"acc_norm": 0.8652849740932642,
"acc_norm_stderr": 0.024639789097709437
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.5923076923076923,
"acc_stderr": 0.024915243985987847,
"acc_norm": 0.5923076923076923,
"acc_norm_stderr": 0.024915243985987847
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.337037037037037,
"acc_stderr": 0.028820884666253255,
"acc_norm": 0.337037037037037,
"acc_norm_stderr": 0.028820884666253255
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6134453781512605,
"acc_stderr": 0.03163145807552378,
"acc_norm": 0.6134453781512605,
"acc_norm_stderr": 0.03163145807552378
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.39072847682119205,
"acc_stderr": 0.03983798306659806,
"acc_norm": 0.39072847682119205,
"acc_norm_stderr": 0.03983798306659806
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.7944954128440367,
"acc_stderr": 0.01732435232501601,
"acc_norm": 0.7944954128440367,
"acc_norm_stderr": 0.01732435232501601
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.46296296296296297,
"acc_stderr": 0.03400603625538271,
"acc_norm": 0.46296296296296297,
"acc_norm_stderr": 0.03400603625538271
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7598039215686274,
"acc_stderr": 0.02998373305591361,
"acc_norm": 0.7598039215686274,
"acc_norm_stderr": 0.02998373305591361
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7637130801687764,
"acc_stderr": 0.027652153144159263,
"acc_norm": 0.7637130801687764,
"acc_norm_stderr": 0.027652153144159263
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6457399103139013,
"acc_stderr": 0.03210062154134987,
"acc_norm": 0.6457399103139013,
"acc_norm_stderr": 0.03210062154134987
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7251908396946565,
"acc_stderr": 0.039153454088478354,
"acc_norm": 0.7251908396946565,
"acc_norm_stderr": 0.039153454088478354
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8099173553719008,
"acc_stderr": 0.03581796951709282,
"acc_norm": 0.8099173553719008,
"acc_norm_stderr": 0.03581796951709282
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.6851851851851852,
"acc_stderr": 0.04489931073591312,
"acc_norm": 0.6851851851851852,
"acc_norm_stderr": 0.04489931073591312
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7177914110429447,
"acc_stderr": 0.03536117886664742,
"acc_norm": 0.7177914110429447,
"acc_norm_stderr": 0.03536117886664742
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.45535714285714285,
"acc_stderr": 0.04726835553719099,
"acc_norm": 0.45535714285714285,
"acc_norm_stderr": 0.04726835553719099
},
"harness|hendrycksTest-management|5": {
"acc": 0.7378640776699029,
"acc_stderr": 0.04354631077260595,
"acc_norm": 0.7378640776699029,
"acc_norm_stderr": 0.04354631077260595
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8760683760683761,
"acc_stderr": 0.021586494001281348,
"acc_norm": 0.8760683760683761,
"acc_norm_stderr": 0.021586494001281348
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.68,
"acc_stderr": 0.046882617226215034,
"acc_norm": 0.68,
"acc_norm_stderr": 0.046882617226215034
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.7867177522349936,
"acc_stderr": 0.014648172749593517,
"acc_norm": 0.7867177522349936,
"acc_norm_stderr": 0.014648172749593517
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.6878612716763006,
"acc_stderr": 0.024946792225272314,
"acc_norm": 0.6878612716763006,
"acc_norm_stderr": 0.024946792225272314
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.35083798882681566,
"acc_stderr": 0.015961036675230963,
"acc_norm": 0.35083798882681566,
"acc_norm_stderr": 0.015961036675230963
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7189542483660131,
"acc_stderr": 0.025738854797818733,
"acc_norm": 0.7189542483660131,
"acc_norm_stderr": 0.025738854797818733
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.662379421221865,
"acc_stderr": 0.026858825879488544,
"acc_norm": 0.662379421221865,
"acc_norm_stderr": 0.026858825879488544
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.6728395061728395,
"acc_stderr": 0.026105673861409825,
"acc_norm": 0.6728395061728395,
"acc_norm_stderr": 0.026105673861409825
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.46099290780141844,
"acc_stderr": 0.02973659252642444,
"acc_norm": 0.46099290780141844,
"acc_norm_stderr": 0.02973659252642444
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.43089960886571055,
"acc_stderr": 0.012647695889547235,
"acc_norm": 0.43089960886571055,
"acc_norm_stderr": 0.012647695889547235
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6176470588235294,
"acc_stderr": 0.02952009569768776,
"acc_norm": 0.6176470588235294,
"acc_norm_stderr": 0.02952009569768776
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6225490196078431,
"acc_stderr": 0.019610851474880283,
"acc_norm": 0.6225490196078431,
"acc_norm_stderr": 0.019610851474880283
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.7,
"acc_stderr": 0.04389311454644287,
"acc_norm": 0.7,
"acc_norm_stderr": 0.04389311454644287
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.6979591836734694,
"acc_stderr": 0.029393609319879804,
"acc_norm": 0.6979591836734694,
"acc_norm_stderr": 0.029393609319879804
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.7860696517412935,
"acc_stderr": 0.02899690969332891,
"acc_norm": 0.7860696517412935,
"acc_norm_stderr": 0.02899690969332891
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.83,
"acc_stderr": 0.03775251680686371,
"acc_norm": 0.83,
"acc_norm_stderr": 0.03775251680686371
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5060240963855421,
"acc_stderr": 0.03892212195333045,
"acc_norm": 0.5060240963855421,
"acc_norm_stderr": 0.03892212195333045
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8304093567251462,
"acc_stderr": 0.02878210810540171,
"acc_norm": 0.8304093567251462,
"acc_norm_stderr": 0.02878210810540171
},
"harness|truthfulqa:mc|0": {
"mc1": 0.4638922888616891,
"mc1_stderr": 0.017457800422268622,
"mc2": 0.6260121840084173,
"mc2_stderr": 0.015381860069987416
},
"harness|winogrande|5": {
"acc": 0.7758484609313339,
"acc_stderr": 0.011720400740774094
},
"harness|gsm8k|5": {
"acc": 0.41546626231993933,
"acc_stderr": 0.013574222625031811
}
}
```
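As an illustrative sketch (not part of the original card), the MMLU macro-average can be recomputed from the per-task entries above; the assumption is that `results.json` holds exactly the dict printed here:
```python
import json

# Hypothetical local copy of the results dict shown above.
with open("results.json") as f:
    results = json.load(f)

mmlu = [v["acc"] for name, v in results.items()
        if name.startswith("harness|hendrycksTest")]
print(f"MMLU macro-average over {len(mmlu)} sub-tasks: {sum(mmlu) / len(mmlu):.4f}")
```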
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | open-llm-leaderboard/details_NeuralNovel__Gecko-7B-v0.1 | [
"region:us"
] | 2024-01-15T01:39:43+00:00 | {"pretty_name": "Evaluation run of NeuralNovel/Gecko-7B-v0.1", "dataset_summary": "Dataset automatically created during the evaluation run of model [NeuralNovel/Gecko-7B-v0.1](https://huggingface.co/NeuralNovel/Gecko-7B-v0.1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_NeuralNovel__Gecko-7B-v0.1\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-16T16:13:12.225780](https://huggingface.co/datasets/open-llm-leaderboard/details_NeuralNovel__Gecko-7B-v0.1/blob/main/results_2024-01-16T16-13-12.225780.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6099096028262384,\n \"acc_stderr\": 0.03317410149444282,\n \"acc_norm\": 0.6143554464489048,\n \"acc_norm_stderr\": 0.03384780111199933,\n \"mc1\": 0.4638922888616891,\n \"mc1_stderr\": 0.017457800422268622,\n \"mc2\": 0.6260121840084173,\n \"mc2_stderr\": 0.015381860069987416\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.5656996587030717,\n \"acc_stderr\": 0.014484703048857359,\n \"acc_norm\": 0.613481228668942,\n \"acc_norm_stderr\": 0.014230084761910478\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6475801633140809,\n \"acc_stderr\": 0.004767475366689761,\n \"acc_norm\": 0.8335988846843259,\n \"acc_norm_stderr\": 0.0037167914663914794\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5851851851851851,\n \"acc_stderr\": 0.04256193767901408,\n \"acc_norm\": 0.5851851851851851,\n \"acc_norm_stderr\": 0.04256193767901408\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6052631578947368,\n \"acc_stderr\": 0.039777499346220734,\n \"acc_norm\": 0.6052631578947368,\n \"acc_norm_stderr\": 0.039777499346220734\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.6,\n \"acc_stderr\": 0.049236596391733084,\n \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.049236596391733084\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6566037735849056,\n \"acc_stderr\": 0.02922452646912479,\n \"acc_norm\": 0.6566037735849056,\n \"acc_norm_stderr\": 0.02922452646912479\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6736111111111112,\n \"acc_stderr\": 0.03921067198982266,\n \"acc_norm\": 0.6736111111111112,\n \"acc_norm_stderr\": 0.03921067198982266\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.46,\n \"acc_stderr\": 
0.05009082659620333,\n \"acc_norm\": 0.46,\n \"acc_norm_stderr\": 0.05009082659620333\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.53,\n \"acc_stderr\": 0.050161355804659205,\n \"acc_norm\": 0.53,\n \"acc_norm_stderr\": 0.050161355804659205\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.38,\n \"acc_stderr\": 0.04878317312145632,\n \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.04878317312145632\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.5895953757225434,\n \"acc_stderr\": 0.03750757044895537,\n \"acc_norm\": 0.5895953757225434,\n \"acc_norm_stderr\": 0.03750757044895537\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.4411764705882353,\n \"acc_stderr\": 0.049406356306056595,\n \"acc_norm\": 0.4411764705882353,\n \"acc_norm_stderr\": 0.049406356306056595\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.49361702127659574,\n \"acc_stderr\": 0.032683358999363366,\n \"acc_norm\": 0.49361702127659574,\n \"acc_norm_stderr\": 0.032683358999363366\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.45614035087719296,\n \"acc_stderr\": 0.04685473041907789,\n \"acc_norm\": 0.45614035087719296,\n \"acc_norm_stderr\": 0.04685473041907789\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.593103448275862,\n \"acc_stderr\": 0.04093793981266236,\n \"acc_norm\": 0.593103448275862,\n \"acc_norm_stderr\": 0.04093793981266236\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.37037037037037035,\n \"acc_stderr\": 0.024870815251057093,\n \"acc_norm\": 0.37037037037037035,\n \"acc_norm_stderr\": 0.024870815251057093\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.3968253968253968,\n \"acc_stderr\": 0.043758884927270605,\n \"acc_norm\": 0.3968253968253968,\n \"acc_norm_stderr\": 0.043758884927270605\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.38,\n \"acc_stderr\": 0.04878317312145632,\n \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.04878317312145632\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.6612903225806451,\n \"acc_stderr\": 0.026923446059302844,\n \"acc_norm\": 0.6612903225806451,\n \"acc_norm_stderr\": 0.026923446059302844\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.47783251231527096,\n \"acc_stderr\": 0.03514528562175007,\n \"acc_norm\": 0.47783251231527096,\n \"acc_norm_stderr\": 0.03514528562175007\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.63,\n \"acc_stderr\": 0.04852365870939099,\n \"acc_norm\": 0.63,\n \"acc_norm_stderr\": 0.04852365870939099\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7515151515151515,\n \"acc_stderr\": 0.03374402644139404,\n \"acc_norm\": 0.7515151515151515,\n \"acc_norm_stderr\": 0.03374402644139404\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7727272727272727,\n \"acc_stderr\": 0.029857515673386417,\n \"acc_norm\": 0.7727272727272727,\n \"acc_norm_stderr\": 0.029857515673386417\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8652849740932642,\n \"acc_stderr\": 0.024639789097709437,\n \"acc_norm\": 0.8652849740932642,\n \"acc_norm_stderr\": 0.024639789097709437\n },\n 
\"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.5923076923076923,\n \"acc_stderr\": 0.024915243985987847,\n \"acc_norm\": 0.5923076923076923,\n \"acc_norm_stderr\": 0.024915243985987847\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.337037037037037,\n \"acc_stderr\": 0.028820884666253255,\n \"acc_norm\": 0.337037037037037,\n \"acc_norm_stderr\": 0.028820884666253255\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6134453781512605,\n \"acc_stderr\": 0.03163145807552378,\n \"acc_norm\": 0.6134453781512605,\n \"acc_norm_stderr\": 0.03163145807552378\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.39072847682119205,\n \"acc_stderr\": 0.03983798306659806,\n \"acc_norm\": 0.39072847682119205,\n \"acc_norm_stderr\": 0.03983798306659806\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.7944954128440367,\n \"acc_stderr\": 0.01732435232501601,\n \"acc_norm\": 0.7944954128440367,\n \"acc_norm_stderr\": 0.01732435232501601\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.46296296296296297,\n \"acc_stderr\": 0.03400603625538271,\n \"acc_norm\": 0.46296296296296297,\n \"acc_norm_stderr\": 0.03400603625538271\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7598039215686274,\n \"acc_stderr\": 0.02998373305591361,\n \"acc_norm\": 0.7598039215686274,\n \"acc_norm_stderr\": 0.02998373305591361\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7637130801687764,\n \"acc_stderr\": 0.027652153144159263,\n \"acc_norm\": 0.7637130801687764,\n \"acc_norm_stderr\": 0.027652153144159263\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6457399103139013,\n \"acc_stderr\": 0.03210062154134987,\n \"acc_norm\": 0.6457399103139013,\n \"acc_norm_stderr\": 0.03210062154134987\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7251908396946565,\n \"acc_stderr\": 0.039153454088478354,\n \"acc_norm\": 0.7251908396946565,\n \"acc_norm_stderr\": 0.039153454088478354\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.8099173553719008,\n \"acc_stderr\": 0.03581796951709282,\n \"acc_norm\": 0.8099173553719008,\n \"acc_norm_stderr\": 0.03581796951709282\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.6851851851851852,\n \"acc_stderr\": 0.04489931073591312,\n \"acc_norm\": 0.6851851851851852,\n \"acc_norm_stderr\": 0.04489931073591312\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7177914110429447,\n \"acc_stderr\": 0.03536117886664742,\n \"acc_norm\": 0.7177914110429447,\n \"acc_norm_stderr\": 0.03536117886664742\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.45535714285714285,\n \"acc_stderr\": 0.04726835553719099,\n \"acc_norm\": 0.45535714285714285,\n \"acc_norm_stderr\": 0.04726835553719099\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7378640776699029,\n \"acc_stderr\": 0.04354631077260595,\n \"acc_norm\": 0.7378640776699029,\n \"acc_norm_stderr\": 0.04354631077260595\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8760683760683761,\n \"acc_stderr\": 0.021586494001281348,\n \"acc_norm\": 0.8760683760683761,\n \"acc_norm_stderr\": 0.021586494001281348\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.68,\n \"acc_stderr\": 0.046882617226215034,\n \"acc_norm\": 0.68,\n \"acc_norm_stderr\": 0.046882617226215034\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 
0.7867177522349936,\n \"acc_stderr\": 0.014648172749593517,\n \"acc_norm\": 0.7867177522349936,\n \"acc_norm_stderr\": 0.014648172749593517\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.6878612716763006,\n \"acc_stderr\": 0.024946792225272314,\n \"acc_norm\": 0.6878612716763006,\n \"acc_norm_stderr\": 0.024946792225272314\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.35083798882681566,\n \"acc_stderr\": 0.015961036675230963,\n \"acc_norm\": 0.35083798882681566,\n \"acc_norm_stderr\": 0.015961036675230963\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7189542483660131,\n \"acc_stderr\": 0.025738854797818733,\n \"acc_norm\": 0.7189542483660131,\n \"acc_norm_stderr\": 0.025738854797818733\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.662379421221865,\n \"acc_stderr\": 0.026858825879488544,\n \"acc_norm\": 0.662379421221865,\n \"acc_norm_stderr\": 0.026858825879488544\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.6728395061728395,\n \"acc_stderr\": 0.026105673861409825,\n \"acc_norm\": 0.6728395061728395,\n \"acc_norm_stderr\": 0.026105673861409825\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.46099290780141844,\n \"acc_stderr\": 0.02973659252642444,\n \"acc_norm\": 0.46099290780141844,\n \"acc_norm_stderr\": 0.02973659252642444\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.43089960886571055,\n \"acc_stderr\": 0.012647695889547235,\n \"acc_norm\": 0.43089960886571055,\n \"acc_norm_stderr\": 0.012647695889547235\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6176470588235294,\n \"acc_stderr\": 0.02952009569768776,\n \"acc_norm\": 0.6176470588235294,\n \"acc_norm_stderr\": 0.02952009569768776\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6225490196078431,\n \"acc_stderr\": 0.019610851474880283,\n \"acc_norm\": 0.6225490196078431,\n \"acc_norm_stderr\": 0.019610851474880283\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7,\n \"acc_stderr\": 0.04389311454644287,\n \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.04389311454644287\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.6979591836734694,\n \"acc_stderr\": 0.029393609319879804,\n \"acc_norm\": 0.6979591836734694,\n \"acc_norm_stderr\": 0.029393609319879804\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.7860696517412935,\n \"acc_stderr\": 0.02899690969332891,\n \"acc_norm\": 0.7860696517412935,\n \"acc_norm_stderr\": 0.02899690969332891\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.83,\n \"acc_stderr\": 0.03775251680686371,\n \"acc_norm\": 0.83,\n \"acc_norm_stderr\": 0.03775251680686371\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5060240963855421,\n \"acc_stderr\": 0.03892212195333045,\n \"acc_norm\": 0.5060240963855421,\n \"acc_norm_stderr\": 0.03892212195333045\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8304093567251462,\n \"acc_stderr\": 0.02878210810540171,\n \"acc_norm\": 0.8304093567251462,\n \"acc_norm_stderr\": 0.02878210810540171\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.4638922888616891,\n \"mc1_stderr\": 0.017457800422268622,\n \"mc2\": 0.6260121840084173,\n \"mc2_stderr\": 0.015381860069987416\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7758484609313339,\n \"acc_stderr\": 0.011720400740774094\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.41546626231993933,\n \"acc_stderr\": 0.013574222625031811\n }\n}\n```", "repo_url": 
"https://huggingface.co/NeuralNovel/Gecko-7B-v0.1", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_15T01_37_25.127753", "path": ["**/details_harness|arc:challenge|25_2024-01-15T01-37-25.127753.parquet"]}, {"split": "2024_01_16T02_41_01.393804", "path": ["**/details_harness|arc:challenge|25_2024-01-16T02-41-01.393804.parquet"]}, {"split": "2024_01_16T16_13_12.225780", "path": ["**/details_harness|arc:challenge|25_2024-01-16T16-13-12.225780.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-16T16-13-12.225780.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_15T01_37_25.127753", "path": ["**/details_harness|gsm8k|5_2024-01-15T01-37-25.127753.parquet"]}, {"split": "2024_01_16T02_41_01.393804", "path": ["**/details_harness|gsm8k|5_2024-01-16T02-41-01.393804.parquet"]}, {"split": "2024_01_16T16_13_12.225780", "path": ["**/details_harness|gsm8k|5_2024-01-16T16-13-12.225780.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-16T16-13-12.225780.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_15T01_37_25.127753", "path": ["**/details_harness|hellaswag|10_2024-01-15T01-37-25.127753.parquet"]}, {"split": "2024_01_16T02_41_01.393804", "path": ["**/details_harness|hellaswag|10_2024-01-16T02-41-01.393804.parquet"]}, {"split": "2024_01_16T16_13_12.225780", "path": ["**/details_harness|hellaswag|10_2024-01-16T16-13-12.225780.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-16T16-13-12.225780.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_15T01_37_25.127753", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T01-37-25.127753.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-15T01-37-25.127753.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-15T01-37-25.127753.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T01-37-25.127753.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T01-37-25.127753.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-15T01-37-25.127753.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T01-37-25.127753.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T01-37-25.127753.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T01-37-25.127753.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T01-37-25.127753.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-15T01-37-25.127753.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-15T01-37-25.127753.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T01-37-25.127753.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-15T01-37-25.127753.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T01-37-25.127753.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T01-37-25.127753.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T01-37-25.127753.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-15T01-37-25.127753.parquet", 
"**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T01-37-25.127753.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T01-37-25.127753.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T01-37-25.127753.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T01-37-25.127753.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T01-37-25.127753.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T01-37-25.127753.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T01-37-25.127753.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T01-37-25.127753.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T01-37-25.127753.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T01-37-25.127753.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T01-37-25.127753.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T01-37-25.127753.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T01-37-25.127753.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T01-37-25.127753.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-15T01-37-25.127753.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T01-37-25.127753.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-15T01-37-25.127753.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T01-37-25.127753.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T01-37-25.127753.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T01-37-25.127753.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-15T01-37-25.127753.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-15T01-37-25.127753.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T01-37-25.127753.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T01-37-25.127753.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T01-37-25.127753.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T01-37-25.127753.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-15T01-37-25.127753.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-15T01-37-25.127753.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-15T01-37-25.127753.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T01-37-25.127753.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-15T01-37-25.127753.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T01-37-25.127753.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T01-37-25.127753.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-15T01-37-25.127753.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-15T01-37-25.127753.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-15T01-37-25.127753.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T01-37-25.127753.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-15T01-37-25.127753.parquet", 
"**/details_harness|hendrycksTest-world_religions|5_2024-01-15T01-37-25.127753.parquet"]}, {"split": "2024_01_16T02_41_01.393804", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-16T02-41-01.393804.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-16T02-41-01.393804.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-16T02-41-01.393804.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-16T02-41-01.393804.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-16T02-41-01.393804.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-16T02-41-01.393804.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-16T02-41-01.393804.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-16T02-41-01.393804.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-16T02-41-01.393804.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-16T02-41-01.393804.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-16T02-41-01.393804.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-16T02-41-01.393804.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-16T02-41-01.393804.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-16T02-41-01.393804.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-16T02-41-01.393804.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-16T02-41-01.393804.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-16T02-41-01.393804.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-16T02-41-01.393804.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-16T02-41-01.393804.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-16T02-41-01.393804.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-16T02-41-01.393804.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-16T02-41-01.393804.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-16T02-41-01.393804.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-16T02-41-01.393804.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-16T02-41-01.393804.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-16T02-41-01.393804.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-16T02-41-01.393804.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-16T02-41-01.393804.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-16T02-41-01.393804.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-16T02-41-01.393804.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-16T02-41-01.393804.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-16T02-41-01.393804.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-16T02-41-01.393804.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-16T02-41-01.393804.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-16T02-41-01.393804.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-16T02-41-01.393804.parquet", 
"**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-16T02-41-01.393804.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-16T02-41-01.393804.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-16T02-41-01.393804.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-16T02-41-01.393804.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-16T02-41-01.393804.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-16T02-41-01.393804.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-16T02-41-01.393804.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-16T02-41-01.393804.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-16T02-41-01.393804.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-16T02-41-01.393804.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-16T02-41-01.393804.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-16T02-41-01.393804.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-16T02-41-01.393804.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-16T02-41-01.393804.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-16T02-41-01.393804.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-16T02-41-01.393804.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-16T02-41-01.393804.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-16T02-41-01.393804.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-16T02-41-01.393804.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-16T02-41-01.393804.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-16T02-41-01.393804.parquet"]}, {"split": "2024_01_16T16_13_12.225780", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-16T16-13-12.225780.parquet", 
"**/details_harness|hendrycksTest-high_school_biology|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-16T16-13-12.225780.parquet", 
"**/details_harness|hendrycksTest-world_religions|5_2024-01-16T16-13-12.225780.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-16T16-13-12.225780.parquet", 
"**/details_harness|hendrycksTest-machine_learning|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-16T16-13-12.225780.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-16T16-13-12.225780.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_15T01_37_25.127753", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T01-37-25.127753.parquet"]}, {"split": "2024_01_16T02_41_01.393804", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-16T02-41-01.393804.parquet"]}, {"split": "2024_01_16T16_13_12.225780", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-16T16-13-12.225780.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-16T16-13-12.225780.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_15T01_37_25.127753", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-15T01-37-25.127753.parquet"]}, {"split": "2024_01_16T02_41_01.393804", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-16T02-41-01.393804.parquet"]}, {"split": "2024_01_16T16_13_12.225780", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-16T16-13-12.225780.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-16T16-13-12.225780.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_15T01_37_25.127753", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-15T01-37-25.127753.parquet"]}, {"split": "2024_01_16T02_41_01.393804", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-16T02-41-01.393804.parquet"]}, {"split": "2024_01_16T16_13_12.225780", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-16T16-13-12.225780.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-16T16-13-12.225780.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_15T01_37_25.127753", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T01-37-25.127753.parquet"]}, {"split": "2024_01_16T02_41_01.393804", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-16T02-41-01.393804.parquet"]}, {"split": "2024_01_16T16_13_12.225780", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-16T16-13-12.225780.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-16T16-13-12.225780.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_15T01_37_25.127753", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T01-37-25.127753.parquet"]}, {"split": "2024_01_16T02_41_01.393804", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-16T02-41-01.393804.parquet"]}, {"split": "2024_01_16T16_13_12.225780", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-16T16-13-12.225780.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-16T16-13-12.225780.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_15T01_37_25.127753", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-15T01-37-25.127753.parquet"]}, {"split": "2024_01_16T02_41_01.393804", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-16T02-41-01.393804.parquet"]}, {"split": "2024_01_16T16_13_12.225780", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-16T16-13-12.225780.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-16T16-13-12.225780.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_15T01_37_25.127753", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T01-37-25.127753.parquet"]}, {"split": "2024_01_16T02_41_01.393804", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-16T02-41-01.393804.parquet"]}, {"split": "2024_01_16T16_13_12.225780", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-16T16-13-12.225780.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-16T16-13-12.225780.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_15T01_37_25.127753", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T01-37-25.127753.parquet"]}, {"split": "2024_01_16T02_41_01.393804", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-16T02-41-01.393804.parquet"]}, {"split": "2024_01_16T16_13_12.225780", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-16T16-13-12.225780.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-16T16-13-12.225780.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_15T01_37_25.127753", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T01-37-25.127753.parquet"]}, {"split": "2024_01_16T02_41_01.393804", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-16T02-41-01.393804.parquet"]}, {"split": 
"2024_01_16T16_13_12.225780", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-16T16-13-12.225780.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-16T16-13-12.225780.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_15T01_37_25.127753", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T01-37-25.127753.parquet"]}, {"split": "2024_01_16T02_41_01.393804", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-16T02-41-01.393804.parquet"]}, {"split": "2024_01_16T16_13_12.225780", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-16T16-13-12.225780.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-16T16-13-12.225780.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_15T01_37_25.127753", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-15T01-37-25.127753.parquet"]}, {"split": "2024_01_16T02_41_01.393804", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-16T02-41-01.393804.parquet"]}, {"split": "2024_01_16T16_13_12.225780", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-16T16-13-12.225780.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-16T16-13-12.225780.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_15T01_37_25.127753", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-15T01-37-25.127753.parquet"]}, {"split": "2024_01_16T02_41_01.393804", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-16T02-41-01.393804.parquet"]}, {"split": "2024_01_16T16_13_12.225780", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-16T16-13-12.225780.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-16T16-13-12.225780.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_15T01_37_25.127753", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T01-37-25.127753.parquet"]}, {"split": "2024_01_16T02_41_01.393804", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-16T02-41-01.393804.parquet"]}, {"split": "2024_01_16T16_13_12.225780", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-16T16-13-12.225780.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-16T16-13-12.225780.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_15T01_37_25.127753", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-15T01-37-25.127753.parquet"]}, {"split": "2024_01_16T02_41_01.393804", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-16T02-41-01.393804.parquet"]}, {"split": "2024_01_16T16_13_12.225780", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-16T16-13-12.225780.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-16T16-13-12.225780.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_15T01_37_25.127753", "path": 
["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T01-37-25.127753.parquet"]}, {"split": "2024_01_16T02_41_01.393804", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-16T02-41-01.393804.parquet"]}, {"split": "2024_01_16T16_13_12.225780", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-16T16-13-12.225780.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-16T16-13-12.225780.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_15T01_37_25.127753", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T01-37-25.127753.parquet"]}, {"split": "2024_01_16T02_41_01.393804", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-16T02-41-01.393804.parquet"]}, {"split": "2024_01_16T16_13_12.225780", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-16T16-13-12.225780.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-16T16-13-12.225780.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_01_15T01_37_25.127753", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T01-37-25.127753.parquet"]}, {"split": "2024_01_16T02_41_01.393804", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-16T02-41-01.393804.parquet"]}, {"split": "2024_01_16T16_13_12.225780", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-16T16-13-12.225780.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-16T16-13-12.225780.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_15T01_37_25.127753", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-15T01-37-25.127753.parquet"]}, {"split": "2024_01_16T02_41_01.393804", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-16T02-41-01.393804.parquet"]}, {"split": "2024_01_16T16_13_12.225780", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-16T16-13-12.225780.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-16T16-13-12.225780.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_15T01_37_25.127753", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T01-37-25.127753.parquet"]}, {"split": "2024_01_16T02_41_01.393804", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-16T02-41-01.393804.parquet"]}, {"split": "2024_01_16T16_13_12.225780", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-16T16-13-12.225780.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-16T16-13-12.225780.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_15T01_37_25.127753", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T01-37-25.127753.parquet"]}, {"split": "2024_01_16T02_41_01.393804", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-16T02-41-01.393804.parquet"]}, {"split": "2024_01_16T16_13_12.225780", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-16T16-13-12.225780.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-16T16-13-12.225780.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_15T01_37_25.127753", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T01-37-25.127753.parquet"]}, {"split": "2024_01_16T02_41_01.393804", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-16T02-41-01.393804.parquet"]}, {"split": "2024_01_16T16_13_12.225780", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-16T16-13-12.225780.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-16T16-13-12.225780.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_15T01_37_25.127753", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T01-37-25.127753.parquet"]}, {"split": "2024_01_16T02_41_01.393804", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-16T02-41-01.393804.parquet"]}, {"split": "2024_01_16T16_13_12.225780", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-16T16-13-12.225780.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-16T16-13-12.225780.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_15T01_37_25.127753", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T01-37-25.127753.parquet"]}, {"split": "2024_01_16T02_41_01.393804", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-16T02-41-01.393804.parquet"]}, {"split": "2024_01_16T16_13_12.225780", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-16T16-13-12.225780.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-16T16-13-12.225780.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_15T01_37_25.127753", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T01-37-25.127753.parquet"]}, {"split": "2024_01_16T02_41_01.393804", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-16T02-41-01.393804.parquet"]}, {"split": "2024_01_16T16_13_12.225780", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-16T16-13-12.225780.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-16T16-13-12.225780.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_15T01_37_25.127753", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T01-37-25.127753.parquet"]}, {"split": "2024_01_16T02_41_01.393804", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-16T02-41-01.393804.parquet"]}, {"split": "2024_01_16T16_13_12.225780", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-16T16-13-12.225780.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-16T16-13-12.225780.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_15T01_37_25.127753", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T01-37-25.127753.parquet"]}, {"split": "2024_01_16T02_41_01.393804", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-16T02-41-01.393804.parquet"]}, {"split": "2024_01_16T16_13_12.225780", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-16T16-13-12.225780.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-16T16-13-12.225780.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_15T01_37_25.127753", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T01-37-25.127753.parquet"]}, {"split": "2024_01_16T02_41_01.393804", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-16T02-41-01.393804.parquet"]}, {"split": "2024_01_16T16_13_12.225780", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-16T16-13-12.225780.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-16T16-13-12.225780.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_15T01_37_25.127753", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T01-37-25.127753.parquet"]}, {"split": "2024_01_16T02_41_01.393804", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-16T02-41-01.393804.parquet"]}, {"split": "2024_01_16T16_13_12.225780", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-16T16-13-12.225780.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-16T16-13-12.225780.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_15T01_37_25.127753", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T01-37-25.127753.parquet"]}, {"split": "2024_01_16T02_41_01.393804", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-16T02-41-01.393804.parquet"]}, {"split": "2024_01_16T16_13_12.225780", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-16T16-13-12.225780.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-16T16-13-12.225780.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_15T01_37_25.127753", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T01-37-25.127753.parquet"]}, {"split": "2024_01_16T02_41_01.393804", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-16T02-41-01.393804.parquet"]}, {"split": "2024_01_16T16_13_12.225780", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-16T16-13-12.225780.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-16T16-13-12.225780.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_15T01_37_25.127753", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T01-37-25.127753.parquet"]}, {"split": "2024_01_16T02_41_01.393804", "path": 
["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-16T02-41-01.393804.parquet"]}, {"split": "2024_01_16T16_13_12.225780", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-16T16-13-12.225780.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-16T16-13-12.225780.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_15T01_37_25.127753", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T01-37-25.127753.parquet"]}, {"split": "2024_01_16T02_41_01.393804", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-16T02-41-01.393804.parquet"]}, {"split": "2024_01_16T16_13_12.225780", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-16T16-13-12.225780.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-16T16-13-12.225780.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_15T01_37_25.127753", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-15T01-37-25.127753.parquet"]}, {"split": "2024_01_16T02_41_01.393804", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-16T02-41-01.393804.parquet"]}, {"split": "2024_01_16T16_13_12.225780", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-16T16-13-12.225780.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-16T16-13-12.225780.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_15T01_37_25.127753", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T01-37-25.127753.parquet"]}, {"split": "2024_01_16T02_41_01.393804", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-16T02-41-01.393804.parquet"]}, {"split": "2024_01_16T16_13_12.225780", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-16T16-13-12.225780.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-16T16-13-12.225780.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_15T01_37_25.127753", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-15T01-37-25.127753.parquet"]}, {"split": "2024_01_16T02_41_01.393804", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-16T02-41-01.393804.parquet"]}, {"split": "2024_01_16T16_13_12.225780", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-16T16-13-12.225780.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-16T16-13-12.225780.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_15T01_37_25.127753", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T01-37-25.127753.parquet"]}, {"split": "2024_01_16T02_41_01.393804", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-16T02-41-01.393804.parquet"]}, {"split": "2024_01_16T16_13_12.225780", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-16T16-13-12.225780.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-16T16-13-12.225780.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", 
"data_files": [{"split": "2024_01_15T01_37_25.127753", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T01-37-25.127753.parquet"]}, {"split": "2024_01_16T02_41_01.393804", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-16T02-41-01.393804.parquet"]}, {"split": "2024_01_16T16_13_12.225780", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-16T16-13-12.225780.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-16T16-13-12.225780.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_15T01_37_25.127753", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T01-37-25.127753.parquet"]}, {"split": "2024_01_16T02_41_01.393804", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-16T02-41-01.393804.parquet"]}, {"split": "2024_01_16T16_13_12.225780", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-16T16-13-12.225780.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-16T16-13-12.225780.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_01_15T01_37_25.127753", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-15T01-37-25.127753.parquet"]}, {"split": "2024_01_16T02_41_01.393804", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-16T02-41-01.393804.parquet"]}, {"split": "2024_01_16T16_13_12.225780", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-16T16-13-12.225780.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-16T16-13-12.225780.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_15T01_37_25.127753", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-15T01-37-25.127753.parquet"]}, {"split": "2024_01_16T02_41_01.393804", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-16T02-41-01.393804.parquet"]}, {"split": "2024_01_16T16_13_12.225780", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-16T16-13-12.225780.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-16T16-13-12.225780.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_15T01_37_25.127753", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T01-37-25.127753.parquet"]}, {"split": "2024_01_16T02_41_01.393804", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-16T02-41-01.393804.parquet"]}, {"split": "2024_01_16T16_13_12.225780", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-16T16-13-12.225780.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-16T16-13-12.225780.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_15T01_37_25.127753", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T01-37-25.127753.parquet"]}, {"split": "2024_01_16T02_41_01.393804", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-16T02-41-01.393804.parquet"]}, {"split": "2024_01_16T16_13_12.225780", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-16T16-13-12.225780.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-16T16-13-12.225780.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_15T01_37_25.127753", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T01-37-25.127753.parquet"]}, {"split": "2024_01_16T02_41_01.393804", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-16T02-41-01.393804.parquet"]}, {"split": "2024_01_16T16_13_12.225780", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-16T16-13-12.225780.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-16T16-13-12.225780.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_15T01_37_25.127753", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T01-37-25.127753.parquet"]}, {"split": "2024_01_16T02_41_01.393804", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-16T02-41-01.393804.parquet"]}, {"split": "2024_01_16T16_13_12.225780", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-16T16-13-12.225780.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-16T16-13-12.225780.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_01_15T01_37_25.127753", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-15T01-37-25.127753.parquet"]}, {"split": "2024_01_16T02_41_01.393804", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-16T02-41-01.393804.parquet"]}, {"split": "2024_01_16T16_13_12.225780", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-16T16-13-12.225780.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-16T16-13-12.225780.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_15T01_37_25.127753", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-15T01-37-25.127753.parquet"]}, {"split": "2024_01_16T02_41_01.393804", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-16T02-41-01.393804.parquet"]}, {"split": "2024_01_16T16_13_12.225780", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-16T16-13-12.225780.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-16T16-13-12.225780.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_15T01_37_25.127753", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-15T01-37-25.127753.parquet"]}, {"split": "2024_01_16T02_41_01.393804", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-16T02-41-01.393804.parquet"]}, {"split": "2024_01_16T16_13_12.225780", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-16T16-13-12.225780.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-16T16-13-12.225780.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_15T01_37_25.127753", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T01-37-25.127753.parquet"]}, {"split": "2024_01_16T02_41_01.393804", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-16T02-41-01.393804.parquet"]}, {"split": "2024_01_16T16_13_12.225780", "path": 
["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-16T16-13-12.225780.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-16T16-13-12.225780.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_15T01_37_25.127753", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-15T01-37-25.127753.parquet"]}, {"split": "2024_01_16T02_41_01.393804", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-16T02-41-01.393804.parquet"]}, {"split": "2024_01_16T16_13_12.225780", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-16T16-13-12.225780.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-16T16-13-12.225780.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_15T01_37_25.127753", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T01-37-25.127753.parquet"]}, {"split": "2024_01_16T02_41_01.393804", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-16T02-41-01.393804.parquet"]}, {"split": "2024_01_16T16_13_12.225780", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-16T16-13-12.225780.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-16T16-13-12.225780.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_15T01_37_25.127753", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T01-37-25.127753.parquet"]}, {"split": "2024_01_16T02_41_01.393804", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-16T02-41-01.393804.parquet"]}, {"split": "2024_01_16T16_13_12.225780", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-16T16-13-12.225780.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-16T16-13-12.225780.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_15T01_37_25.127753", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-15T01-37-25.127753.parquet"]}, {"split": "2024_01_16T02_41_01.393804", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-16T02-41-01.393804.parquet"]}, {"split": "2024_01_16T16_13_12.225780", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-16T16-13-12.225780.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-16T16-13-12.225780.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_15T01_37_25.127753", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-15T01-37-25.127753.parquet"]}, {"split": "2024_01_16T02_41_01.393804", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-16T02-41-01.393804.parquet"]}, {"split": "2024_01_16T16_13_12.225780", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-16T16-13-12.225780.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-16T16-13-12.225780.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_15T01_37_25.127753", "path": 
["**/details_harness|hendrycksTest-sociology|5_2024-01-15T01-37-25.127753.parquet"]}, {"split": "2024_01_16T02_41_01.393804", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-16T02-41-01.393804.parquet"]}, {"split": "2024_01_16T16_13_12.225780", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-16T16-13-12.225780.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-16T16-13-12.225780.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_15T01_37_25.127753", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T01-37-25.127753.parquet"]}, {"split": "2024_01_16T02_41_01.393804", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-16T02-41-01.393804.parquet"]}, {"split": "2024_01_16T16_13_12.225780", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-16T16-13-12.225780.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-16T16-13-12.225780.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_15T01_37_25.127753", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-15T01-37-25.127753.parquet"]}, {"split": "2024_01_16T02_41_01.393804", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-16T02-41-01.393804.parquet"]}, {"split": "2024_01_16T16_13_12.225780", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-16T16-13-12.225780.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-16T16-13-12.225780.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_15T01_37_25.127753", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-15T01-37-25.127753.parquet"]}, {"split": "2024_01_16T02_41_01.393804", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-16T02-41-01.393804.parquet"]}, {"split": "2024_01_16T16_13_12.225780", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-16T16-13-12.225780.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-16T16-13-12.225780.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_15T01_37_25.127753", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-15T01-37-25.127753.parquet"]}, {"split": "2024_01_16T02_41_01.393804", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-16T02-41-01.393804.parquet"]}, {"split": "2024_01_16T16_13_12.225780", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-16T16-13-12.225780.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-16T16-13-12.225780.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_15T01_37_25.127753", "path": ["**/details_harness|winogrande|5_2024-01-15T01-37-25.127753.parquet"]}, {"split": "2024_01_16T02_41_01.393804", "path": ["**/details_harness|winogrande|5_2024-01-16T02-41-01.393804.parquet"]}, {"split": "2024_01_16T16_13_12.225780", "path": ["**/details_harness|winogrande|5_2024-01-16T16-13-12.225780.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-16T16-13-12.225780.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_01_15T01_37_25.127753", "path": ["results_2024-01-15T01-37-25.127753.parquet"]}, {"split": "2024_01_16T02_41_01.393804", "path": 
["results_2024-01-16T02-41-01.393804.parquet"]}, {"split": "2024_01_16T16_13_12.225780", "path": ["results_2024-01-16T16-13-12.225780.parquet"]}, {"split": "latest", "path": ["results_2024-01-16T16-13-12.225780.parquet"]}]}]} | 2024-01-16T16:15:30+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of NeuralNovel/Gecko-7B-v0.1
Dataset automatically created during the evaluation run of model NeuralNovel/Gecko-7B-v0.1 on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
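A minimal sketch, assuming the details repository follows the usual open-llm-leaderboard/details_<org>__<model> naming (the exact path is inferred from the model name):

```python
from datasets import load_dataset

# Repository path inferred from the model name (NeuralNovel/Gecko-7B-v0.1);
# adjust it if the actual details repo differs.
data = load_dataset("open-llm-leaderboard/details_NeuralNovel__Gecko-7B-v0.1",
    "harness_winogrande_5",
    split="train")
```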
## Latest results
These are the latest results from run 2024-01-16T16:13:12.225780 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
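To retrieve the aggregated metrics for this run programmatically, the "results" configuration listed for this dataset exposes a "latest" split; a sketch under the same repository-path assumption as above:

```python
from datasets import load_dataset

# The "results" config stores the aggregated metrics of each run; its
# "latest" split points at the most recent one (2024-01-16T16:13:12.225780).
results = load_dataset("open-llm-leaderboard/details_NeuralNovel__Gecko-7B-v0.1",
    "results",
    split="latest")
```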
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Evaluation run of NeuralNovel/Gecko-7B-v0.1\n\n\n\nDataset automatically created during the evaluation run of model NeuralNovel/Gecko-7B-v0.1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-01-16T16:13:12.225780(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of NeuralNovel/Gecko-7B-v0.1\n\n\n\nDataset automatically created during the evaluation run of model NeuralNovel/Gecko-7B-v0.1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-01-16T16:13:12.225780(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] |
999e51ff914ac831ea63560db9e797278b44a8a7 |
This is a dataset with explanations from ChatGPT for the correct and incorrect answers in CommonsenseQA. The explanations are generated by prompting ChatGPT with answer keys and in-context examples. We expect this dataset to be a useful source for understanding the commonsense reasoning ability of LLMs or training other LMs. | KomeijiForce/CommonsenseQA-Explained-by-ChatGPT | [
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:en",
"region:us"
] | 2024-01-15T02:13:59+00:00 | {"language": ["en"], "size_categories": ["10K<n<100K"], "task_categories": ["question-answering"]} | 2024-01-15T02:19:22+00:00 | [] | [
"en"
] | TAGS
#task_categories-question-answering #size_categories-10K<n<100K #language-English #region-us
|
This is a dataset with explanations from ChatGPT for the correct and incorrect answers in CommonsenseQA. The explanations are generated by prompting ChatGPT with answer keys and in-context examples. We expect this dataset to be a useful source for understanding the commonsense reasoning ability of LLMs or training other LMs. | [] | [
"TAGS\n#task_categories-question-answering #size_categories-10K<n<100K #language-English #region-us \n"
] |
4191a2e3641c8e0894850568d1f5ee8b8f3ba7f9 |
This is a dataset with explanations from ChatGPT for the correct and incorrect answers in ARC-Easy. The explanations are generated by prompting ChatGPT with answer keys and in-context examples. We expect this dataset to be a useful source for understanding the commonsense reasoning ability of LLMs or training other LMs. | KomeijiForce/ARC-Easy-Explained-by-ChatGPT | [
"task_categories:question-answering",
"size_categories:1K<n<10K",
"language:en",
"region:us"
] | 2024-01-15T02:22:18+00:00 | {"language": ["en"], "size_categories": ["1K<n<10K"], "task_categories": ["question-answering"]} | 2024-01-15T02:27:45+00:00 | [] | [
"en"
] | TAGS
#task_categories-question-answering #size_categories-1K<n<10K #language-English #region-us
|
This is a dataset with explanations from ChatGPT for the correct and incorrect answers in ARC-Easy. The explanations are generated by prompting ChatGPT with answer keys and in-context examples. We expect this dataset to be a useful source for understanding the commonsense reasoning ability of LLMs or training other LMs. | [] | [
"TAGS\n#task_categories-question-answering #size_categories-1K<n<10K #language-English #region-us \n"
] |
d90607dfedcf8e2a2cb562e75e5cd0f001bea8e2 |
This is a dataset with explanations from ChatGPT for the correct and incorrect answers in ARC Challenge. The explanations are generated by prompting ChatGPT with answer keys and in-context examples. We expect this dataset to be a useful source for understanding the commonsense reasoning ability of LLMs or training other LMs. | KomeijiForce/ARC-Challenge-Explained-by-ChatGPT | [
"task_categories:question-answering",
"size_categories:1K<n<10K",
"language:en",
"region:us"
] | 2024-01-15T02:28:36+00:00 | {"language": ["en"], "size_categories": ["1K<n<10K"], "task_categories": ["question-answering"]} | 2024-01-15T02:31:28+00:00 | [] | [
"en"
] | TAGS
#task_categories-question-answering #size_categories-1K<n<10K #language-English #region-us
|
This is a dataset with explanations from ChatGPT for the correct and incorrect answers in ARC Challenge. The explanations are generated by prompting ChatGPT with answer keys and in-context examples. We expect this dataset to be a useful source for understanding the commonsense reasoning ability of LLMs or training other LMs. | [] | [
"TAGS\n#task_categories-question-answering #size_categories-1K<n<10K #language-English #region-us \n"
] |
018d25f58895cf7acbb8698650d00db927a0a92c |
# Dataset Card for Evaluation run of rombodawg/Everyone-Coder-4x7b-Base
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [rombodawg/Everyone-Coder-4x7b-Base](https://huggingface.co/rombodawg/Everyone-Coder-4x7b-Base) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_rombodawg__Everyone-Coder-4x7b-Base",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2024-01-15T17:47:56.627468](https://huggingface.co/datasets/open-llm-leaderboard/details_rombodawg__Everyone-Coder-4x7b-Base/blob/main/results_2024-01-15T17-47-56.627468.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6447898132540958,
"acc_stderr": 0.031915985387073305,
"acc_norm": 0.6461876134084575,
"acc_norm_stderr": 0.03255592718009434,
"mc1": 0.3390452876376989,
"mc1_stderr": 0.016571797910626615,
"mc2": 0.49160643723765735,
"mc2_stderr": 0.015188709391608397
},
"harness|arc:challenge|25": {
"acc": 0.6117747440273038,
"acc_stderr": 0.014241614207414046,
"acc_norm": 0.6450511945392492,
"acc_norm_stderr": 0.013983036904094087
},
"harness|hellaswag|10": {
"acc": 0.6623182632941645,
"acc_stderr": 0.004719529099913131,
"acc_norm": 0.8481378211511651,
"acc_norm_stderr": 0.0035815378475817965
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.36,
"acc_stderr": 0.04824181513244218,
"acc_norm": 0.36,
"acc_norm_stderr": 0.04824181513244218
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6222222222222222,
"acc_stderr": 0.04188307537595852,
"acc_norm": 0.6222222222222222,
"acc_norm_stderr": 0.04188307537595852
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6907894736842105,
"acc_stderr": 0.037610708698674805,
"acc_norm": 0.6907894736842105,
"acc_norm_stderr": 0.037610708698674805
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.62,
"acc_stderr": 0.048783173121456316,
"acc_norm": 0.62,
"acc_norm_stderr": 0.048783173121456316
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6981132075471698,
"acc_stderr": 0.02825420034443866,
"acc_norm": 0.6981132075471698,
"acc_norm_stderr": 0.02825420034443866
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7638888888888888,
"acc_stderr": 0.03551446610810826,
"acc_norm": 0.7638888888888888,
"acc_norm_stderr": 0.03551446610810826
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.49,
"acc_stderr": 0.05024183937956911,
"acc_norm": 0.49,
"acc_norm_stderr": 0.05024183937956911
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.54,
"acc_stderr": 0.05009082659620332,
"acc_norm": 0.54,
"acc_norm_stderr": 0.05009082659620332
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.33,
"acc_stderr": 0.047258156262526045,
"acc_norm": 0.33,
"acc_norm_stderr": 0.047258156262526045
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6358381502890174,
"acc_stderr": 0.03669072477416907,
"acc_norm": 0.6358381502890174,
"acc_norm_stderr": 0.03669072477416907
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.39215686274509803,
"acc_stderr": 0.04858083574266345,
"acc_norm": 0.39215686274509803,
"acc_norm_stderr": 0.04858083574266345
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.78,
"acc_stderr": 0.04163331998932263,
"acc_norm": 0.78,
"acc_norm_stderr": 0.04163331998932263
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5531914893617021,
"acc_stderr": 0.0325005368436584,
"acc_norm": 0.5531914893617021,
"acc_norm_stderr": 0.0325005368436584
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.4473684210526316,
"acc_stderr": 0.04677473004491199,
"acc_norm": 0.4473684210526316,
"acc_norm_stderr": 0.04677473004491199
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5517241379310345,
"acc_stderr": 0.04144311810878152,
"acc_norm": 0.5517241379310345,
"acc_norm_stderr": 0.04144311810878152
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.41534391534391535,
"acc_stderr": 0.025379524910778408,
"acc_norm": 0.41534391534391535,
"acc_norm_stderr": 0.025379524910778408
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.3888888888888889,
"acc_stderr": 0.0436031486007746,
"acc_norm": 0.3888888888888889,
"acc_norm_stderr": 0.0436031486007746
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.29,
"acc_stderr": 0.04560480215720684,
"acc_norm": 0.29,
"acc_norm_stderr": 0.04560480215720684
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7580645161290323,
"acc_stderr": 0.024362599693031096,
"acc_norm": 0.7580645161290323,
"acc_norm_stderr": 0.024362599693031096
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5172413793103449,
"acc_stderr": 0.035158955511656986,
"acc_norm": 0.5172413793103449,
"acc_norm_stderr": 0.035158955511656986
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.66,
"acc_stderr": 0.04760952285695237,
"acc_norm": 0.66,
"acc_norm_stderr": 0.04760952285695237
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.793939393939394,
"acc_stderr": 0.031584153240477114,
"acc_norm": 0.793939393939394,
"acc_norm_stderr": 0.031584153240477114
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7828282828282829,
"acc_stderr": 0.02937661648494563,
"acc_norm": 0.7828282828282829,
"acc_norm_stderr": 0.02937661648494563
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9015544041450777,
"acc_stderr": 0.02150024957603346,
"acc_norm": 0.9015544041450777,
"acc_norm_stderr": 0.02150024957603346
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6461538461538462,
"acc_stderr": 0.024243783994062153,
"acc_norm": 0.6461538461538462,
"acc_norm_stderr": 0.024243783994062153
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.36666666666666664,
"acc_stderr": 0.029381620726465076,
"acc_norm": 0.36666666666666664,
"acc_norm_stderr": 0.029381620726465076
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.634453781512605,
"acc_stderr": 0.031282177063684614,
"acc_norm": 0.634453781512605,
"acc_norm_stderr": 0.031282177063684614
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.31125827814569534,
"acc_stderr": 0.03780445850526732,
"acc_norm": 0.31125827814569534,
"acc_norm_stderr": 0.03780445850526732
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8201834862385321,
"acc_stderr": 0.01646534546739154,
"acc_norm": 0.8201834862385321,
"acc_norm_stderr": 0.01646534546739154
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5185185185185185,
"acc_stderr": 0.03407632093854051,
"acc_norm": 0.5185185185185185,
"acc_norm_stderr": 0.03407632093854051
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7794117647058824,
"acc_stderr": 0.02910225438967407,
"acc_norm": 0.7794117647058824,
"acc_norm_stderr": 0.02910225438967407
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7974683544303798,
"acc_stderr": 0.026160568246601446,
"acc_norm": 0.7974683544303798,
"acc_norm_stderr": 0.026160568246601446
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6905829596412556,
"acc_stderr": 0.03102441174057222,
"acc_norm": 0.6905829596412556,
"acc_norm_stderr": 0.03102441174057222
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7938931297709924,
"acc_stderr": 0.03547771004159463,
"acc_norm": 0.7938931297709924,
"acc_norm_stderr": 0.03547771004159463
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8181818181818182,
"acc_stderr": 0.03520893951097652,
"acc_norm": 0.8181818181818182,
"acc_norm_stderr": 0.03520893951097652
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.8055555555555556,
"acc_stderr": 0.038260763248848646,
"acc_norm": 0.8055555555555556,
"acc_norm_stderr": 0.038260763248848646
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7975460122699386,
"acc_stderr": 0.031570650789119005,
"acc_norm": 0.7975460122699386,
"acc_norm_stderr": 0.031570650789119005
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5,
"acc_stderr": 0.04745789978762494,
"acc_norm": 0.5,
"acc_norm_stderr": 0.04745789978762494
},
"harness|hendrycksTest-management|5": {
"acc": 0.8349514563106796,
"acc_stderr": 0.036756688322331886,
"acc_norm": 0.8349514563106796,
"acc_norm_stderr": 0.036756688322331886
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8760683760683761,
"acc_stderr": 0.021586494001281382,
"acc_norm": 0.8760683760683761,
"acc_norm_stderr": 0.021586494001281382
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.73,
"acc_stderr": 0.044619604333847394,
"acc_norm": 0.73,
"acc_norm_stderr": 0.044619604333847394
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8212005108556832,
"acc_stderr": 0.013702643715368983,
"acc_norm": 0.8212005108556832,
"acc_norm_stderr": 0.013702643715368983
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7398843930635838,
"acc_stderr": 0.023618678310069356,
"acc_norm": 0.7398843930635838,
"acc_norm_stderr": 0.023618678310069356
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.3318435754189944,
"acc_stderr": 0.015748421208187303,
"acc_norm": 0.3318435754189944,
"acc_norm_stderr": 0.015748421208187303
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7352941176470589,
"acc_stderr": 0.02526169121972948,
"acc_norm": 0.7352941176470589,
"acc_norm_stderr": 0.02526169121972948
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.707395498392283,
"acc_stderr": 0.025839898334877983,
"acc_norm": 0.707395498392283,
"acc_norm_stderr": 0.025839898334877983
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7561728395061729,
"acc_stderr": 0.023891879541959603,
"acc_norm": 0.7561728395061729,
"acc_norm_stderr": 0.023891879541959603
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4858156028368794,
"acc_stderr": 0.02981549448368206,
"acc_norm": 0.4858156028368794,
"acc_norm_stderr": 0.02981549448368206
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.45827900912646674,
"acc_stderr": 0.01272570165695364,
"acc_norm": 0.45827900912646674,
"acc_norm_stderr": 0.01272570165695364
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6801470588235294,
"acc_stderr": 0.02833295951403121,
"acc_norm": 0.6801470588235294,
"acc_norm_stderr": 0.02833295951403121
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.673202614379085,
"acc_stderr": 0.01897542792050721,
"acc_norm": 0.673202614379085,
"acc_norm_stderr": 0.01897542792050721
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6454545454545455,
"acc_stderr": 0.045820048415054174,
"acc_norm": 0.6454545454545455,
"acc_norm_stderr": 0.045820048415054174
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7551020408163265,
"acc_stderr": 0.027529637440174923,
"acc_norm": 0.7551020408163265,
"acc_norm_stderr": 0.027529637440174923
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8258706467661692,
"acc_stderr": 0.026814951200421603,
"acc_norm": 0.8258706467661692,
"acc_norm_stderr": 0.026814951200421603
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.91,
"acc_stderr": 0.028762349126466136,
"acc_norm": 0.91,
"acc_norm_stderr": 0.028762349126466136
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5542168674698795,
"acc_stderr": 0.03869543323472101,
"acc_norm": 0.5542168674698795,
"acc_norm_stderr": 0.03869543323472101
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.847953216374269,
"acc_stderr": 0.027539122889061456,
"acc_norm": 0.847953216374269,
"acc_norm_stderr": 0.027539122889061456
},
"harness|truthfulqa:mc|0": {
"mc1": 0.3390452876376989,
"mc1_stderr": 0.016571797910626615,
"mc2": 0.49160643723765735,
"mc2_stderr": 0.015188709391608397
},
"harness|winogrande|5": {
"acc": 0.7916337805840569,
"acc_stderr": 0.011414554399987729
},
"harness|gsm8k|5": {
"acc": 0.6345716451857468,
"acc_stderr": 0.013264282030266635
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | open-llm-leaderboard/details_rombodawg__Everyone-Coder-4x7b-Base | [
"region:us"
] | 2024-01-15T02:39:40+00:00 | {"pretty_name": "Evaluation run of rombodawg/Everyone-Coder-4x7b-Base", "dataset_summary": "Dataset automatically created during the evaluation run of model [rombodawg/Everyone-Coder-4x7b-Base](https://huggingface.co/rombodawg/Everyone-Coder-4x7b-Base) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_rombodawg__Everyone-Coder-4x7b-Base\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-15T17:47:56.627468](https://huggingface.co/datasets/open-llm-leaderboard/details_rombodawg__Everyone-Coder-4x7b-Base/blob/main/results_2024-01-15T17-47-56.627468.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6447898132540958,\n \"acc_stderr\": 0.031915985387073305,\n \"acc_norm\": 0.6461876134084575,\n \"acc_norm_stderr\": 0.03255592718009434,\n \"mc1\": 0.3390452876376989,\n \"mc1_stderr\": 0.016571797910626615,\n \"mc2\": 0.49160643723765735,\n \"mc2_stderr\": 0.015188709391608397\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6117747440273038,\n \"acc_stderr\": 0.014241614207414046,\n \"acc_norm\": 0.6450511945392492,\n \"acc_norm_stderr\": 0.013983036904094087\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6623182632941645,\n \"acc_stderr\": 0.004719529099913131,\n \"acc_norm\": 0.8481378211511651,\n \"acc_norm_stderr\": 0.0035815378475817965\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.36,\n \"acc_stderr\": 0.04824181513244218,\n \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.04824181513244218\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6222222222222222,\n \"acc_stderr\": 0.04188307537595852,\n \"acc_norm\": 0.6222222222222222,\n \"acc_norm_stderr\": 0.04188307537595852\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6907894736842105,\n \"acc_stderr\": 0.037610708698674805,\n \"acc_norm\": 0.6907894736842105,\n \"acc_norm_stderr\": 0.037610708698674805\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.62,\n \"acc_stderr\": 0.048783173121456316,\n \"acc_norm\": 0.62,\n \"acc_norm_stderr\": 0.048783173121456316\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6981132075471698,\n \"acc_stderr\": 0.02825420034443866,\n \"acc_norm\": 0.6981132075471698,\n \"acc_norm_stderr\": 0.02825420034443866\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7638888888888888,\n \"acc_stderr\": 0.03551446610810826,\n \"acc_norm\": 0.7638888888888888,\n \"acc_norm_stderr\": 0.03551446610810826\n },\n \"harness|hendrycksTest-college_chemistry|5\": 
{\n \"acc\": 0.49,\n \"acc_stderr\": 0.05024183937956911,\n \"acc_norm\": 0.49,\n \"acc_norm_stderr\": 0.05024183937956911\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.54,\n \"acc_stderr\": 0.05009082659620332,\n \"acc_norm\": 0.54,\n \"acc_norm_stderr\": 0.05009082659620332\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.33,\n \"acc_stderr\": 0.047258156262526045,\n \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.047258156262526045\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6358381502890174,\n \"acc_stderr\": 0.03669072477416907,\n \"acc_norm\": 0.6358381502890174,\n \"acc_norm_stderr\": 0.03669072477416907\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.39215686274509803,\n \"acc_stderr\": 0.04858083574266345,\n \"acc_norm\": 0.39215686274509803,\n \"acc_norm_stderr\": 0.04858083574266345\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.78,\n \"acc_stderr\": 0.04163331998932263,\n \"acc_norm\": 0.78,\n \"acc_norm_stderr\": 0.04163331998932263\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5531914893617021,\n \"acc_stderr\": 0.0325005368436584,\n \"acc_norm\": 0.5531914893617021,\n \"acc_norm_stderr\": 0.0325005368436584\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4473684210526316,\n \"acc_stderr\": 0.04677473004491199,\n \"acc_norm\": 0.4473684210526316,\n \"acc_norm_stderr\": 0.04677473004491199\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5517241379310345,\n \"acc_stderr\": 0.04144311810878152,\n \"acc_norm\": 0.5517241379310345,\n \"acc_norm_stderr\": 0.04144311810878152\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.41534391534391535,\n \"acc_stderr\": 0.025379524910778408,\n \"acc_norm\": 0.41534391534391535,\n \"acc_norm_stderr\": 0.025379524910778408\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.3888888888888889,\n \"acc_stderr\": 0.0436031486007746,\n \"acc_norm\": 0.3888888888888889,\n \"acc_norm_stderr\": 0.0436031486007746\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.29,\n \"acc_stderr\": 0.04560480215720684,\n \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.04560480215720684\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7580645161290323,\n \"acc_stderr\": 0.024362599693031096,\n \"acc_norm\": 0.7580645161290323,\n \"acc_norm_stderr\": 0.024362599693031096\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5172413793103449,\n \"acc_stderr\": 0.035158955511656986,\n \"acc_norm\": 0.5172413793103449,\n \"acc_norm_stderr\": 0.035158955511656986\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.66,\n \"acc_stderr\": 0.04760952285695237,\n \"acc_norm\": 0.66,\n \"acc_norm_stderr\": 0.04760952285695237\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.793939393939394,\n \"acc_stderr\": 0.031584153240477114,\n \"acc_norm\": 0.793939393939394,\n \"acc_norm_stderr\": 0.031584153240477114\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7828282828282829,\n \"acc_stderr\": 0.02937661648494563,\n \"acc_norm\": 0.7828282828282829,\n \"acc_norm_stderr\": 0.02937661648494563\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.9015544041450777,\n \"acc_stderr\": 0.02150024957603346,\n \"acc_norm\": 0.9015544041450777,\n \"acc_norm_stderr\": 0.02150024957603346\n },\n 
\"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6461538461538462,\n \"acc_stderr\": 0.024243783994062153,\n \"acc_norm\": 0.6461538461538462,\n \"acc_norm_stderr\": 0.024243783994062153\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.36666666666666664,\n \"acc_stderr\": 0.029381620726465076,\n \"acc_norm\": 0.36666666666666664,\n \"acc_norm_stderr\": 0.029381620726465076\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.634453781512605,\n \"acc_stderr\": 0.031282177063684614,\n \"acc_norm\": 0.634453781512605,\n \"acc_norm_stderr\": 0.031282177063684614\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.31125827814569534,\n \"acc_stderr\": 0.03780445850526732,\n \"acc_norm\": 0.31125827814569534,\n \"acc_norm_stderr\": 0.03780445850526732\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8201834862385321,\n \"acc_stderr\": 0.01646534546739154,\n \"acc_norm\": 0.8201834862385321,\n \"acc_norm_stderr\": 0.01646534546739154\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5185185185185185,\n \"acc_stderr\": 0.03407632093854051,\n \"acc_norm\": 0.5185185185185185,\n \"acc_norm_stderr\": 0.03407632093854051\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7794117647058824,\n \"acc_stderr\": 0.02910225438967407,\n \"acc_norm\": 0.7794117647058824,\n \"acc_norm_stderr\": 0.02910225438967407\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7974683544303798,\n \"acc_stderr\": 0.026160568246601446,\n \"acc_norm\": 0.7974683544303798,\n \"acc_norm_stderr\": 0.026160568246601446\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6905829596412556,\n \"acc_stderr\": 0.03102441174057222,\n \"acc_norm\": 0.6905829596412556,\n \"acc_norm_stderr\": 0.03102441174057222\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7938931297709924,\n \"acc_stderr\": 0.03547771004159463,\n \"acc_norm\": 0.7938931297709924,\n \"acc_norm_stderr\": 0.03547771004159463\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.8181818181818182,\n \"acc_stderr\": 0.03520893951097652,\n \"acc_norm\": 0.8181818181818182,\n \"acc_norm_stderr\": 0.03520893951097652\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8055555555555556,\n \"acc_stderr\": 0.038260763248848646,\n \"acc_norm\": 0.8055555555555556,\n \"acc_norm_stderr\": 0.038260763248848646\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7975460122699386,\n \"acc_stderr\": 0.031570650789119005,\n \"acc_norm\": 0.7975460122699386,\n \"acc_norm_stderr\": 0.031570650789119005\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.04745789978762494,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.04745789978762494\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.8349514563106796,\n \"acc_stderr\": 0.036756688322331886,\n \"acc_norm\": 0.8349514563106796,\n \"acc_norm_stderr\": 0.036756688322331886\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8760683760683761,\n \"acc_stderr\": 0.021586494001281382,\n \"acc_norm\": 0.8760683760683761,\n \"acc_norm_stderr\": 0.021586494001281382\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.73,\n \"acc_stderr\": 0.044619604333847394,\n \"acc_norm\": 0.73,\n \"acc_norm_stderr\": 0.044619604333847394\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8212005108556832,\n 
\"acc_stderr\": 0.013702643715368983,\n \"acc_norm\": 0.8212005108556832,\n \"acc_norm_stderr\": 0.013702643715368983\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7398843930635838,\n \"acc_stderr\": 0.023618678310069356,\n \"acc_norm\": 0.7398843930635838,\n \"acc_norm_stderr\": 0.023618678310069356\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.3318435754189944,\n \"acc_stderr\": 0.015748421208187303,\n \"acc_norm\": 0.3318435754189944,\n \"acc_norm_stderr\": 0.015748421208187303\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7352941176470589,\n \"acc_stderr\": 0.02526169121972948,\n \"acc_norm\": 0.7352941176470589,\n \"acc_norm_stderr\": 0.02526169121972948\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.707395498392283,\n \"acc_stderr\": 0.025839898334877983,\n \"acc_norm\": 0.707395498392283,\n \"acc_norm_stderr\": 0.025839898334877983\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7561728395061729,\n \"acc_stderr\": 0.023891879541959603,\n \"acc_norm\": 0.7561728395061729,\n \"acc_norm_stderr\": 0.023891879541959603\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.4858156028368794,\n \"acc_stderr\": 0.02981549448368206,\n \"acc_norm\": 0.4858156028368794,\n \"acc_norm_stderr\": 0.02981549448368206\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.45827900912646674,\n \"acc_stderr\": 0.01272570165695364,\n \"acc_norm\": 0.45827900912646674,\n \"acc_norm_stderr\": 0.01272570165695364\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6801470588235294,\n \"acc_stderr\": 0.02833295951403121,\n \"acc_norm\": 0.6801470588235294,\n \"acc_norm_stderr\": 0.02833295951403121\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.673202614379085,\n \"acc_stderr\": 0.01897542792050721,\n \"acc_norm\": 0.673202614379085,\n \"acc_norm_stderr\": 0.01897542792050721\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6454545454545455,\n \"acc_stderr\": 0.045820048415054174,\n \"acc_norm\": 0.6454545454545455,\n \"acc_norm_stderr\": 0.045820048415054174\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7551020408163265,\n \"acc_stderr\": 0.027529637440174923,\n \"acc_norm\": 0.7551020408163265,\n \"acc_norm_stderr\": 0.027529637440174923\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8258706467661692,\n \"acc_stderr\": 0.026814951200421603,\n \"acc_norm\": 0.8258706467661692,\n \"acc_norm_stderr\": 0.026814951200421603\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.91,\n \"acc_stderr\": 0.028762349126466136,\n \"acc_norm\": 0.91,\n \"acc_norm_stderr\": 0.028762349126466136\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5542168674698795,\n \"acc_stderr\": 0.03869543323472101,\n \"acc_norm\": 0.5542168674698795,\n \"acc_norm_stderr\": 0.03869543323472101\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.847953216374269,\n \"acc_stderr\": 0.027539122889061456,\n \"acc_norm\": 0.847953216374269,\n \"acc_norm_stderr\": 0.027539122889061456\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.3390452876376989,\n \"mc1_stderr\": 0.016571797910626615,\n \"mc2\": 0.49160643723765735,\n \"mc2_stderr\": 0.015188709391608397\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7916337805840569,\n \"acc_stderr\": 0.011414554399987729\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6345716451857468,\n \"acc_stderr\": 0.013264282030266635\n }\n}\n```", 
"repo_url": "https://huggingface.co/rombodawg/Everyone-Coder-4x7b-Base", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_15T02_37_27.677232", "path": ["**/details_harness|arc:challenge|25_2024-01-15T02-37-27.677232.parquet"]}, {"split": "2024_01_15T17_47_56.627468", "path": ["**/details_harness|arc:challenge|25_2024-01-15T17-47-56.627468.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-15T17-47-56.627468.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_15T02_37_27.677232", "path": ["**/details_harness|gsm8k|5_2024-01-15T02-37-27.677232.parquet"]}, {"split": "2024_01_15T17_47_56.627468", "path": ["**/details_harness|gsm8k|5_2024-01-15T17-47-56.627468.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-15T17-47-56.627468.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_15T02_37_27.677232", "path": ["**/details_harness|hellaswag|10_2024-01-15T02-37-27.677232.parquet"]}, {"split": "2024_01_15T17_47_56.627468", "path": ["**/details_harness|hellaswag|10_2024-01-15T17-47-56.627468.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-15T17-47-56.627468.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_15T02_37_27.677232", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T02-37-27.677232.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-15T02-37-27.677232.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-15T02-37-27.677232.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T02-37-27.677232.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T02-37-27.677232.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-15T02-37-27.677232.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T02-37-27.677232.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T02-37-27.677232.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T02-37-27.677232.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T02-37-27.677232.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-15T02-37-27.677232.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-15T02-37-27.677232.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T02-37-27.677232.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-15T02-37-27.677232.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T02-37-27.677232.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T02-37-27.677232.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T02-37-27.677232.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-15T02-37-27.677232.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T02-37-27.677232.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T02-37-27.677232.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T02-37-27.677232.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T02-37-27.677232.parquet", 
"**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T02-37-27.677232.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T02-37-27.677232.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T02-37-27.677232.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T02-37-27.677232.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T02-37-27.677232.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T02-37-27.677232.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T02-37-27.677232.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T02-37-27.677232.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T02-37-27.677232.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T02-37-27.677232.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-15T02-37-27.677232.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T02-37-27.677232.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-15T02-37-27.677232.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T02-37-27.677232.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T02-37-27.677232.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T02-37-27.677232.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-15T02-37-27.677232.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-15T02-37-27.677232.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T02-37-27.677232.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T02-37-27.677232.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T02-37-27.677232.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T02-37-27.677232.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-15T02-37-27.677232.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-15T02-37-27.677232.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-15T02-37-27.677232.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T02-37-27.677232.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-15T02-37-27.677232.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T02-37-27.677232.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T02-37-27.677232.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-15T02-37-27.677232.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-15T02-37-27.677232.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-15T02-37-27.677232.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T02-37-27.677232.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-15T02-37-27.677232.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-15T02-37-27.677232.parquet"]}, {"split": "2024_01_15T17_47_56.627468", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-15T17-47-56.627468.parquet", 
"**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T17-47-56.627468.parquet", 
"**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-15T17-47-56.627468.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T17-47-56.627468.parquet", 
"**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-15T17-47-56.627468.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-15T17-47-56.627468.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_15T02_37_27.677232", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T02-37-27.677232.parquet"]}, {"split": "2024_01_15T17_47_56.627468", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T17-47-56.627468.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T17-47-56.627468.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_15T02_37_27.677232", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-15T02-37-27.677232.parquet"]}, {"split": "2024_01_15T17_47_56.627468", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-15T17-47-56.627468.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-15T17-47-56.627468.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_15T02_37_27.677232", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-15T02-37-27.677232.parquet"]}, {"split": "2024_01_15T17_47_56.627468", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-15T17-47-56.627468.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-15T17-47-56.627468.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_15T02_37_27.677232", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T02-37-27.677232.parquet"]}, {"split": "2024_01_15T17_47_56.627468", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T17-47-56.627468.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T17-47-56.627468.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_15T02_37_27.677232", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T02-37-27.677232.parquet"]}, {"split": "2024_01_15T17_47_56.627468", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T17-47-56.627468.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T17-47-56.627468.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_15T02_37_27.677232", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-15T02-37-27.677232.parquet"]}, {"split": "2024_01_15T17_47_56.627468", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-15T17-47-56.627468.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-15T17-47-56.627468.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_15T02_37_27.677232", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T02-37-27.677232.parquet"]}, {"split": "2024_01_15T17_47_56.627468", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T17-47-56.627468.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T17-47-56.627468.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_15T02_37_27.677232", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T02-37-27.677232.parquet"]}, {"split": "2024_01_15T17_47_56.627468", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T17-47-56.627468.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T17-47-56.627468.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_15T02_37_27.677232", "path": 
["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T02-37-27.677232.parquet"]}, {"split": "2024_01_15T17_47_56.627468", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T17-47-56.627468.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T17-47-56.627468.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_15T02_37_27.677232", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T02-37-27.677232.parquet"]}, {"split": "2024_01_15T17_47_56.627468", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T17-47-56.627468.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T17-47-56.627468.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_15T02_37_27.677232", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-15T02-37-27.677232.parquet"]}, {"split": "2024_01_15T17_47_56.627468", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-15T17-47-56.627468.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-15T17-47-56.627468.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_15T02_37_27.677232", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-15T02-37-27.677232.parquet"]}, {"split": "2024_01_15T17_47_56.627468", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-15T17-47-56.627468.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-15T17-47-56.627468.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_15T02_37_27.677232", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T02-37-27.677232.parquet"]}, {"split": "2024_01_15T17_47_56.627468", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T17-47-56.627468.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T17-47-56.627468.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_15T02_37_27.677232", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-15T02-37-27.677232.parquet"]}, {"split": "2024_01_15T17_47_56.627468", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-15T17-47-56.627468.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-15T17-47-56.627468.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_15T02_37_27.677232", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T02-37-27.677232.parquet"]}, {"split": "2024_01_15T17_47_56.627468", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T17-47-56.627468.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T17-47-56.627468.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_15T02_37_27.677232", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T02-37-27.677232.parquet"]}, {"split": "2024_01_15T17_47_56.627468", "path": 
["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T17-47-56.627468.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T17-47-56.627468.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_01_15T02_37_27.677232", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T02-37-27.677232.parquet"]}, {"split": "2024_01_15T17_47_56.627468", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T17-47-56.627468.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T17-47-56.627468.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_15T02_37_27.677232", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-15T02-37-27.677232.parquet"]}, {"split": "2024_01_15T17_47_56.627468", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-15T17-47-56.627468.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-15T17-47-56.627468.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_15T02_37_27.677232", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T02-37-27.677232.parquet"]}, {"split": "2024_01_15T17_47_56.627468", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T17-47-56.627468.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T17-47-56.627468.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_15T02_37_27.677232", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T02-37-27.677232.parquet"]}, {"split": "2024_01_15T17_47_56.627468", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T17-47-56.627468.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T17-47-56.627468.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_15T02_37_27.677232", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T02-37-27.677232.parquet"]}, {"split": "2024_01_15T17_47_56.627468", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T17-47-56.627468.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T17-47-56.627468.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_15T02_37_27.677232", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T02-37-27.677232.parquet"]}, {"split": "2024_01_15T17_47_56.627468", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T17-47-56.627468.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T17-47-56.627468.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_15T02_37_27.677232", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T02-37-27.677232.parquet"]}, {"split": "2024_01_15T17_47_56.627468", "path": 
["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T17-47-56.627468.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T17-47-56.627468.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_15T02_37_27.677232", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T02-37-27.677232.parquet"]}, {"split": "2024_01_15T17_47_56.627468", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T17-47-56.627468.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T17-47-56.627468.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_15T02_37_27.677232", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T02-37-27.677232.parquet"]}, {"split": "2024_01_15T17_47_56.627468", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T17-47-56.627468.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T17-47-56.627468.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_15T02_37_27.677232", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T02-37-27.677232.parquet"]}, {"split": "2024_01_15T17_47_56.627468", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T17-47-56.627468.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T17-47-56.627468.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_15T02_37_27.677232", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T02-37-27.677232.parquet"]}, {"split": "2024_01_15T17_47_56.627468", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T17-47-56.627468.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T17-47-56.627468.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_15T02_37_27.677232", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T02-37-27.677232.parquet"]}, {"split": "2024_01_15T17_47_56.627468", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T17-47-56.627468.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T17-47-56.627468.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_15T02_37_27.677232", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T02-37-27.677232.parquet"]}, {"split": "2024_01_15T17_47_56.627468", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T17-47-56.627468.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T17-47-56.627468.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_15T02_37_27.677232", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T02-37-27.677232.parquet"]}, 
{"split": "2024_01_15T17_47_56.627468", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T17-47-56.627468.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T17-47-56.627468.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_15T02_37_27.677232", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T02-37-27.677232.parquet"]}, {"split": "2024_01_15T17_47_56.627468", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T17-47-56.627468.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T17-47-56.627468.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_15T02_37_27.677232", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T02-37-27.677232.parquet"]}, {"split": "2024_01_15T17_47_56.627468", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T17-47-56.627468.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T17-47-56.627468.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_15T02_37_27.677232", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-15T02-37-27.677232.parquet"]}, {"split": "2024_01_15T17_47_56.627468", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-15T17-47-56.627468.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-15T17-47-56.627468.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_15T02_37_27.677232", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T02-37-27.677232.parquet"]}, {"split": "2024_01_15T17_47_56.627468", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T17-47-56.627468.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T17-47-56.627468.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_15T02_37_27.677232", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-15T02-37-27.677232.parquet"]}, {"split": "2024_01_15T17_47_56.627468", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-15T17-47-56.627468.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-15T17-47-56.627468.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_15T02_37_27.677232", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T02-37-27.677232.parquet"]}, {"split": "2024_01_15T17_47_56.627468", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T17-47-56.627468.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T17-47-56.627468.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_15T02_37_27.677232", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T02-37-27.677232.parquet"]}, {"split": "2024_01_15T17_47_56.627468", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T17-47-56.627468.parquet"]}, {"split": 
"latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T17-47-56.627468.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_15T02_37_27.677232", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T02-37-27.677232.parquet"]}, {"split": "2024_01_15T17_47_56.627468", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T17-47-56.627468.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T17-47-56.627468.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_01_15T02_37_27.677232", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-15T02-37-27.677232.parquet"]}, {"split": "2024_01_15T17_47_56.627468", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-15T17-47-56.627468.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-15T17-47-56.627468.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_15T02_37_27.677232", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-15T02-37-27.677232.parquet"]}, {"split": "2024_01_15T17_47_56.627468", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-15T17-47-56.627468.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-15T17-47-56.627468.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_15T02_37_27.677232", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T02-37-27.677232.parquet"]}, {"split": "2024_01_15T17_47_56.627468", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T17-47-56.627468.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T17-47-56.627468.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_15T02_37_27.677232", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T02-37-27.677232.parquet"]}, {"split": "2024_01_15T17_47_56.627468", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T17-47-56.627468.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T17-47-56.627468.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_15T02_37_27.677232", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T02-37-27.677232.parquet"]}, {"split": "2024_01_15T17_47_56.627468", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T17-47-56.627468.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T17-47-56.627468.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_15T02_37_27.677232", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T02-37-27.677232.parquet"]}, {"split": "2024_01_15T17_47_56.627468", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T17-47-56.627468.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T17-47-56.627468.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_01_15T02_37_27.677232", "path": 
["**/details_harness|hendrycksTest-nutrition|5_2024-01-15T02-37-27.677232.parquet"]}, {"split": "2024_01_15T17_47_56.627468", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-15T17-47-56.627468.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-15T17-47-56.627468.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_15T02_37_27.677232", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-15T02-37-27.677232.parquet"]}, {"split": "2024_01_15T17_47_56.627468", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-15T17-47-56.627468.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-15T17-47-56.627468.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_15T02_37_27.677232", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-15T02-37-27.677232.parquet"]}, {"split": "2024_01_15T17_47_56.627468", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-15T17-47-56.627468.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-15T17-47-56.627468.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_15T02_37_27.677232", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T02-37-27.677232.parquet"]}, {"split": "2024_01_15T17_47_56.627468", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T17-47-56.627468.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T17-47-56.627468.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_15T02_37_27.677232", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-15T02-37-27.677232.parquet"]}, {"split": "2024_01_15T17_47_56.627468", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-15T17-47-56.627468.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-15T17-47-56.627468.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_15T02_37_27.677232", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T02-37-27.677232.parquet"]}, {"split": "2024_01_15T17_47_56.627468", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T17-47-56.627468.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T17-47-56.627468.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_15T02_37_27.677232", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T02-37-27.677232.parquet"]}, {"split": "2024_01_15T17_47_56.627468", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T17-47-56.627468.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T17-47-56.627468.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_15T02_37_27.677232", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-15T02-37-27.677232.parquet"]}, {"split": "2024_01_15T17_47_56.627468", "path": 
["**/details_harness|hendrycksTest-public_relations|5_2024-01-15T17-47-56.627468.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-15T17-47-56.627468.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_15T02_37_27.677232", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-15T02-37-27.677232.parquet"]}, {"split": "2024_01_15T17_47_56.627468", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-15T17-47-56.627468.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-15T17-47-56.627468.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_15T02_37_27.677232", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-15T02-37-27.677232.parquet"]}, {"split": "2024_01_15T17_47_56.627468", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-15T17-47-56.627468.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-15T17-47-56.627468.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_15T02_37_27.677232", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T02-37-27.677232.parquet"]}, {"split": "2024_01_15T17_47_56.627468", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T17-47-56.627468.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T17-47-56.627468.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_15T02_37_27.677232", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-15T02-37-27.677232.parquet"]}, {"split": "2024_01_15T17_47_56.627468", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-15T17-47-56.627468.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-15T17-47-56.627468.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_15T02_37_27.677232", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-15T02-37-27.677232.parquet"]}, {"split": "2024_01_15T17_47_56.627468", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-15T17-47-56.627468.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-15T17-47-56.627468.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_15T02_37_27.677232", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-15T02-37-27.677232.parquet"]}, {"split": "2024_01_15T17_47_56.627468", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-15T17-47-56.627468.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-15T17-47-56.627468.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_15T02_37_27.677232", "path": ["**/details_harness|winogrande|5_2024-01-15T02-37-27.677232.parquet"]}, {"split": "2024_01_15T17_47_56.627468", "path": ["**/details_harness|winogrande|5_2024-01-15T17-47-56.627468.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-15T17-47-56.627468.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_01_15T02_37_27.677232", "path": ["results_2024-01-15T02-37-27.677232.parquet"]}, {"split": "2024_01_15T17_47_56.627468", "path": 
["results_2024-01-15T17-47-56.627468.parquet"]}, {"split": "latest", "path": ["results_2024-01-15T17-47-56.627468.parquet"]}]}]} | 2024-01-15T17:50:34+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of rombodawg/Everyone-Coder-4x7b-Base
Dataset automatically created during the evaluation run of model rombodawg/Everyone-Coder-4x7b-Base on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 runs. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
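A minimal loading sketch, mirroring the code shown on the other evaluation cards in this collection; the repository id is an assumption inferred from the leaderboard's `details_<org>__<model>` naming pattern:

```python
from datasets import load_dataset

# repo id assumed from the leaderboard's details_<org>__<model> naming pattern
data = load_dataset("open-llm-leaderboard/details_rombodawg__Everyone-Coder-4x7b-Base",
	"harness_winogrande_5",
	split="train")
```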
## Latest results
These are the latest results from run 2024-01-15T17:47:56.627468 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split for each eval):
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Evaluation run of rombodawg/Everyone-Coder-4x7b-Base\n\n\n\nDataset automatically created during the evaluation run of model rombodawg/Everyone-Coder-4x7b-Base on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-01-15T17:47:56.627468(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of rombodawg/Everyone-Coder-4x7b-Base\n\n\n\nDataset automatically created during the evaluation run of model rombodawg/Everyone-Coder-4x7b-Base on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-01-15T17:47:56.627468(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] |
e72481391a699d2e233a3ebac76444cf648888c6 | # Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | ericanzdu/dtest | [
"task_categories:token-classification",
"task_categories:image-to-3d",
"task_ids:language-modeling",
"size_categories:1K<n<10K",
"biology",
"art",
"region:us"
] | 2024-01-15T02:39:56+00:00 | {"size_categories": ["1K<n<10K"], "task_categories": ["token-classification", "image-to-3d"], "task_ids": ["language-modeling", "image-resize"], "tags": ["biology", "art"]} | 2024-01-15T10:15:24+00:00 | [] | [] | TAGS
#task_categories-token-classification #task_categories-image-to-3d #task_ids-language-modeling #size_categories-1K<n<10K #biology #art #region-us
| # Dataset Card for Dataset Name
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#task_categories-token-classification #task_categories-image-to-3d #task_ids-language-modeling #size_categories-1K<n<10K #biology #art #region-us \n",
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] |
2c28f957aa40e766851ad7a3916367c3007d2724 |
# Dataset of kitashirakawa_chiyuri/北白河ちゆり (Touhou)
This is the dataset of kitashirakawa_chiyuri/北白河ちゆり (Touhou), containing 151 images and their tags.
The core tags of this character are `blonde_hair, twintails, hat, sailor_hat, yellow_eyes, white_headwear`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 151 | 130.51 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kitashirakawa_chiyuri_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 151 | 86.99 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kitashirakawa_chiyuri_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 299 | 171.97 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kitashirakawa_chiyuri_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 151 | 118.27 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kitashirakawa_chiyuri_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 299 | 225.87 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kitashirakawa_chiyuri_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide a raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/kitashirakawa_chiyuri_touhou',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
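As a small follow-up sketch (not part of the official waifuc workflow), you can tally tag frequencies across the extracted dataset with the same iteration pattern; it assumes iterating `item.meta['tags']` yields tag names, which holds whether the field is a list of tags or a dict keyed by tag:

```python
from collections import Counter

from waifuc.source import LocalSource

# tally tag frequencies across the local dataset (assumes iterating
# item.meta['tags'] yields tag names, true for both a list and a dict)
tag_counter = Counter()
for item in LocalSource('dataset_dir'):
    tag_counter.update(list(item.meta['tags']))

print(tag_counter.most_common(10))
```

The most frequent tags give a quick preview of the clusters listed below.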
## List of Clusters
List of tag clustering results; maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 8 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, blue_sailor_collar, solo, white_shorts, midriff, navel, smile, open_mouth |
| 1 | 7 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 2girls, blue_sailor_collar, midriff, red_hair, short_hair, shorts, navel, folding_chair, smile |
| 2 | 7 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, blue_sailor_collar, medium_hair, sailor_shirt, solo, white_shirt, bangs, blue_neckerchief, blush, upper_body, looking_at_viewer, simple_background, anchor_symbol, happy, white_background, closed_mouth, grin, puffy_short_sleeves |
| 3 | 7 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, blue_sailor_collar, midriff, open_mouth, puffy_short_sleeves, sailor_shirt, solo, white_shirt, white_shorts, anchor_symbol, medium_hair, blue_neckerchief, navel, smile, stomach, blush, folding_chair, happy, looking_at_viewer |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | blue_sailor_collar | solo | white_shorts | midriff | navel | smile | open_mouth | 2girls | red_hair | short_hair | shorts | folding_chair | medium_hair | sailor_shirt | white_shirt | bangs | blue_neckerchief | blush | upper_body | looking_at_viewer | simple_background | anchor_symbol | happy | white_background | closed_mouth | grin | puffy_short_sleeves | stomach |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------------------|:-------|:---------------|:----------|:--------|:--------|:-------------|:---------|:-----------|:-------------|:---------|:----------------|:--------------|:---------------|:--------------|:--------|:-------------------|:--------|:-------------|:--------------------|:--------------------|:----------------|:--------|:-------------------|:---------------|:-------|:----------------------|:----------|
| 0 | 8 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | |
| 1 | 7 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | | X | | | X | X | X | | X | X | X | X | X | | | | | | | | | | | | | | | | |
| 2 | 7 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | X | X | | | | | | | | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | |
| 3 | 7 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | X | X | X | X | X | X | X | | | | | X | X | X | X | | X | X | | X | | X | X | | | | X | X |
| CyberHarem/kitashirakawa_chiyuri_touhou | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2024-01-15T02:43:20+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2024-01-15T03:28:11+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of kitashirakawa\_chiyuri/北白河ちゆり (Touhou)
=================================================
This is the dataset of kitashirakawa\_chiyuri/北白河ちゆり (Touhou), containing 151 images and their tags.
The core tags of this character are 'blonde\_hair, twintails, hat, sailor\_hat, yellow\_eyes, white\_headwear', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide a raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering results; maybe some outfits can be mined here.
### Raw Text Version
### Table Version
| [
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] | [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
8179067022ed804338618bb9e666ddffff500e84 | # Dataset Card for "python-github-code-instruct-filtered-5k"
This dataset is a filtered version of [tomekkorbak/python-github-code](https://huggingface.co/datasets/tomekkorbak/python-github-code), keeping only entries with scores greater than 0.03. Feedback and additional columns were generated through OpenAI and Cohere responses.
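A quick way to inspect the result; the feature names in the comments come from this card's dataset info:

```python
from datasets import load_dataset

# load the filtered split; per this card's dataset info the features are
# system, instruction and output
ds = load_dataset("jtatman/python-github-code-instruct-filtered-5k", split="train")
print(ds.column_names)          # ['system', 'instruction', 'output']
print(ds[0]["instruction"][:200])
```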
| jtatman/python-github-code-instruct-filtered-5k | [
"task_categories:text-generation",
"task_categories:question-answering",
"task_categories:conversational",
"size_categories:1K<n<10K",
"language:en",
"license:apache-2.0",
"Python",
"Code",
"Github",
"region:us"
] | 2024-01-15T02:48:29+00:00 | {"language": ["en"], "license": "apache-2.0", "size_categories": ["1K<n<10K"], "task_categories": ["text-generation", "question-answering", "conversational"], "pretty_name": "github python filtered by score", "dataset_info": {"features": [{"name": "system", "dtype": "string"}, {"name": "instruction", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 23926332, "num_examples": 4502}], "download_size": 9549180, "dataset_size": 23926332}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "tags": ["Python", "Code", "Github"]} | 2024-01-15T03:16:03+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-generation #task_categories-question-answering #task_categories-conversational #size_categories-1K<n<10K #language-English #license-apache-2.0 #Python #Code #Github #region-us
| # Dataset Card for "python-github-code-instruct-filtered-5k"
This fine dataset tomekkorbak/python-github-code, filtered by scores greater than 0.03.
Feedback and additional columns generated through OpenAI and Cohere responses. | [
"# Dataset Card for \"python-github-code-instruct-filtered-5k\"\n\nThis fine dataset tomekkorbak/python-github-code, filtered by scores greater than 0.03. \n\nFeedback and additional columns generated through OpenAI and Cohere responses."
] | [
"TAGS\n#task_categories-text-generation #task_categories-question-answering #task_categories-conversational #size_categories-1K<n<10K #language-English #license-apache-2.0 #Python #Code #Github #region-us \n",
"# Dataset Card for \"python-github-code-instruct-filtered-5k\"\n\nThis fine dataset tomekkorbak/python-github-code, filtered by scores greater than 0.03. \n\nFeedback and additional columns generated through OpenAI and Cohere responses."
] |
4a20eb1780a3b180934bb7c1b836b647f8d723cb |
# Dataset Card for Evaluation run of ewqr2130/alignment-handbook-zephyr-7b-sft-full-dpo-5e7-cont1
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [ewqr2130/alignment-handbook-zephyr-7b-sft-full-dpo-5e7-cont1](https://huggingface.co/ewqr2130/alignment-handbook-zephyr-7b-sft-full-dpo-5e7-cont1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_ewqr2130__alignment-handbook-zephyr-7b-sft-full-dpo-5e7-cont1",
"harness_winogrande_5",
split="train")
```
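The aggregated results can be loaded the same way; a short sketch using the "results" configuration and its "latest" split:

```python
from datasets import load_dataset

# aggregated metrics for the most recent evaluation run
results = load_dataset(
    "open-llm-leaderboard/details_ewqr2130__alignment-handbook-zephyr-7b-sft-full-dpo-5e7-cont1",
    "results",
    split="latest",
)
print(results[0])
```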
## Latest results
These are the [latest results from run 2024-01-15T02:49:27.291692](https://huggingface.co/datasets/open-llm-leaderboard/details_ewqr2130__alignment-handbook-zephyr-7b-sft-full-dpo-5e7-cont1/blob/main/results_2024-01-15T02-49-27.291692.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6032096253875614,
"acc_stderr": 0.03321637816759657,
"acc_norm": 0.6097201219482176,
"acc_norm_stderr": 0.033909808173675136,
"mc1": 0.27906976744186046,
"mc1_stderr": 0.015702107090627904,
"mc2": 0.40550458795616723,
"mc2_stderr": 0.015282277248005289
},
"harness|arc:challenge|25": {
"acc": 0.5614334470989761,
"acc_stderr": 0.014500682618212865,
"acc_norm": 0.6023890784982935,
"acc_norm_stderr": 0.01430175222327954
},
"harness|hellaswag|10": {
"acc": 0.6253734315873332,
"acc_stderr": 0.00483037131784105,
"acc_norm": 0.8228440549691296,
"acc_norm_stderr": 0.003810203308901103
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.35,
"acc_stderr": 0.0479372485441102,
"acc_norm": 0.35,
"acc_norm_stderr": 0.0479372485441102
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5925925925925926,
"acc_stderr": 0.04244633238353228,
"acc_norm": 0.5925925925925926,
"acc_norm_stderr": 0.04244633238353228
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.625,
"acc_stderr": 0.039397364351956274,
"acc_norm": 0.625,
"acc_norm_stderr": 0.039397364351956274
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.53,
"acc_stderr": 0.05016135580465919,
"acc_norm": 0.53,
"acc_norm_stderr": 0.05016135580465919
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6641509433962264,
"acc_stderr": 0.029067220146644826,
"acc_norm": 0.6641509433962264,
"acc_norm_stderr": 0.029067220146644826
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.6805555555555556,
"acc_stderr": 0.038990736873573344,
"acc_norm": 0.6805555555555556,
"acc_norm_stderr": 0.038990736873573344
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.4,
"acc_stderr": 0.04923659639173309,
"acc_norm": 0.4,
"acc_norm_stderr": 0.04923659639173309
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.52,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.52,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.32,
"acc_stderr": 0.046882617226215034,
"acc_norm": 0.32,
"acc_norm_stderr": 0.046882617226215034
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6184971098265896,
"acc_stderr": 0.03703851193099521,
"acc_norm": 0.6184971098265896,
"acc_norm_stderr": 0.03703851193099521
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4117647058823529,
"acc_stderr": 0.04897104952726366,
"acc_norm": 0.4117647058823529,
"acc_norm_stderr": 0.04897104952726366
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.78,
"acc_stderr": 0.04163331998932261,
"acc_norm": 0.78,
"acc_norm_stderr": 0.04163331998932261
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5446808510638298,
"acc_stderr": 0.03255525359340355,
"acc_norm": 0.5446808510638298,
"acc_norm_stderr": 0.03255525359340355
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.40350877192982454,
"acc_stderr": 0.046151869625837026,
"acc_norm": 0.40350877192982454,
"acc_norm_stderr": 0.046151869625837026
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5793103448275863,
"acc_stderr": 0.04113914981189261,
"acc_norm": 0.5793103448275863,
"acc_norm_stderr": 0.04113914981189261
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.3941798941798942,
"acc_stderr": 0.025167982333894143,
"acc_norm": 0.3941798941798942,
"acc_norm_stderr": 0.025167982333894143
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.373015873015873,
"acc_stderr": 0.04325506042017086,
"acc_norm": 0.373015873015873,
"acc_norm_stderr": 0.04325506042017086
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.41,
"acc_stderr": 0.04943110704237102,
"acc_norm": 0.41,
"acc_norm_stderr": 0.04943110704237102
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7225806451612903,
"acc_stderr": 0.025470196835900055,
"acc_norm": 0.7225806451612903,
"acc_norm_stderr": 0.025470196835900055
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5172413793103449,
"acc_stderr": 0.035158955511656986,
"acc_norm": 0.5172413793103449,
"acc_norm_stderr": 0.035158955511656986
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.68,
"acc_stderr": 0.04688261722621504,
"acc_norm": 0.68,
"acc_norm_stderr": 0.04688261722621504
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7212121212121212,
"acc_stderr": 0.035014387062967806,
"acc_norm": 0.7212121212121212,
"acc_norm_stderr": 0.035014387062967806
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7424242424242424,
"acc_stderr": 0.03115626951964684,
"acc_norm": 0.7424242424242424,
"acc_norm_stderr": 0.03115626951964684
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8031088082901554,
"acc_stderr": 0.028697873971860677,
"acc_norm": 0.8031088082901554,
"acc_norm_stderr": 0.028697873971860677
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.5897435897435898,
"acc_stderr": 0.0249393139069408,
"acc_norm": 0.5897435897435898,
"acc_norm_stderr": 0.0249393139069408
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3296296296296296,
"acc_stderr": 0.028661201116524572,
"acc_norm": 0.3296296296296296,
"acc_norm_stderr": 0.028661201116524572
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6554621848739496,
"acc_stderr": 0.030868682604121622,
"acc_norm": 0.6554621848739496,
"acc_norm_stderr": 0.030868682604121622
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.32450331125827814,
"acc_stderr": 0.03822746937658753,
"acc_norm": 0.32450331125827814,
"acc_norm_stderr": 0.03822746937658753
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8036697247706422,
"acc_stderr": 0.017030719339154343,
"acc_norm": 0.8036697247706422,
"acc_norm_stderr": 0.017030719339154343
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.4675925925925926,
"acc_stderr": 0.03402801581358966,
"acc_norm": 0.4675925925925926,
"acc_norm_stderr": 0.03402801581358966
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7254901960784313,
"acc_stderr": 0.03132179803083291,
"acc_norm": 0.7254901960784313,
"acc_norm_stderr": 0.03132179803083291
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7215189873417721,
"acc_stderr": 0.029178682304842548,
"acc_norm": 0.7215189873417721,
"acc_norm_stderr": 0.029178682304842548
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6547085201793722,
"acc_stderr": 0.03191100192835794,
"acc_norm": 0.6547085201793722,
"acc_norm_stderr": 0.03191100192835794
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.648854961832061,
"acc_stderr": 0.04186445163013751,
"acc_norm": 0.648854961832061,
"acc_norm_stderr": 0.04186445163013751
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8016528925619835,
"acc_stderr": 0.036401182719909456,
"acc_norm": 0.8016528925619835,
"acc_norm_stderr": 0.036401182719909456
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7685185185185185,
"acc_stderr": 0.04077494709252626,
"acc_norm": 0.7685185185185185,
"acc_norm_stderr": 0.04077494709252626
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7239263803680982,
"acc_stderr": 0.035123852837050475,
"acc_norm": 0.7239263803680982,
"acc_norm_stderr": 0.035123852837050475
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.4732142857142857,
"acc_stderr": 0.047389751192741546,
"acc_norm": 0.4732142857142857,
"acc_norm_stderr": 0.047389751192741546
},
"harness|hendrycksTest-management|5": {
"acc": 0.7572815533980582,
"acc_stderr": 0.04245022486384495,
"acc_norm": 0.7572815533980582,
"acc_norm_stderr": 0.04245022486384495
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8333333333333334,
"acc_stderr": 0.024414947304543674,
"acc_norm": 0.8333333333333334,
"acc_norm_stderr": 0.024414947304543674
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.7841634738186463,
"acc_stderr": 0.01471168438613996,
"acc_norm": 0.7841634738186463,
"acc_norm_stderr": 0.01471168438613996
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7023121387283237,
"acc_stderr": 0.024617055388676996,
"acc_norm": 0.7023121387283237,
"acc_norm_stderr": 0.024617055388676996
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.3776536312849162,
"acc_stderr": 0.016214148752136632,
"acc_norm": 0.3776536312849162,
"acc_norm_stderr": 0.016214148752136632
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.6535947712418301,
"acc_stderr": 0.02724561304721536,
"acc_norm": 0.6535947712418301,
"acc_norm_stderr": 0.02724561304721536
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6945337620578779,
"acc_stderr": 0.026160584450140453,
"acc_norm": 0.6945337620578779,
"acc_norm_stderr": 0.026160584450140453
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.6728395061728395,
"acc_stderr": 0.026105673861409828,
"acc_norm": 0.6728395061728395,
"acc_norm_stderr": 0.026105673861409828
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.45390070921985815,
"acc_stderr": 0.029700453247291467,
"acc_norm": 0.45390070921985815,
"acc_norm_stderr": 0.029700453247291467
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4211212516297262,
"acc_stderr": 0.012610325733489905,
"acc_norm": 0.4211212516297262,
"acc_norm_stderr": 0.012610325733489905
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6066176470588235,
"acc_stderr": 0.029674288281311155,
"acc_norm": 0.6066176470588235,
"acc_norm_stderr": 0.029674288281311155
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6127450980392157,
"acc_stderr": 0.019706875804085637,
"acc_norm": 0.6127450980392157,
"acc_norm_stderr": 0.019706875804085637
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.5818181818181818,
"acc_stderr": 0.04724577405731572,
"acc_norm": 0.5818181818181818,
"acc_norm_stderr": 0.04724577405731572
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.6857142857142857,
"acc_stderr": 0.029719329422417475,
"acc_norm": 0.6857142857142857,
"acc_norm_stderr": 0.029719329422417475
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8109452736318408,
"acc_stderr": 0.02768691358801301,
"acc_norm": 0.8109452736318408,
"acc_norm_stderr": 0.02768691358801301
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.82,
"acc_stderr": 0.03861229196653693,
"acc_norm": 0.82,
"acc_norm_stderr": 0.03861229196653693
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5120481927710844,
"acc_stderr": 0.03891364495835817,
"acc_norm": 0.5120481927710844,
"acc_norm_stderr": 0.03891364495835817
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.7953216374269005,
"acc_stderr": 0.030944459778533214,
"acc_norm": 0.7953216374269005,
"acc_norm_stderr": 0.030944459778533214
},
"harness|truthfulqa:mc|0": {
"mc1": 0.27906976744186046,
"mc1_stderr": 0.015702107090627904,
"mc2": 0.40550458795616723,
"mc2_stderr": 0.015282277248005289
},
"harness|winogrande|5": {
"acc": 0.771112865035517,
"acc_stderr": 0.011807360224025405
},
"harness|gsm8k|5": {
"acc": 0.2896133434420015,
"acc_stderr": 0.012493927348659629
}
}
```
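To read individual scores out of this JSON programmatically, a short sketch (it assumes a local file containing exactly the dictionary shown above):

```python
import json

# hypothetical local path to the results file
with open("results_2024-01-15T02-49-27.291692.json") as f:
    results = json.load(f)

# aggregated accuracy across all tasks
print(results["all"]["acc"])                            # ~0.6032
# a single task, e.g. 25-shot ARC-Challenge normalized accuracy
print(results["harness|arc:challenge|25"]["acc_norm"])  # ~0.6024
```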
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | open-llm-leaderboard/details_ewqr2130__alignment-handbook-zephyr-7b-sft-full-dpo-5e7-cont1 | [
"region:us"
] | 2024-01-15T02:51:46+00:00 | {"pretty_name": "Evaluation run of ewqr2130/alignment-handbook-zephyr-7b-sft-full-dpo-5e7-cont1", "dataset_summary": "Dataset automatically created during the evaluation run of model [ewqr2130/alignment-handbook-zephyr-7b-sft-full-dpo-5e7-cont1](https://huggingface.co/ewqr2130/alignment-handbook-zephyr-7b-sft-full-dpo-5e7-cont1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_ewqr2130__alignment-handbook-zephyr-7b-sft-full-dpo-5e7-cont1\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-15T02:49:27.291692](https://huggingface.co/datasets/open-llm-leaderboard/details_ewqr2130__alignment-handbook-zephyr-7b-sft-full-dpo-5e7-cont1/blob/main/results_2024-01-15T02-49-27.291692.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6032096253875614,\n \"acc_stderr\": 0.03321637816759657,\n \"acc_norm\": 0.6097201219482176,\n \"acc_norm_stderr\": 0.033909808173675136,\n \"mc1\": 0.27906976744186046,\n \"mc1_stderr\": 0.015702107090627904,\n \"mc2\": 0.40550458795616723,\n \"mc2_stderr\": 0.015282277248005289\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.5614334470989761,\n \"acc_stderr\": 0.014500682618212865,\n \"acc_norm\": 0.6023890784982935,\n \"acc_norm_stderr\": 0.01430175222327954\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6253734315873332,\n \"acc_stderr\": 0.00483037131784105,\n \"acc_norm\": 0.8228440549691296,\n \"acc_norm_stderr\": 0.003810203308901103\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.35,\n \"acc_stderr\": 0.0479372485441102,\n \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5925925925925926,\n \"acc_stderr\": 0.04244633238353228,\n \"acc_norm\": 0.5925925925925926,\n \"acc_norm_stderr\": 0.04244633238353228\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.625,\n \"acc_stderr\": 0.039397364351956274,\n \"acc_norm\": 0.625,\n \"acc_norm_stderr\": 0.039397364351956274\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.53,\n \"acc_stderr\": 0.05016135580465919,\n \"acc_norm\": 0.53,\n \"acc_norm_stderr\": 0.05016135580465919\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6641509433962264,\n \"acc_stderr\": 0.029067220146644826,\n \"acc_norm\": 0.6641509433962264,\n \"acc_norm_stderr\": 0.029067220146644826\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6805555555555556,\n \"acc_stderr\": 0.038990736873573344,\n \"acc_norm\": 
0.6805555555555556,\n \"acc_norm_stderr\": 0.038990736873573344\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.4,\n \"acc_stderr\": 0.04923659639173309,\n \"acc_norm\": 0.4,\n \"acc_norm_stderr\": 0.04923659639173309\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.52,\n \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.52,\n \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.32,\n \"acc_stderr\": 0.046882617226215034,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.046882617226215034\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6184971098265896,\n \"acc_stderr\": 0.03703851193099521,\n \"acc_norm\": 0.6184971098265896,\n \"acc_norm_stderr\": 0.03703851193099521\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.4117647058823529,\n \"acc_stderr\": 0.04897104952726366,\n \"acc_norm\": 0.4117647058823529,\n \"acc_norm_stderr\": 0.04897104952726366\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.78,\n \"acc_stderr\": 0.04163331998932261,\n \"acc_norm\": 0.78,\n \"acc_norm_stderr\": 0.04163331998932261\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5446808510638298,\n \"acc_stderr\": 0.03255525359340355,\n \"acc_norm\": 0.5446808510638298,\n \"acc_norm_stderr\": 0.03255525359340355\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.40350877192982454,\n \"acc_stderr\": 0.046151869625837026,\n \"acc_norm\": 0.40350877192982454,\n \"acc_norm_stderr\": 0.046151869625837026\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5793103448275863,\n \"acc_stderr\": 0.04113914981189261,\n \"acc_norm\": 0.5793103448275863,\n \"acc_norm_stderr\": 0.04113914981189261\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.3941798941798942,\n \"acc_stderr\": 0.025167982333894143,\n \"acc_norm\": 0.3941798941798942,\n \"acc_norm_stderr\": 0.025167982333894143\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.373015873015873,\n \"acc_stderr\": 0.04325506042017086,\n \"acc_norm\": 0.373015873015873,\n \"acc_norm_stderr\": 0.04325506042017086\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.41,\n \"acc_stderr\": 0.04943110704237102,\n \"acc_norm\": 0.41,\n \"acc_norm_stderr\": 0.04943110704237102\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7225806451612903,\n \"acc_stderr\": 0.025470196835900055,\n \"acc_norm\": 0.7225806451612903,\n \"acc_norm_stderr\": 0.025470196835900055\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5172413793103449,\n \"acc_stderr\": 0.035158955511656986,\n \"acc_norm\": 0.5172413793103449,\n \"acc_norm_stderr\": 0.035158955511656986\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.68,\n \"acc_stderr\": 0.04688261722621504,\n \"acc_norm\": 0.68,\n \"acc_norm_stderr\": 0.04688261722621504\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7212121212121212,\n \"acc_stderr\": 0.035014387062967806,\n \"acc_norm\": 0.7212121212121212,\n \"acc_norm_stderr\": 0.035014387062967806\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7424242424242424,\n \"acc_stderr\": 0.03115626951964684,\n \"acc_norm\": 0.7424242424242424,\n \"acc_norm_stderr\": 0.03115626951964684\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8031088082901554,\n \"acc_stderr\": 
0.028697873971860677,\n \"acc_norm\": 0.8031088082901554,\n \"acc_norm_stderr\": 0.028697873971860677\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.5897435897435898,\n \"acc_stderr\": 0.0249393139069408,\n \"acc_norm\": 0.5897435897435898,\n \"acc_norm_stderr\": 0.0249393139069408\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.3296296296296296,\n \"acc_stderr\": 0.028661201116524572,\n \"acc_norm\": 0.3296296296296296,\n \"acc_norm_stderr\": 0.028661201116524572\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6554621848739496,\n \"acc_stderr\": 0.030868682604121622,\n \"acc_norm\": 0.6554621848739496,\n \"acc_norm_stderr\": 0.030868682604121622\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.32450331125827814,\n \"acc_stderr\": 0.03822746937658753,\n \"acc_norm\": 0.32450331125827814,\n \"acc_norm_stderr\": 0.03822746937658753\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8036697247706422,\n \"acc_stderr\": 0.017030719339154343,\n \"acc_norm\": 0.8036697247706422,\n \"acc_norm_stderr\": 0.017030719339154343\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.4675925925925926,\n \"acc_stderr\": 0.03402801581358966,\n \"acc_norm\": 0.4675925925925926,\n \"acc_norm_stderr\": 0.03402801581358966\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7254901960784313,\n \"acc_stderr\": 0.03132179803083291,\n \"acc_norm\": 0.7254901960784313,\n \"acc_norm_stderr\": 0.03132179803083291\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7215189873417721,\n \"acc_stderr\": 0.029178682304842548,\n \"acc_norm\": 0.7215189873417721,\n \"acc_norm_stderr\": 0.029178682304842548\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6547085201793722,\n \"acc_stderr\": 0.03191100192835794,\n \"acc_norm\": 0.6547085201793722,\n \"acc_norm_stderr\": 0.03191100192835794\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.648854961832061,\n \"acc_stderr\": 0.04186445163013751,\n \"acc_norm\": 0.648854961832061,\n \"acc_norm_stderr\": 0.04186445163013751\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.8016528925619835,\n \"acc_stderr\": 0.036401182719909456,\n \"acc_norm\": 0.8016528925619835,\n \"acc_norm_stderr\": 0.036401182719909456\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7685185185185185,\n \"acc_stderr\": 0.04077494709252626,\n \"acc_norm\": 0.7685185185185185,\n \"acc_norm_stderr\": 0.04077494709252626\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7239263803680982,\n \"acc_stderr\": 0.035123852837050475,\n \"acc_norm\": 0.7239263803680982,\n \"acc_norm_stderr\": 0.035123852837050475\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4732142857142857,\n \"acc_stderr\": 0.047389751192741546,\n \"acc_norm\": 0.4732142857142857,\n \"acc_norm_stderr\": 0.047389751192741546\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7572815533980582,\n \"acc_stderr\": 0.04245022486384495,\n \"acc_norm\": 0.7572815533980582,\n \"acc_norm_stderr\": 0.04245022486384495\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8333333333333334,\n \"acc_stderr\": 0.024414947304543674,\n \"acc_norm\": 0.8333333333333334,\n \"acc_norm_stderr\": 0.024414947304543674\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.7,\n 
\"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7841634738186463,\n \"acc_stderr\": 0.01471168438613996,\n \"acc_norm\": 0.7841634738186463,\n \"acc_norm_stderr\": 0.01471168438613996\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7023121387283237,\n \"acc_stderr\": 0.024617055388676996,\n \"acc_norm\": 0.7023121387283237,\n \"acc_norm_stderr\": 0.024617055388676996\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.3776536312849162,\n \"acc_stderr\": 0.016214148752136632,\n \"acc_norm\": 0.3776536312849162,\n \"acc_norm_stderr\": 0.016214148752136632\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.6535947712418301,\n \"acc_stderr\": 0.02724561304721536,\n \"acc_norm\": 0.6535947712418301,\n \"acc_norm_stderr\": 0.02724561304721536\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6945337620578779,\n \"acc_stderr\": 0.026160584450140453,\n \"acc_norm\": 0.6945337620578779,\n \"acc_norm_stderr\": 0.026160584450140453\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.6728395061728395,\n \"acc_stderr\": 0.026105673861409828,\n \"acc_norm\": 0.6728395061728395,\n \"acc_norm_stderr\": 0.026105673861409828\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.45390070921985815,\n \"acc_stderr\": 0.029700453247291467,\n \"acc_norm\": 0.45390070921985815,\n \"acc_norm_stderr\": 0.029700453247291467\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4211212516297262,\n \"acc_stderr\": 0.012610325733489905,\n \"acc_norm\": 0.4211212516297262,\n \"acc_norm_stderr\": 0.012610325733489905\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6066176470588235,\n \"acc_stderr\": 0.029674288281311155,\n \"acc_norm\": 0.6066176470588235,\n \"acc_norm_stderr\": 0.029674288281311155\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6127450980392157,\n \"acc_stderr\": 0.019706875804085637,\n \"acc_norm\": 0.6127450980392157,\n \"acc_norm_stderr\": 0.019706875804085637\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.5818181818181818,\n \"acc_stderr\": 0.04724577405731572,\n \"acc_norm\": 0.5818181818181818,\n \"acc_norm_stderr\": 0.04724577405731572\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.6857142857142857,\n \"acc_stderr\": 0.029719329422417475,\n \"acc_norm\": 0.6857142857142857,\n \"acc_norm_stderr\": 0.029719329422417475\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8109452736318408,\n \"acc_stderr\": 0.02768691358801301,\n \"acc_norm\": 0.8109452736318408,\n \"acc_norm_stderr\": 0.02768691358801301\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.82,\n \"acc_stderr\": 0.03861229196653693,\n \"acc_norm\": 0.82,\n \"acc_norm_stderr\": 0.03861229196653693\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5120481927710844,\n \"acc_stderr\": 0.03891364495835817,\n \"acc_norm\": 0.5120481927710844,\n \"acc_norm_stderr\": 0.03891364495835817\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.7953216374269005,\n \"acc_stderr\": 0.030944459778533214,\n \"acc_norm\": 0.7953216374269005,\n \"acc_norm_stderr\": 0.030944459778533214\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.27906976744186046,\n \"mc1_stderr\": 0.015702107090627904,\n \"mc2\": 0.40550458795616723,\n \"mc2_stderr\": 0.015282277248005289\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.771112865035517,\n \"acc_stderr\": 
0.011807360224025405\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.2896133434420015,\n \"acc_stderr\": 0.012493927348659629\n }\n}\n```", "repo_url": "https://huggingface.co/ewqr2130/alignment-handbook-zephyr-7b-sft-full-dpo-5e7-cont1", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_15T02_49_27.291692", "path": ["**/details_harness|arc:challenge|25_2024-01-15T02-49-27.291692.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-15T02-49-27.291692.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_15T02_49_27.291692", "path": ["**/details_harness|gsm8k|5_2024-01-15T02-49-27.291692.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-15T02-49-27.291692.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_15T02_49_27.291692", "path": ["**/details_harness|hellaswag|10_2024-01-15T02-49-27.291692.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-15T02-49-27.291692.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_15T02_49_27.291692", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T02-49-27.291692.parquet", 
"**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-15T02-49-27.291692.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-15T02-49-27.291692.parquet", 
"**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T02-49-27.291692.parquet", 
"**/details_harness|hendrycksTest-nutrition|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-15T02-49-27.291692.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-15T02-49-27.291692.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_15T02_49_27.291692", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T02-49-27.291692.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T02-49-27.291692.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_15T02_49_27.291692", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-15T02-49-27.291692.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-15T02-49-27.291692.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_15T02_49_27.291692", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-15T02-49-27.291692.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-15T02-49-27.291692.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_15T02_49_27.291692", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T02-49-27.291692.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T02-49-27.291692.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_15T02_49_27.291692", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T02-49-27.291692.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T02-49-27.291692.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_15T02_49_27.291692", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-15T02-49-27.291692.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-15T02-49-27.291692.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_15T02_49_27.291692", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T02-49-27.291692.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T02-49-27.291692.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", 
"data_files": [{"split": "2024_01_15T02_49_27.291692", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T02-49-27.291692.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T02-49-27.291692.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_15T02_49_27.291692", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T02-49-27.291692.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T02-49-27.291692.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_15T02_49_27.291692", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T02-49-27.291692.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T02-49-27.291692.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_15T02_49_27.291692", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-15T02-49-27.291692.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-15T02-49-27.291692.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_15T02_49_27.291692", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-15T02-49-27.291692.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-15T02-49-27.291692.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_15T02_49_27.291692", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T02-49-27.291692.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T02-49-27.291692.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_15T02_49_27.291692", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-15T02-49-27.291692.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-15T02-49-27.291692.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_15T02_49_27.291692", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T02-49-27.291692.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T02-49-27.291692.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_15T02_49_27.291692", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T02-49-27.291692.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T02-49-27.291692.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_01_15T02_49_27.291692", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T02-49-27.291692.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T02-49-27.291692.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_15T02_49_27.291692", "path": 
["**/details_harness|hendrycksTest-global_facts|5_2024-01-15T02-49-27.291692.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-15T02-49-27.291692.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_15T02_49_27.291692", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T02-49-27.291692.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T02-49-27.291692.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_15T02_49_27.291692", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T02-49-27.291692.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T02-49-27.291692.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_15T02_49_27.291692", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T02-49-27.291692.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T02-49-27.291692.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_15T02_49_27.291692", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T02-49-27.291692.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T02-49-27.291692.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_15T02_49_27.291692", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T02-49-27.291692.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T02-49-27.291692.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_15T02_49_27.291692", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T02-49-27.291692.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T02-49-27.291692.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_15T02_49_27.291692", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T02-49-27.291692.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T02-49-27.291692.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_15T02_49_27.291692", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T02-49-27.291692.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T02-49-27.291692.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_15T02_49_27.291692", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T02-49-27.291692.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T02-49-27.291692.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_15T02_49_27.291692", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T02-49-27.291692.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T02-49-27.291692.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_15T02_49_27.291692", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T02-49-27.291692.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T02-49-27.291692.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_15T02_49_27.291692", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T02-49-27.291692.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T02-49-27.291692.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_15T02_49_27.291692", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T02-49-27.291692.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T02-49-27.291692.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_15T02_49_27.291692", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T02-49-27.291692.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T02-49-27.291692.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_15T02_49_27.291692", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-15T02-49-27.291692.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-15T02-49-27.291692.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_15T02_49_27.291692", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T02-49-27.291692.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T02-49-27.291692.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_15T02_49_27.291692", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-15T02-49-27.291692.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-15T02-49-27.291692.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_15T02_49_27.291692", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T02-49-27.291692.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T02-49-27.291692.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_15T02_49_27.291692", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T02-49-27.291692.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T02-49-27.291692.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": 
"2024_01_15T02_49_27.291692", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T02-49-27.291692.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T02-49-27.291692.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_01_15T02_49_27.291692", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-15T02-49-27.291692.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-15T02-49-27.291692.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_15T02_49_27.291692", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-15T02-49-27.291692.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-15T02-49-27.291692.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_15T02_49_27.291692", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T02-49-27.291692.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T02-49-27.291692.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_15T02_49_27.291692", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T02-49-27.291692.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T02-49-27.291692.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_15T02_49_27.291692", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T02-49-27.291692.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T02-49-27.291692.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_15T02_49_27.291692", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T02-49-27.291692.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T02-49-27.291692.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_01_15T02_49_27.291692", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-15T02-49-27.291692.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-15T02-49-27.291692.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_15T02_49_27.291692", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-15T02-49-27.291692.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-15T02-49-27.291692.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_15T02_49_27.291692", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-15T02-49-27.291692.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-15T02-49-27.291692.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_15T02_49_27.291692", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T02-49-27.291692.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T02-49-27.291692.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_15T02_49_27.291692", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-15T02-49-27.291692.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-15T02-49-27.291692.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_15T02_49_27.291692", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T02-49-27.291692.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T02-49-27.291692.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_15T02_49_27.291692", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T02-49-27.291692.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T02-49-27.291692.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_15T02_49_27.291692", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-15T02-49-27.291692.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-15T02-49-27.291692.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_15T02_49_27.291692", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-15T02-49-27.291692.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-15T02-49-27.291692.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_15T02_49_27.291692", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-15T02-49-27.291692.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-15T02-49-27.291692.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_15T02_49_27.291692", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T02-49-27.291692.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T02-49-27.291692.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_15T02_49_27.291692", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-15T02-49-27.291692.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-15T02-49-27.291692.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_15T02_49_27.291692", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-15T02-49-27.291692.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-15T02-49-27.291692.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_15T02_49_27.291692", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-15T02-49-27.291692.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-15T02-49-27.291692.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_15T02_49_27.291692", "path": ["**/details_harness|winogrande|5_2024-01-15T02-49-27.291692.parquet"]}, {"split": "latest", "path": 
["**/details_harness|winogrande|5_2024-01-15T02-49-27.291692.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_01_15T02_49_27.291692", "path": ["results_2024-01-15T02-49-27.291692.parquet"]}, {"split": "latest", "path": ["results_2024-01-15T02-49-27.291692.parquet"]}]}]} | 2024-01-15T02:52:08+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of ewqr2130/alignment-handbook-zephyr-7b-sft-full-dpo-5e7-cont1
Dataset automatically created during the evaluation run of model ewqr2130/alignment-handbook-zephyr-7b-sft-full-dpo-5e7-cont1 on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
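A minimal sketch is shown below. The repository id is an assumption based on the leaderboard's usual `open-llm-leaderboard/details_<org>__<model>` naming convention; the config name and the `latest` split are taken from the file listing above:

```python
from datasets import load_dataset

# Load one evaluated task from this details repository.
# NOTE: the repo id below is an assumption inferred from the leaderboard's
# naming convention; "harness_winogrande_5" is one of the configs listed above.
data = load_dataset(
    "open-llm-leaderboard/details_ewqr2130__alignment-handbook-zephyr-7b-sft-full-dpo-5e7-cont1",
    "harness_winogrande_5",
    split="latest",  # the "latest" split always points to the most recent run
)
print(data)
```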
## Latest results
These are the latest results from run 2024-01-15T02:49:27.291692 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Evaluation run of ewqr2130/alignment-handbook-zephyr-7b-sft-full-dpo-5e7-cont1\n\n\n\nDataset automatically created during the evaluation run of model ewqr2130/alignment-handbook-zephyr-7b-sft-full-dpo-5e7-cont1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-01-15T02:49:27.291692(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of ewqr2130/alignment-handbook-zephyr-7b-sft-full-dpo-5e7-cont1\n\n\n\nDataset automatically created during the evaluation run of model ewqr2130/alignment-handbook-zephyr-7b-sft-full-dpo-5e7-cont1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-01-15T02:49:27.291692(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] |
99062fec1c4512b005bb695a600f25b3ed440585 | # reddit demo datasets
| johncbertrand/reddit-demo | [
"region:us"
] | 2024-01-15T03:13:06+00:00 | {} | 2024-01-15T18:23:09+00:00 | [] | [] | TAGS
#region-us
| # reddit demo datasets
| [] | [
"TAGS\n#region-us \n"
] |
e46bb0b428cbf86bd69f2a35c7df5cb5e5fb35e4 |
# Dataset of sariel/サリエル (Touhou)
This is the dataset of sariel/サリエル (Touhou), containing 45 images and their tags.
The core tags of this character are `long_hair, wings, multiple_wings, angel_wings, very_long_hair, blue_hair, red_eyes`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:---------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 45 | 42.55 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sariel_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 45 | 29.62 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sariel_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 73 | 44.82 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sariel_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 45 | 39.60 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sariel_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 73 | 56.23 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sariel_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/sariel_touhou',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 8 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, solo, staff, closed_eyes, long_sleeves, blue_dress, smile |
| 1 | 6 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, blue_dress, long_sleeves, solo, breasts, closed_mouth, feathered_wings, looking_at_viewer, smile, white_wings, wide_sleeves, holding, angel, bangs, blush, staff, white_shirt |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | staff | closed_eyes | long_sleeves | blue_dress | smile | breasts | closed_mouth | feathered_wings | looking_at_viewer | white_wings | wide_sleeves | holding | angel | bangs | blush | white_shirt |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:--------|:--------------|:---------------|:-------------|:--------|:----------|:---------------|:------------------|:--------------------|:--------------|:---------------|:----------|:--------|:--------|:--------|:--------------|
| 0 | 8 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | | | | | | | | | | | |
| 1 | 6 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
| CyberHarem/sariel_touhou | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2024-01-15T03:19:21+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2024-01-15T03:48:53+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of sariel/サリエル (Touhou)
===============================
This is the dataset of sariel/サリエル (Touhou), containing 45 images and their tags.
The core tags of this character are 'long\_hair, wings, multiple\_wings, angel\_wings, very\_long\_hair, blue\_hair, red\_eyes', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by DeepGHS Team (huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
| [
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] | [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
2c16b9c01d714a79e0d061570afadf1aa51e0de8 |
# Dataset of satsuki_rin/冴月麟 (Touhou)
This is the dataset of satsuki_rin/冴月麟 (Touhou), containing 10 images and their tags.
The core tags of this character are `blonde_hair, ribbon, bow, hair_bow, short_hair, yellow_eyes, hair_ornament, hair_ribbon, red_bow, red_eyes`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:--------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 10 | 9.68 MiB | [Download](https://huggingface.co/datasets/CyberHarem/satsuki_rin_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 10 | 5.41 MiB | [Download](https://huggingface.co/datasets/CyberHarem/satsuki_rin_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 15 | 8.07 MiB | [Download](https://huggingface.co/datasets/CyberHarem/satsuki_rin_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 10 | 8.22 MiB | [Download](https://huggingface.co/datasets/CyberHarem/satsuki_rin_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 15 | 12.60 MiB | [Download](https://huggingface.co/datasets/CyberHarem/satsuki_rin_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/satsuki_rin_touhou',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------|
| 0 | 10 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, solo, instrument, smile, long_sleeves, frills, skirt |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | instrument | smile | long_sleeves | frills | skirt |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:-------------|:--------|:---------------|:---------|:--------|
| 0 | 10 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X |
| CyberHarem/satsuki_rin_touhou | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2024-01-15T03:19:21+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2024-01-15T03:23:57+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of satsuki\_rin/冴月麟 (Touhou)
====================================
This is the dataset of satsuki\_rin/冴月麟 (Touhou), containing 10 images and their tags.
The core tags of this character are 'blonde\_hair, ribbon, bow, hair\_bow, short\_hair, yellow\_eyes, hair\_ornament, hair\_ribbon, red\_bow, red\_eyes', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by DeepGHS Team (huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
| [
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] | [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
ae93d58655cdce5ab325dd1647b46218c1f0e296 |
# Dataset of sakata_nemuno/坂田ネムノ (Touhou)
This is the dataset of sakata_nemuno/坂田ネムノ (Touhou), containing 257 images and their tags.
The core tags of this character are `long_hair, red_eyes, grey_hair, breasts, wavy_hair, very_long_hair, large_breasts, bangs`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 257 | 275.41 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sakata_nemuno_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 257 | 173.46 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sakata_nemuno_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 557 | 342.25 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sakata_nemuno_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 257 | 251.98 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sakata_nemuno_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 557 | 457.81 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sakata_nemuno_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/sakata_nemuno_touhou',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 16 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, bare_shoulders, detached_sleeves, looking_at_viewer, multicolored_dress, nata_(tool), single_strap, solo, holding_weapon, orange_dress, yellow_dress, collarbone, simple_background, barefoot, closed_mouth, full_body, white_background, blue_sleeves, cleaver, medium_breasts, smile, standing |
| 1 | 5 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, barefoot, detached_sleeves, full_body, holding, looking_at_viewer, multicolored_dress, nata_(tool), single_strap, solo, bare_shoulders, open_mouth, weapon, blue_sleeves, smile, standing, yellow_dress |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | bare_shoulders | detached_sleeves | looking_at_viewer | multicolored_dress | nata_(tool) | single_strap | solo | holding_weapon | orange_dress | yellow_dress | collarbone | simple_background | barefoot | closed_mouth | full_body | white_background | blue_sleeves | cleaver | medium_breasts | smile | standing | holding | open_mouth | weapon |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----------------|:-------------------|:--------------------|:---------------------|:--------------|:---------------|:-------|:-----------------|:---------------|:---------------|:-------------|:--------------------|:-----------|:---------------|:------------|:-------------------|:---------------|:----------|:-----------------|:--------|:-----------|:----------|:-------------|:---------|
| 0 | 16 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | |
| 1 | 5 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | X | X | X | X | X | | | X | | | X | | X | | X | | | X | X | X | X | X |
| CyberHarem/sakata_nemuno_touhou | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2024-01-15T03:20:43+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2024-01-15T04:30:01+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of sakata\_nemuno/坂田ネムノ (Touhou)
========================================
This is the dataset of sakata\_nemuno/坂田ネムノ (Touhou), containing 257 images and their tags.
The core tags of this character are 'long\_hair, red\_eyes, grey\_hair, breasts, wavy\_hair, very\_long\_hair, large\_breasts, bangs', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by DeepGHS Team (huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
| [
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] | [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
134be630da19d09ff1ae1675499690b3ba8ef17c |
# Dataset of kotohime/ことひめ/小兎姫 (Touhou)
This is the dataset of kotohime/ことひめ/小兎姫 (Touhou), containing 78 images and their tags.
The core tags of this character are `long_hair, red_hair, red_eyes, bow, hair_bow`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:-----------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 78 | 65.24 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kotohime_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 78 | 46.11 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kotohime_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 142 | 79.75 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kotohime_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 78 | 60.31 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kotohime_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 142 | 98.81 MiB | [Download](https://huggingface.co/datasets/CyberHarem/kotohime_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/kotohime_touhou',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 20 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, kimono, solo, smile, ponytail, sash |
| 1 | 7 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, long_sleeves, solo, wide_sleeves, bangs, looking_at_viewer, simple_background, smile, yellow_bow, closed_mouth, purple_kimono, white_background, white_kimono, obi, sidelocks |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | kimono | solo | smile | ponytail | sash | long_sleeves | wide_sleeves | bangs | looking_at_viewer | simple_background | yellow_bow | closed_mouth | purple_kimono | white_background | white_kimono | obi | sidelocks |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------|:-------|:--------|:-----------|:-------|:---------------|:---------------|:--------|:--------------------|:--------------------|:-------------|:---------------|:----------------|:-------------------|:---------------|:------|:------------|
| 0 | 20 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | | | | | | | | | | | | |
| 1 | 7 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | | X | X | | | X | X | X | X | X | X | X | X | X | X | X | X |
| CyberHarem/kotohime_touhou | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2024-01-15T04:21:32+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2024-01-15T04:48:09+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of kotohime/ことひめ/小兎姫 (Touhou)
=====================================
This is the dataset of kotohime/ことひめ/小兎姫 (Touhou), containing 78 images and their tags.
The core tags of this character are 'long\_hair, red\_hair, red\_eyes, bow, hair\_bow', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by DeepGHS Team (huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
| [
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] | [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
37e53ba0e3cf7cb8e42d1e2f5deb3adc128206ff |
# Dataset of luize/ルイズ (Touhou)
This is the dataset of luize/ルイズ (Touhou), containing 90 images and their tags.
The core tags of this character are `blonde_hair, hat, yellow_eyes, short_hair, ribbon, twintails, bow, white_headwear`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:--------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 90 | 54.99 MiB | [Download](https://huggingface.co/datasets/CyberHarem/luize_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 90 | 40.28 MiB | [Download](https://huggingface.co/datasets/CyberHarem/luize_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 130 | 66.00 MiB | [Download](https://huggingface.co/datasets/CyberHarem/luize_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 90 | 51.38 MiB | [Download](https://huggingface.co/datasets/CyberHarem/luize_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 130 | 82.73 MiB | [Download](https://huggingface.co/datasets/CyberHarem/luize_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/luize_touhou',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 17 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, purple_neckerchief, purple_sailor_collar, short_sleeves, solo, smile, sun_hat, white_shirt, white_skirt, hat_bow, bangs, purple_bow, closed_eyes, medium_hair, closed_mouth, full_body, happy, low_twintails, looking_at_viewer, blush, breasts, open_mouth, simple_background |
| 1 | 14 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, solo, smile, dress, closed_eyes, simple_background, white_background, sailor_collar |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | purple_neckerchief | purple_sailor_collar | short_sleeves | solo | smile | sun_hat | white_shirt | white_skirt | hat_bow | bangs | purple_bow | closed_eyes | medium_hair | closed_mouth | full_body | happy | low_twintails | looking_at_viewer | blush | breasts | open_mouth | simple_background | dress | white_background | sailor_collar |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------------------|:-----------------------|:----------------|:-------|:--------|:----------|:--------------|:--------------|:----------|:--------|:-------------|:--------------|:--------------|:---------------|:------------|:--------|:----------------|:--------------------|:--------|:----------|:-------------|:--------------------|:--------|:-------------------|:----------------|
| 0 | 17 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | |
| 1 | 14 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | | | | X | X | | | | | | | X | | | | | | | | | | X | X | X | X |
| CyberHarem/luize_touhou | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2024-01-15T04:21:37+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2024-01-15T04:45:18+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of luize/ルイズ (Touhou)
=============================
This is the dataset of luize/ルイズ (Touhou), containing 90 images and their tags.
The core tags of this character are 'blonde\_hair, hat, yellow\_eyes, short\_hair, ribbon, twintails, bow, white\_headwear', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by DeepGHS Team (huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
| [
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] | [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
6066e9c6f9e75e18f3625a551087bd44fe8a84e0 |
# Quirky Textbook Trove: Compact Excellence for Small Language Model
The Strange dataset is 100% AI-generated, a compilation aligned with the vision of the [Textbooks Are All You Need](https://arxiv.org/abs/2306.11644) and [Textbooks Are All You Need II: phi-1.5 technical report](https://arxiv.org/abs/2309.05463) research. This dataset features 2.7M synthetic textbooks, encapsulating 16GB of raw text data. The unique name reflects its unconventional synthesis methodology, its compact, deduplicated size, and its emphasis on clear, focused content.
The dataset comprises text documents, each representing a tiny synthetic textbook. The source of this data is advanced open LLM-generated text, ensuring a high-quality, structured representation across a diverse range of subjects.
## Motivation
The creation of the dataset is driven by the need for high-quality, efficient training data. By emulating the principles outlined in the paper, this dataset aims to contribute to the development of more efficient language models that can achieve remarkable performance with less data.
## Usage
Researchers and AI practitioners can leverage this dataset for experiments in language model training, particularly those focused on the efficiency and efficacy of models trained on structured, high-quality data.
### Text Length Distribution
The textbooks in this dataset exhibit the following characteristics in terms of text length (measured in characters):
- **Mean**: 6,456.23
- **Standard Deviation**: 2,559.61
- **25th Percentile**: 4,831
- **Median (50th Percentile)**: 6,265
- **75th Percentile**: 8,048
These statistics indicate a varied range of text lengths, providing a comprehensive dataset suitable for diverse applications in language model training.
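As a quick sanity check, these numbers can be approximated with a short script. This is a minimal sketch, assuming the text column is named `text` (the usual convention for this series); it samples a prefix of the stream rather than all 2.7M records:

```python
import numpy as np
from datasets import load_dataset

# Stream the dataset so the ~16GB of raw text never has to fit in memory.
ds = load_dataset("nampdn-ai/tiny-strange-textbooks", split="train", streaming=True)

# Character length of each tiny textbook; a 100k-document prefix keeps this fast.
lengths = np.array([len(row["text"]) for row, _ in zip(ds, range(100_000))])

print(f"mean={lengths.mean():.2f}  std={lengths.std():.2f}")
print("25th/50th/75th percentiles:", np.percentile(lengths, [25, 50, 75]))
```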
## Contribution
Contributions to the dataset are encouraged and valued. Enhancements can range from adding new textbooks to optimizing existing content for better quality and diversity.
## Acknowledgments
The development of this dataset was inspired by the groundbreaking work presented in the paper. I acknowledge the contribution of all the community members and the original authors (Microsoft Research) who have influenced this project.
### Disclaimer
While every effort has been made to ensure the accuracy of the information contained within this dataset, please note that it is provided 'as is' and without any warranties.
The use of the data is intended for research purposes only. You are advised to verify any information obtained from this dataset before acting upon it.
## Tiny Series
Explore the possibilities and limitations of building Small Language Models with these tiny gems of data!
- [TinyStories](https://arxiv.org/abs/2305.07759): The paper that sparked my interest in the journey of the tiny-* series.
- [tiny-codes](https://huggingface.co/datasets/nampdn-ai/tiny-codes): Collection of 1.6M short and clear code snippets that can help LLMs learn how to reason.
- [tiny-textbooks](https://huggingface.co/datasets/nampdn-ai/tiny-textbooks): 420k "things of internet" synthetic textbooks.
- [tiny-code-textbooks](https://huggingface.co/datasets/nampdn-ai/tiny-code-textbooks): Collection of 207k code explanation synthetic textbooks.
- [tiny-math-textbooks](https://huggingface.co/datasets/nampdn-ai/tiny-math-textbooks): Collection of 635k short math textbook on various mathematical topics.
- [tiny-orca-textbooks](https://huggingface.co/datasets/nampdn-ai/tiny-orca-textbooks): Synthetic textbooks to help the model learn in-context how it should perform a task the right way.
- [tiny-webtext](https://huggingface.co/datasets/nampdn-ai/tiny-webtext): A 6GB (4.5M records) collection of diverse webtext enriched with critical thinking methods to make an unbiased English dataset.
- [tiny-lessons](https://huggingface.co/datasets/nampdn-ai/tiny-lessons): Subset of tiny-textbooks dataset, various lessons about "things of internet" augmented in a bite-sized textbook Markdown format.
- [tiny-bridgedict](https://huggingface.co/datasets/nampdn-ai/tiny-bridgedict): A dataset that links and transfers knowledge between English, Vietnamese, and Chinese in tiny multilingual models.
## Citation
```
@misc{nam_pham_2024,
author = { {Nam Pham} },
title = { tiny-strange-textbooks (Revision 6f304f1) },
year = 2024,
url = { https://huggingface.co/datasets/nampdn-ai/tiny-strange-textbooks },
doi = { 10.57967/hf/1612 },
publisher = { Hugging Face }
}
``` | nampdn-ai/tiny-strange-textbooks | [
"task_categories:text-generation",
"size_categories:1M<n<10M",
"language:en",
"license:apache-2.0",
"synthetic",
"arxiv:2306.11644",
"arxiv:2309.05463",
"arxiv:2305.07759",
"doi:10.57967/hf/1612",
"region:us"
] | 2024-01-15T04:39:00+00:00 | {"language": ["en"], "license": "apache-2.0", "size_categories": ["1M<n<10M"], "task_categories": ["text-generation"], "pretty_name": "Tiny Strange Textbooks", "tags": ["synthetic"]} | 2024-02-02T16:15:23+00:00 | [
"2306.11644",
"2309.05463",
"2305.07759"
] | [
"en"
] | TAGS
#task_categories-text-generation #size_categories-1M<n<10M #language-English #license-apache-2.0 #synthetic #arxiv-2306.11644 #arxiv-2309.05463 #arxiv-2305.07759 #doi-10.57967/hf/1612 #region-us
|
# Quirky Textbook Trove: Compact Excellence for Small Language Model
The Strange dataset is 100% AI-generated, a compilation aligned with the vision of the Textbooks Are All You Need and Textbooks Are All You Need II: phi-1.5 technical report research. This dataset features 2.7M synthetic textbooks, encapsulating 16GB of raw text data. The unique name reflects its unconventional synthesis methodology, its compact, deduplicated size, and its emphasis on clear, focused content.
The dataset comprises text documents, each representing a tiny synthetic textbook. The source of this data is advanced open LLM-generated text, ensuring a high-quality, structured representation across a diverse range of subjects.
## Motivation
The creation of the dataset is driven by the need for high-quality, efficient training data. By emulating the principles outlined in the paper, this dataset aims to contribute to the development of more efficient language models that can achieve remarkable performance with less data.
## Usage
Researchers and AI practitioners can leverage this dataset for experiments in language model training, particularly those focused on the efficiency and efficacy of models trained on structured, high-quality data.
### Text Length Distribution
The textbooks in this dataset exhibit the following characteristics in terms of text length (measured in characters):
- Mean: 6,456.23
- Standard Deviation: 2,559.61
- 25th Percentile: 4,831
- Median (50th Percentile): 6,265
- 75th Percentile: 8,048
These statistics indicate a varied range of text lengths, providing a comprehensive dataset suitable for diverse applications in language model training.
## Contribution
Contributions to the dataset are encouraged and valued. Enhancements can range from adding new textbooks to optimizing existing content for better quality and diversity.
## Acknowledgments
The development of this dataset was inspired by the groundbreaking work presented in the paper. I acknowledge the contribution of all the community members and the original authors (Microsoft Research) who have influenced this project.
### Disclaimer
While every effort has been made to ensure the accuracy of the information contained within this dataset, please note that it is provided 'as is' and without any warranties.
The use of the data is intended for research purposes only. You are advised to verify any information obtained from this dataset before acting upon it.
## Tiny Series
Explore the possibilities and limitations of building Small Language Models with these tiny gems of data!
- TinyStories: The paper that sparked my interest in the journey of the tiny-* series.
- tiny-codes: Collection of 1.6M short and clear code snippets that can help LLMs learn how to reason.
- tiny-textbooks: 420k "things of internet" synthetic textbooks.
- tiny-code-textbooks: Collection of 207k code explanation synthetic textbooks.
- tiny-math-textbooks: Collection of 635k short math textbook on various mathematical topics.
- tiny-orca-textbooks: Synthetic textbooks to help the model learn in-context how it should perform a task the right way.
- tiny-webtext: A 6GB (4.5M records) collection of diverse webtext enriched with critical thinking methods to make an unbiased English dataset.
- tiny-lessons: Subset of tiny-textbooks dataset, various lessons about "things of internet" augmented in a bite-sized textbook Markdown format.
- tiny-bridgedict: A dataset that links and transfers knowledge between English, Vietnamese, and Chinese in tiny multilingual models.
| [
"# Quirky Textbook Trove: Compact Excellence for Small Language Model\n\nStrange dataset is 100% AI-generated, a compilation aligned with the vision of the Textbooks Are All You Need and Textbooks Are All You Need II: phi-1.5 technical report research. This dataset features 2,7M synthetic textbooks, encapsulating 16GB of raw text data. The unique name reflects its unconventional synthesis methodology, its compact size, deduped, and its emphasis on clear, focused content.\n\nThe dataset comprises text documents, each representing a tiny synthetic textbook. The source of this data is advanced open LLM-generated text, ensuring a high-quality, structured representation across a diverse range of subjects.",
"## Motivation\nThe creation of the dataset is driven by the need for high-quality, efficient training data. By emulating the principles outlined in the paper, this dataset aims to contribute to the development of more efficient language models that can achieve remarkable performance with less data.",
"## Usage\nResearchers and AI practitioners can leverage this dataset for experiments in language model training, particularly those focused on the efficiency and efficacy of models trained on structured, high-quality data.",
"### Text Length Distribution\nThe textbooks in this dataset exhibit the following characteristics in terms of text length (measured in characters):\n- Mean: 6,456.23\n- Standard Deviation: 2,559.61\n- 25th Percentile: 4,831\n- Median (50th Percentile): 6,265\n- 75th Percentile: 8,048\n\nThese statistics indicate a varied range of text lengths, providing a comprehensive dataset suitable for diverse applications in language model training.",
"## Contribution\nContributions to the dataset are encouraged and valued. Enhancements can range from adding new textbooks to optimizing existing content for better quality and diversity.",
"## Acknowledgments\nThe development of this dataset was inspired by the groundbreaking work presented in the paper. I acknowledge the contribution of all the community members and the original authors (Microsoft Research) who have influenced this project.",
"### Disclaimer\nWhile every effort has been made to ensure the accuracy of the information contained within this dataset, please note that it is provided 'as is' and without any warranties.\n\nThe use of the data is intended for research purposes only. You are advised to verify any information obtained from this dataset before acting upon it.",
"## Tiny Series\n\nExplore the possibilities and limitations of building Small Language Models with these tiny gems of data!\n\n- TinyStories: The paper that sparked my interest in the journey of the tiny-* series.\n- tiny-codes: Collection of 1.6M short and clear code snippets that can help LLM models learn how to reason.\n- tiny-textbooks: 420k \"things of internet\" synthetic textbooks.\n- tiny-code-textbooks: Collection of 207k code explanation synthetic textbooks.\n- tiny-math-textbooks: Collection of 635k short math textbook on various mathematical topics.\n- tiny-orca-textbooks: Synthetic textbook to help model learn in-context on how it should perform task the right way.\n- tiny-webtext: A 6GB (4.5M records) variety of diverse webtext enriched with critical thinking methods to make unbiased English dataset.\n- tiny-lessons: Subset of tiny-textbooks dataset, various lessons about \"things of internet\" augmented in a bite-sized textbook Markdown format.\n- tiny-bridgedict: A dataset that links and transfers knowledge between English, Vietnamese, Chinese in a tiny multilingual models."
] | [
"TAGS\n#task_categories-text-generation #size_categories-1M<n<10M #language-English #license-apache-2.0 #synthetic #arxiv-2306.11644 #arxiv-2309.05463 #arxiv-2305.07759 #doi-10.57967/hf/1612 #region-us \n",
"# Quirky Textbook Trove: Compact Excellence for Small Language Model\n\nStrange dataset is 100% AI-generated, a compilation aligned with the vision of the Textbooks Are All You Need and Textbooks Are All You Need II: phi-1.5 technical report research. This dataset features 2,7M synthetic textbooks, encapsulating 16GB of raw text data. The unique name reflects its unconventional synthesis methodology, its compact size, deduped, and its emphasis on clear, focused content.\n\nThe dataset comprises text documents, each representing a tiny synthetic textbook. The source of this data is advanced open LLM-generated text, ensuring a high-quality, structured representation across a diverse range of subjects.",
"## Motivation\nThe creation of the dataset is driven by the need for high-quality, efficient training data. By emulating the principles outlined in the paper, this dataset aims to contribute to the development of more efficient language models that can achieve remarkable performance with less data.",
"## Usage\nResearchers and AI practitioners can leverage this dataset for experiments in language model training, particularly those focused on the efficiency and efficacy of models trained on structured, high-quality data.",
"### Text Length Distribution\nThe textbooks in this dataset exhibit the following characteristics in terms of text length (measured in characters):\n- Mean: 6,456.23\n- Standard Deviation: 2,559.61\n- 25th Percentile: 4,831\n- Median (50th Percentile): 6,265\n- 75th Percentile: 8,048\n\nThese statistics indicate a varied range of text lengths, providing a comprehensive dataset suitable for diverse applications in language model training.",
"## Contribution\nContributions to the dataset are encouraged and valued. Enhancements can range from adding new textbooks to optimizing existing content for better quality and diversity.",
"## Acknowledgments\nThe development of this dataset was inspired by the groundbreaking work presented in the paper. I acknowledge the contribution of all the community members and the original authors (Microsoft Research) who have influenced this project.",
"### Disclaimer\nWhile every effort has been made to ensure the accuracy of the information contained within this dataset, please note that it is provided 'as is' and without any warranties.\n\nThe use of the data is intended for research purposes only. You are advised to verify any information obtained from this dataset before acting upon it.",
"## Tiny Series\n\nExplore the possibilities and limitations of building Small Language Models with these tiny gems of data!\n\n- TinyStories: The paper that sparked my interest in the journey of the tiny-* series.\n- tiny-codes: Collection of 1.6M short and clear code snippets that can help LLM models learn how to reason.\n- tiny-textbooks: 420k \"things of internet\" synthetic textbooks.\n- tiny-code-textbooks: Collection of 207k code explanation synthetic textbooks.\n- tiny-math-textbooks: Collection of 635k short math textbook on various mathematical topics.\n- tiny-orca-textbooks: Synthetic textbook to help model learn in-context on how it should perform task the right way.\n- tiny-webtext: A 6GB (4.5M records) variety of diverse webtext enriched with critical thinking methods to make unbiased English dataset.\n- tiny-lessons: Subset of tiny-textbooks dataset, various lessons about \"things of internet\" augmented in a bite-sized textbook Markdown format.\n- tiny-bridgedict: A dataset that links and transfers knowledge between English, Vietnamese, Chinese in a tiny multilingual models."
] |
805f15e5a03238069398bf3596f658d48fd43281 | # Dataset Card for "openhermes_binarized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | jan-hq/openhermes_binarized | [
"region:us"
] | 2024-01-15T04:46:54+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 309587583.1440632, "num_examples": 240402}, {"name": "test", "num_bytes": 3128044.855936845, "num_examples": 2429}], "download_size": 158388623, "dataset_size": 312715628.0}} | 2024-01-15T04:48:45+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "openhermes_binarized"
More Information needed | [
"# Dataset Card for \"openhermes_binarized\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"openhermes_binarized\"\n\nMore Information needed"
] |
5abeacf21c552b34c08501b19452eab8ad4cb06e | # Dataset Card for "dolphin_binarized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | jan-hq/dolphin_binarized | [
"region:us"
] | 2024-01-15T05:13:04+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1571862982.8863597, "num_examples": 882938}, {"name": "test", "num_bytes": 15878177.113640415, "num_examples": 8919}], "download_size": 856689595, "dataset_size": 1587741160.0}} | 2024-01-15T06:24:10+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "dolphin_binarized"
More Information needed | [
"# Dataset Card for \"dolphin_binarized\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"dolphin_binarized\"\n\nMore Information needed"
] |
d32678464cb927d10725bde31398219db9bb42a2 |
# Dataset of elis (Touhou)
This is the dataset of elis (Touhou), containing 108 images and their tags.
The core tags of this character are `blonde_hair, bow, long_hair, wings, hair_bow, bat_wings, pointy_ears, facial_mark, red_bow, hair_ornament, red_eyes, hair_flower, ribbon`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 108 | 90.98 MiB | [Download](https://huggingface.co/datasets/CyberHarem/elis_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 108 | 66.81 MiB | [Download](https://huggingface.co/datasets/CyberHarem/elis_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 208 | 123.64 MiB | [Download](https://huggingface.co/datasets/CyberHarem/elis_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 108 | 85.80 MiB | [Download](https://huggingface.co/datasets/CyberHarem/elis_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 208 | 149.78 MiB | [Download](https://huggingface.co/datasets/CyberHarem/elis_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/elis_touhou',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
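
If you only need one of the pre-processed packages from the table above (for example, the 800px IMG+TXT edition), the same `hf_hub_download` call works with a different filename. The sketch below is a minimal example under one assumption: that the IMG+TXT layout pairs every image with a same-named `.txt` tag file, which this card does not spell out.

```python
import os
import zipfile

from huggingface_hub import hf_hub_download

# download the 800px IMG+TXT package instead of the raw archive
zip_file = hf_hub_download(
    repo_id='CyberHarem/elis_touhou',
    repo_type='dataset',
    filename='dataset-800.zip',
)

# extract files to your directory
dataset_dir = 'dataset_800'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# assumption: each image ships with a same-named .txt file holding its tags
for name in sorted(os.listdir(dataset_dir)):
    if name.endswith('.txt'):
        with open(os.path.join(dataset_dir, name), encoding='utf-8') as f:
            print(name, '->', f.read().strip())
```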
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 26 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, solo, star_(symbol), skirt, vest, wand, smile, flower |
| 1 | 8 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, long_sleeves, red_skirt, solo, star_(symbol), white_shirt, looking_at_viewer, open_vest, red_bowtie, smile, black_vest, closed_mouth, flower, simple_background, long_skirt, white_background, bangs, collared_shirt, holding_wand, arms_behind_back, puffy_sleeves |
| 2 | 8 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, red_bowtie, red_skirt, star_(symbol), white_shirt, black_vest, frilled_skirt, full_body, holding_wand, juliet_sleeves, looking_at_viewer, smile, solo, bangs, flower, open_mouth, open_vest, long_skirt, black_footwear, blush, fang, mary_janes, purple_eyes, buttons, chibi, one_eye_closed, puffy_long_sleeves, red_footwear |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | star_(symbol) | skirt | vest | wand | smile | flower | long_sleeves | red_skirt | white_shirt | looking_at_viewer | open_vest | red_bowtie | black_vest | closed_mouth | simple_background | long_skirt | white_background | bangs | collared_shirt | holding_wand | arms_behind_back | puffy_sleeves | frilled_skirt | full_body | juliet_sleeves | open_mouth | black_footwear | blush | fang | mary_janes | purple_eyes | buttons | chibi | one_eye_closed | puffy_long_sleeves | red_footwear |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:----------------|:--------|:-------|:-------|:--------|:---------|:---------------|:------------|:--------------|:--------------------|:------------|:-------------|:-------------|:---------------|:--------------------|:-------------|:-------------------|:--------|:-----------------|:---------------|:-------------------|:----------------|:----------------|:------------|:-----------------|:-------------|:-----------------|:--------|:-------|:-------------|:--------------|:----------|:--------|:-----------------|:---------------------|:---------------|
| 0 | 26 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 8 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | |
| 2 | 8 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | X | X | | | | X | X | | X | X | X | X | X | X | | | X | | X | | X | | | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
| CyberHarem/elis_touhou | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2024-01-15T05:19:42+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2024-01-15T05:48:28+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of elis (Touhou)
========================
This is the dataset of elis (Touhou), containing 108 images and their tags.
The core tags of this character are 'blonde\_hair, bow, long\_hair, wings, hair\_bow, bat\_wings, pointy\_ears, facial\_mark, red\_bow, hair\_ornament, red\_eyes, hair\_flower, ribbon', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by DeepGHS Team (huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code
List of Clusters
----------------
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
### Table Version
| [
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] | [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
9ee635b81bf0dcaf517526d68ab2a41db7c2076d |
# Dataset of sara/サラ (Touhou)
This is the dataset of sara/サラ (Touhou), containing 58 images and their tags.
The core tags of this character are `pink_hair, short_hair, pink_eyes`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:-------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 58 | 32.62 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sara_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 58 | 26.83 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sara_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 92 | 41.71 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sara_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 58 | 31.45 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sara_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 92 | 46.89 MiB | [Download](https://huggingface.co/datasets/CyberHarem/sara_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/sara_touhou',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
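
Once extracted, the same `LocalSource` iteration shown above can also be used to filter items by tag. This is a small sketch assuming `item.meta['tags']` supports membership tests (it is the tag mapping printed by the loader above).

```python
from waifuc.source import LocalSource

# reuse the directory extracted above and keep only images tagged 'smile'
source = LocalSource('dataset_dir')
smiling = [item for item in source if 'smile' in item.meta['tags']]
print(f'{len(smiling)} of the images are tagged with smile')
```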
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 33 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, solo, smile, red_dress, looking_at_viewer, one_side_up, short_sleeves, simple_background, bangs, full_body, open_mouth, white_background |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | smile | red_dress | looking_at_viewer | one_side_up | short_sleeves | simple_background | bangs | full_body | open_mouth | white_background |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:--------|:------------|:--------------------|:--------------|:----------------|:--------------------|:--------|:------------|:-------------|:-------------------|
| 0 | 33 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X |
| CyberHarem/sara_touhou | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2024-01-15T05:19:45+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2024-01-15T05:47:40+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of sara/サラ (Touhou)
===========================
This is the dataset of sara/サラ (Touhou), containing 58 images and their tags.
The core tags of this character are 'pink\_hair, short\_hair, pink\_eyes', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by DeepGHS Team (huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code
List of Clusters
----------------
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
### Table Version
| [
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] | [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
6e43b689582f7e93e8e1667d5fe8c3c51de27096 | # Dataset Card for "oasst2_top1"
* Top 1% conversations of https://huggingface.co/datasets/OpenAssistant/oasst2
* generated using https://github.com/blancsw/deep_4_all/blob/main/datasets/oasst/convert.py | g-ronimo/oasst2_top1 | [
"license:apache-2.0",
"region:us"
] | 2024-01-15T05:54:05+00:00 | {"license": "apache-2.0", "dataset_info": {"features": [{"name": "conversation", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 24247056, "num_examples": 13757}], "download_size": 14029074, "dataset_size": 24247056}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2024-01-15T15:38:08+00:00 | [] | [] | TAGS
#license-apache-2.0 #region-us
| # Dataset Card for "oasst2_top1"
* Top 1% conversations of URL
* generated using URL | [
"# Dataset Card for \"oasst2_top1\"\n\n* Top 1% conversations of URL\n* generated using URL"
] | [
"TAGS\n#license-apache-2.0 #region-us \n",
"# Dataset Card for \"oasst2_top1\"\n\n* Top 1% conversations of URL\n* generated using URL"
] |
0cc818c08d23abbf02e0ac6dbe6fbd9dbb8a92f7 | # Dataset Card for "oasst2_top1_fr-en-de-es-it"
* Top 1% conversations of https://huggingface.co/datasets/OpenAssistant/oasst2
* language-filtered: fr, en, de, es, ita
* generated using https://github.com/blancsw/deep_4_all/blob/main/datasets/oasst/convert.py
| g-ronimo/oasst2_top1_fr-en-de-es-it | [
"license:apache-2.0",
"region:us"
] | 2024-01-15T05:56:21+00:00 | {"license": "apache-2.0", "dataset_info": {"features": [{"name": "conversation", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 18301524, "num_examples": 10746}], "download_size": 10477478, "dataset_size": 18301524}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2024-01-15T15:36:54+00:00 | [] | [] | TAGS
#license-apache-2.0 #region-us
| # Dataset Card for "oasst2_top1_fr-en-de-es-it"
* Top 1% conversations of URL
* language-filtered: fr, en, de, es, ita
* generated using URL
| [
"# Dataset Card for \"oasst2_top1_fr-en-de-es-it\"\n\n* Top 1% conversations of URL\n* language-filtered: fr, en, de, es, ita\n* generated using URL"
] | [
"TAGS\n#license-apache-2.0 #region-us \n",
"# Dataset Card for \"oasst2_top1_fr-en-de-es-it\"\n\n* Top 1% conversations of URL\n* language-filtered: fr, en, de, es, ita\n* generated using URL"
] |
63e24d2125f0704568d4a25adf2a1247bd16f976 |
# Dataset Card for Evaluation run of deepseek-ai/deepseek-moe-16b-base
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [deepseek-ai/deepseek-moe-16b-base](https://huggingface.co/deepseek-ai/deepseek-moe-16b-base) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_deepseek-ai__deepseek-moe-16b-base",
"harness_winogrande_5",
split="train")
```
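
Each configuration also exposes a `latest` split pointing at the newest run, so you can pin a config without hard-coding a timestamp; for instance (the config name below is one of the 63 listed for this card):

```python
from datasets import load_dataset

# load the newest GSM8K details for this model
data = load_dataset(
    "open-llm-leaderboard/details_deepseek-ai__deepseek-moe-16b-base",
    "harness_gsm8k_5",
    split="latest",
)
print(data)
```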
## Latest results
These are the [latest results from run 2024-01-15T06:33:48.729928](https://huggingface.co/datasets/open-llm-leaderboard/details_deepseek-ai__deepseek-moe-16b-base/blob/main/results_2024-01-15T06-33-48.729928.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and in the "latest" split for each eval):
```python
{
"all": {
"acc": 0.465522984657348,
"acc_stderr": 0.034469796748715614,
"acc_norm": 0.46990944729307677,
"acc_norm_stderr": 0.03523647567293407,
"mc1": 0.23745410036719705,
"mc1_stderr": 0.014896277441041836,
"mc2": 0.3607930335233562,
"mc2_stderr": 0.01354653975819568
},
"harness|arc:challenge|25": {
"acc": 0.49658703071672355,
"acc_stderr": 0.014611050403244077,
"acc_norm": 0.5324232081911263,
"acc_norm_stderr": 0.014580637569995423
},
"harness|hellaswag|10": {
"acc": 0.5957976498705437,
"acc_stderr": 0.004897340793314379,
"acc_norm": 0.7977494523003386,
"acc_norm_stderr": 0.004008571431483689
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.27,
"acc_stderr": 0.044619604333847415,
"acc_norm": 0.27,
"acc_norm_stderr": 0.044619604333847415
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.3925925925925926,
"acc_stderr": 0.04218506215368879,
"acc_norm": 0.3925925925925926,
"acc_norm_stderr": 0.04218506215368879
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.4605263157894737,
"acc_stderr": 0.04056242252249034,
"acc_norm": 0.4605263157894737,
"acc_norm_stderr": 0.04056242252249034
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.48,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.48,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.4716981132075472,
"acc_stderr": 0.0307235352490061,
"acc_norm": 0.4716981132075472,
"acc_norm_stderr": 0.0307235352490061
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.5347222222222222,
"acc_stderr": 0.04171115858181618,
"acc_norm": 0.5347222222222222,
"acc_norm_stderr": 0.04171115858181618
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.37,
"acc_stderr": 0.048523658709391,
"acc_norm": 0.37,
"acc_norm_stderr": 0.048523658709391
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.44,
"acc_stderr": 0.04988876515698589,
"acc_norm": 0.44,
"acc_norm_stderr": 0.04988876515698589
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.3930635838150289,
"acc_stderr": 0.0372424959581773,
"acc_norm": 0.3930635838150289,
"acc_norm_stderr": 0.0372424959581773
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.27450980392156865,
"acc_stderr": 0.04440521906179327,
"acc_norm": 0.27450980392156865,
"acc_norm_stderr": 0.04440521906179327
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.62,
"acc_stderr": 0.048783173121456316,
"acc_norm": 0.62,
"acc_norm_stderr": 0.048783173121456316
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.37446808510638296,
"acc_stderr": 0.031639106653672915,
"acc_norm": 0.37446808510638296,
"acc_norm_stderr": 0.031639106653672915
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.2719298245614035,
"acc_stderr": 0.041857744240220554,
"acc_norm": 0.2719298245614035,
"acc_norm_stderr": 0.041857744240220554
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.4827586206896552,
"acc_stderr": 0.04164188720169377,
"acc_norm": 0.4827586206896552,
"acc_norm_stderr": 0.04164188720169377
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.29365079365079366,
"acc_stderr": 0.023456037383982022,
"acc_norm": 0.29365079365079366,
"acc_norm_stderr": 0.023456037383982022
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.30952380952380953,
"acc_stderr": 0.04134913018303316,
"acc_norm": 0.30952380952380953,
"acc_norm_stderr": 0.04134913018303316
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.33,
"acc_stderr": 0.047258156262526045,
"acc_norm": 0.33,
"acc_norm_stderr": 0.047258156262526045
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.4870967741935484,
"acc_stderr": 0.028434533152681855,
"acc_norm": 0.4870967741935484,
"acc_norm_stderr": 0.028434533152681855
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.2955665024630542,
"acc_stderr": 0.032104944337514575,
"acc_norm": 0.2955665024630542,
"acc_norm_stderr": 0.032104944337514575
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.46,
"acc_stderr": 0.05009082659620332,
"acc_norm": 0.46,
"acc_norm_stderr": 0.05009082659620332
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.5393939393939394,
"acc_stderr": 0.03892207016552012,
"acc_norm": 0.5393939393939394,
"acc_norm_stderr": 0.03892207016552012
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.5555555555555556,
"acc_stderr": 0.03540294377095367,
"acc_norm": 0.5555555555555556,
"acc_norm_stderr": 0.03540294377095367
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.616580310880829,
"acc_stderr": 0.03508984236295341,
"acc_norm": 0.616580310880829,
"acc_norm_stderr": 0.03508984236295341
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.41025641025641024,
"acc_stderr": 0.02493931390694078,
"acc_norm": 0.41025641025641024,
"acc_norm_stderr": 0.02493931390694078
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.2518518518518518,
"acc_stderr": 0.026466117538959912,
"acc_norm": 0.2518518518518518,
"acc_norm_stderr": 0.026466117538959912
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.4327731092436975,
"acc_stderr": 0.03218358107742613,
"acc_norm": 0.4327731092436975,
"acc_norm_stderr": 0.03218358107742613
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.2582781456953642,
"acc_stderr": 0.035737053147634576,
"acc_norm": 0.2582781456953642,
"acc_norm_stderr": 0.035737053147634576
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.6293577981651376,
"acc_stderr": 0.02070745816435298,
"acc_norm": 0.6293577981651376,
"acc_norm_stderr": 0.02070745816435298
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.33796296296296297,
"acc_stderr": 0.03225941352631295,
"acc_norm": 0.33796296296296297,
"acc_norm_stderr": 0.03225941352631295
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.5441176470588235,
"acc_stderr": 0.03495624522015478,
"acc_norm": 0.5441176470588235,
"acc_norm_stderr": 0.03495624522015478
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.6118143459915611,
"acc_stderr": 0.031722950043323296,
"acc_norm": 0.6118143459915611,
"acc_norm_stderr": 0.031722950043323296
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.5515695067264574,
"acc_stderr": 0.03337883736255098,
"acc_norm": 0.5515695067264574,
"acc_norm_stderr": 0.03337883736255098
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.5801526717557252,
"acc_stderr": 0.043285772152629715,
"acc_norm": 0.5801526717557252,
"acc_norm_stderr": 0.043285772152629715
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.5537190082644629,
"acc_stderr": 0.0453793517794788,
"acc_norm": 0.5537190082644629,
"acc_norm_stderr": 0.0453793517794788
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.5092592592592593,
"acc_stderr": 0.04832853553437056,
"acc_norm": 0.5092592592592593,
"acc_norm_stderr": 0.04832853553437056
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.5337423312883436,
"acc_stderr": 0.03919415545048411,
"acc_norm": 0.5337423312883436,
"acc_norm_stderr": 0.03919415545048411
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.375,
"acc_stderr": 0.04595091388086298,
"acc_norm": 0.375,
"acc_norm_stderr": 0.04595091388086298
},
"harness|hendrycksTest-management|5": {
"acc": 0.6407766990291263,
"acc_stderr": 0.04750458399041694,
"acc_norm": 0.6407766990291263,
"acc_norm_stderr": 0.04750458399041694
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.7435897435897436,
"acc_stderr": 0.02860595370200425,
"acc_norm": 0.7435897435897436,
"acc_norm_stderr": 0.02860595370200425
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.49,
"acc_stderr": 0.05024183937956911,
"acc_norm": 0.49,
"acc_norm_stderr": 0.05024183937956911
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.6436781609195402,
"acc_stderr": 0.0171258537627559,
"acc_norm": 0.6436781609195402,
"acc_norm_stderr": 0.0171258537627559
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.47398843930635837,
"acc_stderr": 0.02688264343402289,
"acc_norm": 0.47398843930635837,
"acc_norm_stderr": 0.02688264343402289
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.2424581005586592,
"acc_stderr": 0.014333522059217889,
"acc_norm": 0.2424581005586592,
"acc_norm_stderr": 0.014333522059217889
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.5522875816993464,
"acc_stderr": 0.02847293847803353,
"acc_norm": 0.5522875816993464,
"acc_norm_stderr": 0.02847293847803353
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.5241157556270096,
"acc_stderr": 0.028365041542564577,
"acc_norm": 0.5241157556270096,
"acc_norm_stderr": 0.028365041542564577
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.5030864197530864,
"acc_stderr": 0.02782021415859437,
"acc_norm": 0.5030864197530864,
"acc_norm_stderr": 0.02782021415859437
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.32269503546099293,
"acc_stderr": 0.027889139300534785,
"acc_norm": 0.32269503546099293,
"acc_norm_stderr": 0.027889139300534785
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.3494132985658409,
"acc_stderr": 0.012177306252786698,
"acc_norm": 0.3494132985658409,
"acc_norm_stderr": 0.012177306252786698
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.3897058823529412,
"acc_stderr": 0.029624663581159703,
"acc_norm": 0.3897058823529412,
"acc_norm_stderr": 0.029624663581159703
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.44281045751633985,
"acc_stderr": 0.020095083154577347,
"acc_norm": 0.44281045751633985,
"acc_norm_stderr": 0.020095083154577347
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.509090909090909,
"acc_stderr": 0.04788339768702861,
"acc_norm": 0.509090909090909,
"acc_norm_stderr": 0.04788339768702861
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.5265306122448979,
"acc_stderr": 0.03196412734523272,
"acc_norm": 0.5265306122448979,
"acc_norm_stderr": 0.03196412734523272
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.6567164179104478,
"acc_stderr": 0.03357379665433431,
"acc_norm": 0.6567164179104478,
"acc_norm_stderr": 0.03357379665433431
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.68,
"acc_stderr": 0.04688261722621504,
"acc_norm": 0.68,
"acc_norm_stderr": 0.04688261722621504
},
"harness|hendrycksTest-virology|5": {
"acc": 0.42168674698795183,
"acc_stderr": 0.03844453181770917,
"acc_norm": 0.42168674698795183,
"acc_norm_stderr": 0.03844453181770917
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.6783625730994152,
"acc_stderr": 0.03582529442573122,
"acc_norm": 0.6783625730994152,
"acc_norm_stderr": 0.03582529442573122
},
"harness|truthfulqa:mc|0": {
"mc1": 0.23745410036719705,
"mc1_stderr": 0.014896277441041836,
"mc2": 0.3607930335233562,
"mc2_stderr": 0.01354653975819568
},
"harness|winogrande|5": {
"acc": 0.7371744277821626,
"acc_stderr": 0.012370922527262006
},
"harness|gsm8k|5": {
"acc": 0.1728582259287339,
"acc_stderr": 0.01041543224620057
}
}
```
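
As a quick sanity check, the per-task MMLU accuracies in the JSON above can be averaged with a few lines of Python; this sketch assumes the block has been saved locally as `results.json` (a hypothetical filename, not a file shipped by this repo).

```python
import json

# assumption: the results JSON above was saved as results.json
with open("results.json", encoding="utf-8") as f:
    results = json.load(f)

# average 'acc' over the hendrycksTest (MMLU) tasks
mmlu = {k: v["acc"] for k, v in results.items() if k.startswith("harness|hendrycksTest")}
print(f"MMLU tasks: {len(mmlu)}")
print(f"mean acc: {sum(mmlu.values()) / len(mmlu):.4f}")
```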
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | open-llm-leaderboard/details_deepseek-ai__deepseek-moe-16b-base | [
"region:us"
] | 2024-01-15T06:35:55+00:00 | {"pretty_name": "Evaluation run of deepseek-ai/deepseek-moe-16b-base", "dataset_summary": "Dataset automatically created during the evaluation run of model [deepseek-ai/deepseek-moe-16b-base](https://huggingface.co/deepseek-ai/deepseek-moe-16b-base) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_deepseek-ai__deepseek-moe-16b-base\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-15T06:33:48.729928](https://huggingface.co/datasets/open-llm-leaderboard/details_deepseek-ai__deepseek-moe-16b-base/blob/main/results_2024-01-15T06-33-48.729928.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.465522984657348,\n \"acc_stderr\": 0.034469796748715614,\n \"acc_norm\": 0.46990944729307677,\n \"acc_norm_stderr\": 0.03523647567293407,\n \"mc1\": 0.23745410036719705,\n \"mc1_stderr\": 0.014896277441041836,\n \"mc2\": 0.3607930335233562,\n \"mc2_stderr\": 0.01354653975819568\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.49658703071672355,\n \"acc_stderr\": 0.014611050403244077,\n \"acc_norm\": 0.5324232081911263,\n \"acc_norm_stderr\": 0.014580637569995423\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.5957976498705437,\n \"acc_stderr\": 0.004897340793314379,\n \"acc_norm\": 0.7977494523003386,\n \"acc_norm_stderr\": 0.004008571431483689\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.27,\n \"acc_stderr\": 0.044619604333847415,\n \"acc_norm\": 0.27,\n \"acc_norm_stderr\": 0.044619604333847415\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.3925925925925926,\n \"acc_stderr\": 0.04218506215368879,\n \"acc_norm\": 0.3925925925925926,\n \"acc_norm_stderr\": 0.04218506215368879\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.4605263157894737,\n \"acc_stderr\": 0.04056242252249034,\n \"acc_norm\": 0.4605263157894737,\n \"acc_norm_stderr\": 0.04056242252249034\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.48,\n \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.48,\n \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.4716981132075472,\n \"acc_stderr\": 0.0307235352490061,\n \"acc_norm\": 0.4716981132075472,\n \"acc_norm_stderr\": 0.0307235352490061\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.5347222222222222,\n \"acc_stderr\": 0.04171115858181618,\n \"acc_norm\": 0.5347222222222222,\n \"acc_norm_stderr\": 0.04171115858181618\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n 
\"acc\": 0.37,\n \"acc_stderr\": 0.048523658709391,\n \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.048523658709391\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.44,\n \"acc_stderr\": 0.04988876515698589,\n \"acc_norm\": 0.44,\n \"acc_norm_stderr\": 0.04988876515698589\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.3930635838150289,\n \"acc_stderr\": 0.0372424959581773,\n \"acc_norm\": 0.3930635838150289,\n \"acc_norm_stderr\": 0.0372424959581773\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.27450980392156865,\n \"acc_stderr\": 0.04440521906179327,\n \"acc_norm\": 0.27450980392156865,\n \"acc_norm_stderr\": 0.04440521906179327\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.62,\n \"acc_stderr\": 0.048783173121456316,\n \"acc_norm\": 0.62,\n \"acc_norm_stderr\": 0.048783173121456316\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.37446808510638296,\n \"acc_stderr\": 0.031639106653672915,\n \"acc_norm\": 0.37446808510638296,\n \"acc_norm_stderr\": 0.031639106653672915\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.2719298245614035,\n \"acc_stderr\": 0.041857744240220554,\n \"acc_norm\": 0.2719298245614035,\n \"acc_norm_stderr\": 0.041857744240220554\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.4827586206896552,\n \"acc_stderr\": 0.04164188720169377,\n \"acc_norm\": 0.4827586206896552,\n \"acc_norm_stderr\": 0.04164188720169377\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.29365079365079366,\n \"acc_stderr\": 0.023456037383982022,\n \"acc_norm\": 0.29365079365079366,\n \"acc_norm_stderr\": 0.023456037383982022\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.30952380952380953,\n \"acc_stderr\": 0.04134913018303316,\n \"acc_norm\": 0.30952380952380953,\n \"acc_norm_stderr\": 0.04134913018303316\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.33,\n \"acc_stderr\": 0.047258156262526045,\n \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.047258156262526045\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.4870967741935484,\n \"acc_stderr\": 0.028434533152681855,\n \"acc_norm\": 0.4870967741935484,\n \"acc_norm_stderr\": 0.028434533152681855\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.2955665024630542,\n \"acc_stderr\": 0.032104944337514575,\n \"acc_norm\": 0.2955665024630542,\n \"acc_norm_stderr\": 0.032104944337514575\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.46,\n \"acc_stderr\": 0.05009082659620332,\n \"acc_norm\": 0.46,\n \"acc_norm_stderr\": 0.05009082659620332\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.5393939393939394,\n \"acc_stderr\": 0.03892207016552012,\n \"acc_norm\": 0.5393939393939394,\n \"acc_norm_stderr\": 0.03892207016552012\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.5555555555555556,\n \"acc_stderr\": 0.03540294377095367,\n \"acc_norm\": 0.5555555555555556,\n \"acc_norm_stderr\": 0.03540294377095367\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.616580310880829,\n \"acc_stderr\": 0.03508984236295341,\n \"acc_norm\": 0.616580310880829,\n \"acc_norm_stderr\": 0.03508984236295341\n },\n 
\"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.41025641025641024,\n \"acc_stderr\": 0.02493931390694078,\n \"acc_norm\": 0.41025641025641024,\n \"acc_norm_stderr\": 0.02493931390694078\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.2518518518518518,\n \"acc_stderr\": 0.026466117538959912,\n \"acc_norm\": 0.2518518518518518,\n \"acc_norm_stderr\": 0.026466117538959912\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.4327731092436975,\n \"acc_stderr\": 0.03218358107742613,\n \"acc_norm\": 0.4327731092436975,\n \"acc_norm_stderr\": 0.03218358107742613\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.2582781456953642,\n \"acc_stderr\": 0.035737053147634576,\n \"acc_norm\": 0.2582781456953642,\n \"acc_norm_stderr\": 0.035737053147634576\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.6293577981651376,\n \"acc_stderr\": 0.02070745816435298,\n \"acc_norm\": 0.6293577981651376,\n \"acc_norm_stderr\": 0.02070745816435298\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.33796296296296297,\n \"acc_stderr\": 0.03225941352631295,\n \"acc_norm\": 0.33796296296296297,\n \"acc_norm_stderr\": 0.03225941352631295\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.5441176470588235,\n \"acc_stderr\": 0.03495624522015478,\n \"acc_norm\": 0.5441176470588235,\n \"acc_norm_stderr\": 0.03495624522015478\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.6118143459915611,\n \"acc_stderr\": 0.031722950043323296,\n \"acc_norm\": 0.6118143459915611,\n \"acc_norm_stderr\": 0.031722950043323296\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.5515695067264574,\n \"acc_stderr\": 0.03337883736255098,\n \"acc_norm\": 0.5515695067264574,\n \"acc_norm_stderr\": 0.03337883736255098\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.5801526717557252,\n \"acc_stderr\": 0.043285772152629715,\n \"acc_norm\": 0.5801526717557252,\n \"acc_norm_stderr\": 0.043285772152629715\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.5537190082644629,\n \"acc_stderr\": 0.0453793517794788,\n \"acc_norm\": 0.5537190082644629,\n \"acc_norm_stderr\": 0.0453793517794788\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.5092592592592593,\n \"acc_stderr\": 0.04832853553437056,\n \"acc_norm\": 0.5092592592592593,\n \"acc_norm_stderr\": 0.04832853553437056\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.5337423312883436,\n \"acc_stderr\": 0.03919415545048411,\n \"acc_norm\": 0.5337423312883436,\n \"acc_norm_stderr\": 0.03919415545048411\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.375,\n \"acc_stderr\": 0.04595091388086298,\n \"acc_norm\": 0.375,\n \"acc_norm_stderr\": 0.04595091388086298\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.6407766990291263,\n \"acc_stderr\": 0.04750458399041694,\n \"acc_norm\": 0.6407766990291263,\n \"acc_norm_stderr\": 0.04750458399041694\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.7435897435897436,\n \"acc_stderr\": 0.02860595370200425,\n \"acc_norm\": 0.7435897435897436,\n \"acc_norm_stderr\": 0.02860595370200425\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.49,\n \"acc_stderr\": 0.05024183937956911,\n \"acc_norm\": 0.49,\n \"acc_norm_stderr\": 0.05024183937956911\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.6436781609195402,\n \"acc_stderr\": 
0.0171258537627559,\n \"acc_norm\": 0.6436781609195402,\n \"acc_norm_stderr\": 0.0171258537627559\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.47398843930635837,\n \"acc_stderr\": 0.02688264343402289,\n \"acc_norm\": 0.47398843930635837,\n \"acc_norm_stderr\": 0.02688264343402289\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.2424581005586592,\n \"acc_stderr\": 0.014333522059217889,\n \"acc_norm\": 0.2424581005586592,\n \"acc_norm_stderr\": 0.014333522059217889\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.5522875816993464,\n \"acc_stderr\": 0.02847293847803353,\n \"acc_norm\": 0.5522875816993464,\n \"acc_norm_stderr\": 0.02847293847803353\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.5241157556270096,\n \"acc_stderr\": 0.028365041542564577,\n \"acc_norm\": 0.5241157556270096,\n \"acc_norm_stderr\": 0.028365041542564577\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.5030864197530864,\n \"acc_stderr\": 0.02782021415859437,\n \"acc_norm\": 0.5030864197530864,\n \"acc_norm_stderr\": 0.02782021415859437\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.32269503546099293,\n \"acc_stderr\": 0.027889139300534785,\n \"acc_norm\": 0.32269503546099293,\n \"acc_norm_stderr\": 0.027889139300534785\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.3494132985658409,\n \"acc_stderr\": 0.012177306252786698,\n \"acc_norm\": 0.3494132985658409,\n \"acc_norm_stderr\": 0.012177306252786698\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.3897058823529412,\n \"acc_stderr\": 0.029624663581159703,\n \"acc_norm\": 0.3897058823529412,\n \"acc_norm_stderr\": 0.029624663581159703\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.44281045751633985,\n \"acc_stderr\": 0.020095083154577347,\n \"acc_norm\": 0.44281045751633985,\n \"acc_norm_stderr\": 0.020095083154577347\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.509090909090909,\n \"acc_stderr\": 0.04788339768702861,\n \"acc_norm\": 0.509090909090909,\n \"acc_norm_stderr\": 0.04788339768702861\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.5265306122448979,\n \"acc_stderr\": 0.03196412734523272,\n \"acc_norm\": 0.5265306122448979,\n \"acc_norm_stderr\": 0.03196412734523272\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.6567164179104478,\n \"acc_stderr\": 0.03357379665433431,\n \"acc_norm\": 0.6567164179104478,\n \"acc_norm_stderr\": 0.03357379665433431\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.68,\n \"acc_stderr\": 0.04688261722621504,\n \"acc_norm\": 0.68,\n \"acc_norm_stderr\": 0.04688261722621504\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.42168674698795183,\n \"acc_stderr\": 0.03844453181770917,\n \"acc_norm\": 0.42168674698795183,\n \"acc_norm_stderr\": 0.03844453181770917\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.6783625730994152,\n \"acc_stderr\": 0.03582529442573122,\n \"acc_norm\": 0.6783625730994152,\n \"acc_norm_stderr\": 0.03582529442573122\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.23745410036719705,\n \"mc1_stderr\": 0.014896277441041836,\n \"mc2\": 0.3607930335233562,\n \"mc2_stderr\": 0.01354653975819568\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7371744277821626,\n \"acc_stderr\": 0.012370922527262006\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.1728582259287339,\n \"acc_stderr\": 0.01041543224620057\n }\n}\n```", "repo_url": 
"https://huggingface.co/deepseek-ai/deepseek-moe-16b-base", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_15T06_33_48.729928", "path": ["**/details_harness|arc:challenge|25_2024-01-15T06-33-48.729928.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-15T06-33-48.729928.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_15T06_33_48.729928", "path": ["**/details_harness|gsm8k|5_2024-01-15T06-33-48.729928.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-15T06-33-48.729928.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_15T06_33_48.729928", "path": ["**/details_harness|hellaswag|10_2024-01-15T06-33-48.729928.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-15T06-33-48.729928.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_15T06_33_48.729928", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T06-33-48.729928.parquet", 
"**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-15T06-33-48.729928.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T06-33-48.729928.parquet", 
"**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-15T06-33-48.729928.parquet", 
"**/details_harness|hendrycksTest-philosophy|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-15T06-33-48.729928.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-15T06-33-48.729928.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_15T06_33_48.729928", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T06-33-48.729928.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-15T06-33-48.729928.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_15T06_33_48.729928", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-15T06-33-48.729928.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-15T06-33-48.729928.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_15T06_33_48.729928", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-15T06-33-48.729928.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-15T06-33-48.729928.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_15T06_33_48.729928", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T06-33-48.729928.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-15T06-33-48.729928.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_15T06_33_48.729928", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T06-33-48.729928.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-15T06-33-48.729928.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_15T06_33_48.729928", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-15T06-33-48.729928.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-15T06-33-48.729928.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_15T06_33_48.729928", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T06-33-48.729928.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-15T06-33-48.729928.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_15T06_33_48.729928", "path": 
["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T06-33-48.729928.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-15T06-33-48.729928.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_15T06_33_48.729928", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T06-33-48.729928.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-15T06-33-48.729928.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_15T06_33_48.729928", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T06-33-48.729928.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-15T06-33-48.729928.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_15T06_33_48.729928", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-15T06-33-48.729928.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-15T06-33-48.729928.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_15T06_33_48.729928", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-15T06-33-48.729928.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-15T06-33-48.729928.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_15T06_33_48.729928", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T06-33-48.729928.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-15T06-33-48.729928.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_15T06_33_48.729928", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-15T06-33-48.729928.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-15T06-33-48.729928.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_15T06_33_48.729928", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T06-33-48.729928.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-15T06-33-48.729928.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_15T06_33_48.729928", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T06-33-48.729928.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-15T06-33-48.729928.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_01_15T06_33_48.729928", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T06-33-48.729928.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-15T06-33-48.729928.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_15T06_33_48.729928", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-15T06-33-48.729928.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-global_facts|5_2024-01-15T06-33-48.729928.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_15T06_33_48.729928", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T06-33-48.729928.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-15T06-33-48.729928.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_15T06_33_48.729928", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T06-33-48.729928.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-15T06-33-48.729928.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_15T06_33_48.729928", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T06-33-48.729928.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-15T06-33-48.729928.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_15T06_33_48.729928", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T06-33-48.729928.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-15T06-33-48.729928.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_15T06_33_48.729928", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T06-33-48.729928.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-15T06-33-48.729928.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_15T06_33_48.729928", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T06-33-48.729928.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-15T06-33-48.729928.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_15T06_33_48.729928", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T06-33-48.729928.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-15T06-33-48.729928.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_15T06_33_48.729928", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T06-33-48.729928.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-15T06-33-48.729928.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_15T06_33_48.729928", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T06-33-48.729928.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-15T06-33-48.729928.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_15T06_33_48.729928", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T06-33-48.729928.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-15T06-33-48.729928.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_15T06_33_48.729928", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T06-33-48.729928.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-15T06-33-48.729928.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_15T06_33_48.729928", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T06-33-48.729928.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-15T06-33-48.729928.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_15T06_33_48.729928", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T06-33-48.729928.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-15T06-33-48.729928.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_15T06_33_48.729928", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T06-33-48.729928.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-15T06-33-48.729928.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_15T06_33_48.729928", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-15T06-33-48.729928.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-15T06-33-48.729928.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_15T06_33_48.729928", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T06-33-48.729928.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-15T06-33-48.729928.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_15T06_33_48.729928", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-15T06-33-48.729928.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-15T06-33-48.729928.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_15T06_33_48.729928", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T06-33-48.729928.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-15T06-33-48.729928.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_15T06_33_48.729928", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T06-33-48.729928.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-15T06-33-48.729928.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_15T06_33_48.729928", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T06-33-48.729928.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-15T06-33-48.729928.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_01_15T06_33_48.729928", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-15T06-33-48.729928.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-15T06-33-48.729928.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_15T06_33_48.729928", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-15T06-33-48.729928.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-15T06-33-48.729928.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_15T06_33_48.729928", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T06-33-48.729928.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-15T06-33-48.729928.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_15T06_33_48.729928", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T06-33-48.729928.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-15T06-33-48.729928.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_15T06_33_48.729928", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T06-33-48.729928.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-15T06-33-48.729928.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_15T06_33_48.729928", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T06-33-48.729928.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-15T06-33-48.729928.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_01_15T06_33_48.729928", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-15T06-33-48.729928.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-15T06-33-48.729928.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_15T06_33_48.729928", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-15T06-33-48.729928.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-15T06-33-48.729928.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_15T06_33_48.729928", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-15T06-33-48.729928.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-15T06-33-48.729928.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_15T06_33_48.729928", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T06-33-48.729928.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-15T06-33-48.729928.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_15T06_33_48.729928", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2024-01-15T06-33-48.729928.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-15T06-33-48.729928.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_15T06_33_48.729928", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T06-33-48.729928.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-15T06-33-48.729928.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_15T06_33_48.729928", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T06-33-48.729928.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-15T06-33-48.729928.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_15T06_33_48.729928", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-15T06-33-48.729928.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-15T06-33-48.729928.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_15T06_33_48.729928", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-15T06-33-48.729928.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-15T06-33-48.729928.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_15T06_33_48.729928", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-15T06-33-48.729928.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-15T06-33-48.729928.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_15T06_33_48.729928", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T06-33-48.729928.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-15T06-33-48.729928.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_15T06_33_48.729928", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-15T06-33-48.729928.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-15T06-33-48.729928.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_15T06_33_48.729928", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-15T06-33-48.729928.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-15T06-33-48.729928.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_15T06_33_48.729928", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-15T06-33-48.729928.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-15T06-33-48.729928.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_15T06_33_48.729928", "path": ["**/details_harness|winogrande|5_2024-01-15T06-33-48.729928.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-15T06-33-48.729928.parquet"]}]}, {"config_name": "results", "data_files": [{"split": 
"2024_01_15T06_33_48.729928", "path": ["results_2024-01-15T06-33-48.729928.parquet"]}, {"split": "latest", "path": ["results_2024-01-15T06-33-48.729928.parquet"]}]}]} | 2024-01-15T06:36:22+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of deepseek-ai/deepseek-moe-16b-base
Dataset automatically created during the evaluation run of model deepseek-ai/deepseek-moe-16b-base on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can, for instance, do the following:
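A minimal Python sketch (the details repo id below follows the leaderboard's usual "open-llm-leaderboard/details_<org>__<model>" naming and is an assumption, as is the choice of config; the config names themselves come from the metadata above):

```python
from datasets import load_dataset

# Assumed repo id -- adjust if this run's details are published elsewhere.
data = load_dataset(
    "open-llm-leaderboard/details_deepseek-ai__deepseek-moe-16b-base",
    "harness_winogrande_5",  # any of the 63 configs listed in the metadata above
    split="latest",
)
```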
## Latest results
These are the latest results from run 2024-01-15T06:33:48.729928 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
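The aggregated numbers themselves live in the "results" config (see the configs metadata above). A sketch for reading them, under the same assumed repo id:

```python
from datasets import load_dataset

# "results" stores the aggregated metrics of the run; "latest" points at this timestamp.
results = load_dataset(
    "open-llm-leaderboard/details_deepseek-ai__deepseek-moe-16b-base",  # assumed id
    "results",
    split="latest",
)
print(results[0])  # e.g. winogrande acc ~= 0.7372, gsm8k acc ~= 0.1729
```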
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
ac07080f11c15b38c1a5cdc02b0be856bd3abb80 | [
{
"id": "0",
"translation": {
"es": "8a - 4b + 16c + 12d",
"pt": "Para factorizar la expresión (8a - 4b + 16c + 12d), primero agrupemos los términos de manera adecuada. La expresión se puede reorganizar en dos grupos: (8a - 4b) + (16c + 12d). Ahora, en cada grupo, factorizamos los términos comunes: Grupo 1: Factor común de (4) en (8a - 4b): 4(2a - b) . Grupo 2: Factor común de (4) en (16c + 12d): 4(4c + 3d). Finalmente, podemos escribir la expresión factorizada como la suma de los dos grupos factorizados: 4(2a - b) + 4(4c + 3d) "
}
},
{
"id": "1",
"translation": {
"es": "7x^2 + 11x^3 - 4x^5 + 3x^4 - x^8",
"pt": "Para factorizar la expresión (7x^2 + 11x^3 - 4x^5 + 3x^4 - x^8), primero ordenemos los términos en orden descendente según las potencias de (x): -x^8 - 4x^5 + 3x^4 + 11x^3 + 7x^2. Ahora, identifiquemos el factor común. En este caso, el factor común es (x^2). Factorizamos (x^2) de cada término: x^2(-x^6 - 4x^3 + 3x^2 + 11x + 7). Entonces, la expresión factorizada es (x^2(-x^6 - 4x^3 + 3x^2 + 11x + 7))."
}
},
{
"id": "2",
"translation": {
"es": "9x^3 - 6x^2 + 12x^5 - 18x^7",
"pt": "Para factorizar la expresión (9x^3 - 6x^2 + 12x^5 - 18x^7), primero ordenemos los términos en orden descendente según las potencias de (x): -18x^7 + 12x^5 + 9x^3 - 6x^2. Ahora, identifiquemos el factor común. En este caso, el factor común es (3x^2). Factorizamos (3x^2) de cada término: 3x^2(-6x^5 + 4x^3 + 3x - 2). Entonces, la expresión factorizada es (3x^2(-6x^5 + 4x^3 + 3x - 2))."
}
},
{
"id": "3",
"translation": {
"es": "5x+5y",
"pt": "La expresión (5x + 5y) ya está factorizada en términos de primer grado. Sin embargo, si deseas factorizar el término común, puedes hacerlo de la siguiente manera: 5x + 5y. Factor común: (5). 5(x + y). Entonces, la expresión factorizada es (5(x + y))."
}
},
{
"id": "4",
"translation": {
"es": "-6a-9b-3c",
"pt": "El proceso de factorización de la expresión (-6a - 9b - 3c): Agrupamos los términos: (-6a - 9b) - 3c. En el primer grupo, factorizamos el común factor (-3) de (-6a - 9b): -3(2a + 3b) - 3c. Ahora, podemos factorizar el común factor (-3) del primer grupo: -3(2a + 3b + c). Por lo tanto, la expresión (-6a - 9b - 3c) factoriza como (-3(2a + 3b + c))."
}
},
{
"id": "5",
"translation": {
"es": "x^2 + 2x",
"pt": "Para factorizar la expresión (x^2 + 2x). Vamos a realizar el proceso de factorización paso a paso. Factor común: Primero, observamos si hay algún factor común en ambos términos. En este caso, (x) es un factor común. x(x + 2). Verificación: Podemos verificar si hemos factorizado correctamente multiplicando los factores para asegurarnos de que obtenemos la expresión original. x(x + 2) = x^2 + 2x. La expresión original es igual a la factorización, por lo que hemos factorizado correctamente. Entonces, la factorización de (x^2 + 2x) es (x(x + 2))."
}
},
{
"id": "6",
"translation": {
"es": "4x^2 - 12x",
"pt": "Para factorizar la expresión (4x^2 - 12x), primero identificamos el factor común. En este caso, el factor común es (4x). Ahora, factorizamos (4x) de cada término: 4x(x - 3). Entonces, la expresión factorizada es (4x(x - 3))."
}
},
{
"id": "7",
"translation": {
"es": "2x^2 + 6xy + 4y^2",
"pt": "Para factorizar la expresión (2x^2 + 6xy + 4y^2), primero identificamos el factor común. En este caso, el factor común es (2). Ahora, factorizamos (2) de cada término: 2(x^2 + 3xy + 2y^2). Luego, observamos que (x^2 + 3xy + 2y^2) es una expresión cuadrática perfecta llamada cuadrado de un binomio. Por lo tanto, factorizamos esta expresión como sigue: (x + y)^2. Finalmente, la expresión factorizada es (2(x + y)^2)."
}
},
{
"id": "8",
"translation": {
"es": "x^2 - 9",
"pt": "Para factorizar la expresión (x^2 - 9), primero observamos que es una diferencia de cuadrados. Por lo tanto, podemos factorizarla como sigue: x^2 - 9 = (x + 3)(x - 3). Aquí, (x + 3) y (x - 3) son los dos factores binomiales que, al multiplicarlos, nos dan la expresión original."
}
},
{
"id": "9",
"translation": {
"es": "4x^2 - 25",
"pt": "Para factorizar la expresión (4x^2 - 25), primero observamos que es una diferencia de cuadrados. Por lo tanto, podemos factorizarla como sigue: 4x^2 - 25 = (2x + 5)(2x - 5). Aquí, (2x + 5) y (2x - 5) son los dos factores binomiales que, al multiplicarlos, nos dan la expresión original."
}
},
{
"id": "10",
"translation": {
"es": "x^3 - 8",
"pt": "Para factorizar la expresión (x^3 - 8), primero observamos que es un cubo perfecto. Por lo tanto, podemos factorizarla como sigue: x^3 - 8 = (x - 2)(x^2 + 2x + 4). Aquí, (x - 2) es el factor binomial y (x^2 + 2x + 4) es el factor trinomial que, al multiplicarlos, nos dan la expresión original."
}
},
{
"id": "11",
"translation": {
"es": "27x^3 + 8",
"pt": "Para factorizar la expresión (27x^3 + 8), primero observamos que es una suma de cubos. Por lo tanto, podemos factorizarla como sigue: 27x^3 + 8 = (3x + 2)(9x^2 - 6x + 4). Aquí, (3x + 2) es el factor binomial y (9x^2 - 6x + 4) es el factor trinomial que, al multiplicarlos, nos dan la expresión original."
}
},
{
"id": "12",
"translation": {
"es": "8x^2 - 6x",
"pt": "Para factorizar la expresión (8x^2 - 6x), primero observamos que el coeficiente del primer término es 8 y el coeficiente del segundo término es -6. Buscando dos números que multiplicados den 8 y sumados den -6, encontramos que los números son 8 y -1. Por lo tanto, podemos factorizar la expresión de la siguiente manera: 8x^2 - 6x = 8x^2 - 8x + 2x - 6. Ahora, podemos agrupar los términos: (8x^2 - 8x) + (2x - 6). Factoremos cada grupo: 8x(x - 1) + 2(x - 3). (x - 1) es un factor común en ambos términos, por lo tanto, factorizamos (x - 1) de ambos términos: (x - 1)(8x + 2). Finalmente, la expresión factorizada es (x - 1)(8x + 2)."
}
},
{
"id": "13",
"translation": {
"es": "12x^2 - 15x - 18",
"pt": "Para factorizar la expresión (12x^2 - 15x - 18), primero ordenamos los términos en orden descendente según las potencias de (x): 12x^2 - 15x - 18. Ahora, buscamos dos números que multiplicados den -18 y sumados den -15. Los números que cumplen estas condiciones son 9 y -2. Por lo tanto, podemos factorizar la expresión de la siguiente manera: 12x^2 - 15x - 18 = 12x^2 + 9x - 24x - 18. Ahora, podemos agrupar los términos: (12x^2 + 9x) - (24x + 18). Factoremos cada grupo: 3x(4x + 3) - 6(4x + 3). (4x + 3) es un factor común en ambos términos, por lo tanto, factorizamos (4x + 3) de ambos términos: (4x + 3)(3x - 6). Finalmente, la expresión factorizada es (4x + 3)(3x - 6)."
}
},
{
"id": "14",
"translation": {
"es": "x^3 - 2x^2 - 4x + 8",
"pt": "Para factorizar la expresión (x^3 - 2x^2 - 4x + 8), primero agrupamos los términos de manera adecuada. La expresión se puede reorganizar en dos grupos: (x^3 - 2x^2) - (4x - 8). Ahora, en cada grupo, factorizamos los términos comunes: Grupo 1: Factor común de (x^2) en (x^3 - 2x^2): x^2(x - 2). Grupo 2: Factor común de (4) en (4x - 8): 4(x - 2). Finalmente, podemos escribir la expresión factorizada como la suma de los dos grupos factorizados: x^2(x - 2) - 4(x - 2)."
}
},
{
"id": "15",
"translation": {
"es": "4x + 8y - 12z",
"pt": "Para factorizar la expresión (4x + 8y - 12z), primero debemos encontrar el factor común de los términos. En este caso, el factor común es (4). Luego, factorizamos el término común de cada uno de los términos: 4(x + 2y - 3z). Por lo tanto, la expresión factorizada es (4(x + 2y - 3z))."
}
},
{
"id": "16",
"translation": {
"es": "10a - 15b + 20c",
"pt": "Para factorizar la expresión (10a - 15b + 20c), primero debemos encontrar el factor común de los términos. En este caso, el factor común es (5). Luego, factorizamos el término común de cada uno de los términos: 5(2a - 3b + 4c). Por lo tanto, la expresión factorizada es (5(2a - 3b + 4c))."
}
},
{
"id": "17",
"translation": {
"es": "12x^2 + 18x^3 - 24x^4",
"pt": "Para factorizar la expresión (12x^2 + 18x^3 - 24x^4), primero debemos encontrar el factor común de los términos. En este caso, el factor común es (6x^2). Luego, factorizamos el término común de cada uno de los términos: 6x^2(2 + 3x - 4x^2). Por lo tanto, la expresión factorizada es (6x^2(2 + 3x - 4x^2))."
}
},
{
"id": "18",
"translation": {
"es": "8a^3 - 12a^2 + 16a",
"pt": "Para factorizar la expresión (8a^3 - 12a^2 + 16a), primero debemos encontrar el factor común de los términos. En este caso, el factor común es (4a). Luego, factorizamos el término común de cada uno de los términos: 4a(2a^2 - 3a + 4). Por lo tanto, la expresión factorizada es (4a(2a^2 - 3a + 4))."
}
},
{
"id": "19",
"translation": {
"es": "10x^2 - 15x",
"pt": "Para factorizar la expresión (10x^2 - 15x), primero identifiquemos el factor común: 5x. Factorizamos 5x de cada término: 5x(2x - 3). Entonces, la expresión factorizada es (5x(2x - 3))."
}
},
{
"id": "20",
"translation": {
"es": "8y^3 + 12y^2 - 4y",
"pt": "Para factorizar la expresión (8y^3 + 12y^2 - 4y), primero identifiquemos el factor común: 4y. Factorizamos 4y de cada término: 4y(2y^2 + 3y - 1). Entonces, la expresión factorizada es (4y(2y^2 + 3y - 1))."
}
},
{
"id": "21",
"translation": {
"es": "14a^3 - 21a^2 + 7a",
"pt": "Para factorizar la expresión (14a^3 - 21a^2 + 7a), primero identifiquemos el factor común: 7a. Factorizamos 7a de cada término: 7a(2a^2 - 3a + 1). Entonces, la expresión factorizada es (7a(2a^2 - 3a + 1))."
}
},
{
"id": "22",
"translation": {
"es": "9x^2 + 12xy + 4y^2",
"pt": "Para factorizar la expresión (9x^2 + 12xy + 4y^2), primero ordenemos los términos de manera adecuada. La expresión se puede reorganizar en tres grupos: (9x^2 + 12xy) + (4y^2). Ahora, en cada grupo, factorizamos los términos comunes: Grupo 1: Factor común de (3x) en (9x^2 + 12xy): 3x(3x + 4y). Grupo 2: Factor común de (4) en (4y^2): 4y^2. Finalmente, podemos escribir la expresión factorizada como la suma de los dos grupos factorizados: 3x(3x + 4y) + 4y^2."
}
},
{
"id": "23",
"translation": {
"es": "3(x^2 + 2x + 1) - 4(2x^2 - 3x + 5)",
"pt": "Para factorizar esta expresión, empezaremos extrayendo el factor común más grande de cada término. En este caso, el factor común es (x). 3(x^2 + 2x + 1) - 4(2x^2 - 3x + 5). Factorizando el factor común, obtenemos: 3(x(x + 2 + 1)) - 4(2x(x - 3/2 + 5/2)). Expandiendo los términos, tenemos: 3(x(x + 3)) - 4(2x(x + 11/2)). Ahora, podemos simplificar la expresión combinando los términos semejantes: 3x(x + 3) - 8x(x + 11/2). Finalmente, la expresión factorizada es: 3x(x + 3) - 8x(x + 11/2)."
}
},
{
"id": "24",
"translation": {
"es": "7x^2y^3 + 14x^3y^4 - 21xy^5",
"pt": "Para factorizar esta expresión, primero buscamos el factor común más grande de todos los términos. En este caso, el factor común es (xy^2). 7x^2y^3 + 14x^3y^4 - 21xy^5. Factorizando el factor común, obtenemos: xy^2(7xy + 14x^2y^2 - 21y^3). Ahora, podemos factorizar aún más el término dentro del paréntesis extrayendo el factor común (7y). xy^2(7xy + 14x^2y^2 - 21y^3). Factoreando el factor común, tenemos: xy^2(7y(x + 2xy^2 - 3y^2)). Por último, podemos simplificar la expresión combinando los términos semejantes: xy^2(7y(x - 3y^2 + 2xy^2)). Finalmente, la expresión factorizada es: xy^2(7y(x - 3y^2 + 2xy^2))."
}
},
{
"id": "25",
"translation": {
"es": "-2x^2 + 8x + 6",
"pt": "Para factorizar esta expresión, empezaremos extrayendo el factor común más grande de cada término. En este caso, el factor común es (2). -2x^2 + 8x + 6. Factorizando el factor común, obtenemos: 2(-x^2 + 4x + 3). Ahora, podemos factorizar aún más el término dentro del paréntesis extrayendo el factor común (x - 1). 2(-(x^2 - 4x - 3)). Por último, podemos simplificar la expresión combinando los términos semejantes: 2(-(x - 1)(x - 3)). Finalmente, la expresión factorizada es: 2(x - 1)(x - 3)."
}
},
{
"id": "26",
"translation": {
"es": "3x^3 - 9x^2 - 12x",
"pt": "Para factorizar esta expresión, empezaremos extrayendo el factor común más grande de cada término. En este caso, el factor común es (3x). 3x^3 - 9x^2 - 12x. Factorizando el factor común, obtenemos: 3x(x^2 - 3x - 4). Ahora, podemos factorizar aún más el término dentro del paréntesis extrayendo el factor común (x + 1). 3x((x^2 - 3x - 4)). Por último, podemos simplificar la expresión combinando los términos semejantes: 3x(x + 1)(x - 4). Finalmente, la expresión factorizada es: 3x(x + 1)(x - 4)."
}
},
{
"id": "27",
"translation": {
"es": "2x^2 + 3xy - 5y^2",
"pt": "Para factorizar esta expresión, empezaremos extrayendo el factor común más grande de cada término. En este caso, no hay un factor común obvio. Por lo tanto, podemos intentar factorizar por agrupación. 2x^2 + 3xy - 5y^2. Agrupando los dos primeros términos y los dos últimos términos, obtenemos: (2x^2 + 3xy) - (5y^2). Ahora, podemos factorizar cada grupo. 2x^2 + 3xy = x(2x + 3y). 5y^2 = 5y(y). Finalmente, la expresión factorizada es: x(2x + 3y) - 5y(y)."
}
},
{
"id": "28",
"translation": {
"es": "3x^2 - 9x + 6",
"pt": "Para factorizar la expresión (3x^2 - 9x + 6), primero verificamos si se puede extraer un factor común. En este caso, el factor común es (3): 3(x^2 - 3x + 2). Luego, factorizamos el polinomio cuadrático (x^2 - 3x + 2): (x - 1)(x - 2). Por lo tanto, la expresión factorizada es 3(x - 1)(x - 2)."
}
},
{
"id": "29",
"translation": {
"es": "2a^2 - 8a + 6",
"pt": "Para factorizar la expresión (2a^2 - 8a + 6), primero verificamos si se puede extraer un factor común. En este caso, el factor común es (2): 2(a^2 - 4a + 3). Luego, factorizamos el polinomio cuadrático (a^2 - 4a + 3): (a - 1)(a - 3). Por lo tanto, la expresión factorizada es 2(a - 1)(a - 3)."
}
},
{
"id": "30",
"translation": {
"es": "4x^2 + 12x + 9",
"pt": "Para factorizar la expresión (4x^2 + 12x + 9), primero verificamos si se puede extraer un factor común. En este caso, el factor común es (1): 1(4x^2 + 12x + 9). Luego, factorizamos el polinomio cuadrático (4x^2 + 12x + 9): (2x + 3)(2x + 3). Por lo tanto, la expresión factorizada es (2x + 3)^2."
}
},
{
"id": "31",
"translation": {
"es": "x^2 - 5x - 14",
"pt": "Para factorizar la expresión (x^2 - 5x - 14), primero verificamos si se puede extraer un factor común. En este caso, no hay un factor común. Por lo tanto, factorizamos el polinomio cuadrático (x^2 - 5x - 14): (x - 7)(x + 2). Por lo tanto, la expresión factorizada es (x - 7)(x + 2)."
}
},
{
"id": "32",
"translation": {
"es": "2x^2 + 5x + 3",
"pt": "Para factorizar la expresión (2x^2 + 5x + 3), primero verificamos si se puede extraer un factor común. En este caso, no hay un factor común. Por lo tanto, factorizamos el polinomio cuadrático (2x^2 + 5x + 3): (2x + 3)(x + 1). Por lo tanto, la expresión factorizada es (2x + 3)(x + 1)."
}
},
{
"id": "33",
"translation": {
"es": "3x^2 - 4x - 7",
"pt": "Para factorizar la expresión (3x^2 - 4x - 7), primero verificamos si se puede extraer un factor común. En este caso, no hay un factor común. Por lo tanto, factorizamos el polinomio cuadrático (3x^2 - 4x - 7): (3x + 1)(x - 7). Por lo tanto, la expresión factorizada es (3x + 1)(x - 7)."
}
},
{
"id": "34",
"translation": {
"es": "4x^2 + 9x + 5",
"pt": "Para factorizar la expresión (4x^2 + 9x + 5), primero verificamos si se puede extraer un factor común. En este caso, no hay un factor común. Por lo tanto, factorizamos el polinomio cuadrático (4x^2 + 9x + 5): (4x + 5)(x + 1). Por lo tanto, la expresión factorizada es (4x + 5)(x + 1)."
}
},
{
"id": "35",
"translation": {
"es": "3x^2 - 12x + 12",
"pt": "Para factorizar la expresión (3x^2 - 12x + 12), primero sacamos el factor común (3) de todos los términos: 3(x^2 - 4x + 4). Ahora, reconocemos que la expresión dentro del paréntesis es un cuadrado perfecto: (x - 2)^2. Entonces, la expresión factorizada es 3(x - 2)^2."
}
},
{
"id": "36",
"translation": {
"es": "2x(x - 1) + 3(x - 1)",
"pt": "Para factorizar la expresión (2x(x - 1) + 3(x - 1)), primero identificamos el factor común en ambos términos: (x - 1). Factorizamos (x - 1) de la expresión: (x - 1)(2x + 3). Por lo tanto, la expresión factorizada es ((x - 1)(2x + 3))."
}
},
{
"id": "37",
"translation": {
"es": "p^2 + 4pq + 4q^2",
"pt": "Para factorizar la expresión (p^2 + 4pq + 4q^2), primero identifiquemos el factor común. En este caso, el factor común es (p + 2q). Factorizamos (p + 2q) de la expresión: (p + 2q)(p + 2q). Por lo tanto, la expresión factorizada es ((p + 2q)(p + 2q))."
}
},
{
"id": "38",
"translation": {
"es": "p^2 + 4pq + 4q^2",
"pt": "Para factorizar la expresión (p^2 + 4pq + 4q^2), primero identifiquemos el factor común. En este caso, el factor común es (p + 2q). Factorizamos (p + 2q) de la expresión: (p + 2q)(p + 2q). Por lo tanto, la expresión factorizada es ((p + 2q)(p + 2q))."
}
},
{
"id": "39",
"translation": {
"es": "10x^2 + 20x + 10",
"pt": "Para factorizar la expresión (10x^2 + 20x + 10), primero identifiquemos el factor común. En este caso, el factor común es (10). Factorizamos (10) de cada término: 10(x^2 + 2x + 1). Ahora, factorizamos el trinomio cuadrado(x^2 + 2x + 1) utilizando la fórmula de la suma de dos cuadrados: (x + 1)^2. Entonces, la expresión factorizada es (10(x + 1)^2)."
}
},
{
"id": "40",
"translation": {
"es": "9x^2 - 25",
"pt": "Para factorizar la expresión (9x^2 - 25), primero identifiquemos el factor común. En este caso, el factor común es (1). Entonces, la expresión ya está factorizada en términos de primer grado."
}
},
{
"id": "41",
"translation": {
"es": "4x^2 - 9y^2",
"pt": "Para factorizar la expresión (4x^2 - 9y^2), primero identifiquemos el factor común. En este caso, el factor común es (1). Entonces, la expresión ya está factorizada en términos de primer grado."
}
},
{
"id": "42",
"translation": {
"es": "12x^3 - 18x^2 + 6x",
"pt": "Para factorizar la expresión (12x^3 - 18x^2 + 6x), primero identifiquemos el factor común. En este caso, el factor común es (6x). Factorizamos (6x) de cada término: 6x(2x^2 - 3x + 1). Ahora, factorizamos el trinomio cuadrado(2x^2 - 3x + 1) utilizando la fórmula de la factorización de x^2 + bx + c: (2x - 1)(x - 1). Entonces, la expresión factorizada es (6x(2x - 1)(x - 1))."
}
},
{
"id": "43",
"translation": {
"es": "10x^2-20x+30",
"pt": "Para factorizar la expresión (10x^2 - 20x + 30), primero identificamos el factor común: 10. Factorizamos 10 de cada término: 10(x^2 - 2x + 3). Ahora, necesitamos factorizar el trinomio cuadrático (x^2 - 2x + 3). Podemos usar el método del cuadrado perfecto para factorizarlo: (x - 1)^2. Por lo tanto, la expresión factorizada es 10(x - 1)^2."
}
},
{
"id": "44",
"translation": {
"es": "12x^3-9x^2+6x",
"pt": "Para factorizar la expresión (12x^3 - 9x^2 + 6x), primero identificamos el factor común: 3x. Factorizamos 3x de cada término: 3x(4x^2 - 3x + 2). Ahora, factorizamos el trinomio cuadrático (4x^2 - 3x + 2) usando el método de la factorización: (2x - 1)(2x - 2). Por lo tanto, la expresión factorizada es 3x(2x - 1)(2x - 2)."
}
},
{
"id": "45",
"translation": {
"es": "15x^4-20x^3+10x^2",
"pt": "Para factorizar la expresión (15x^4 - 20x^3 + 10x^2), primero identificamos el factor común: 5x^2. Factorizamos 5x^2 de cada término: 5x^2(3x^2 - 4x + 2). Ahora, factorizamos el trinomio cuadrático (3x^2 - 4x + 2) usando el método de la factorización: (3x - 2)(x - 1). Por lo tanto, la expresión factorizada es 5x^2(3x - 2)(x - 1)."
}
},
{
"id": "46",
"translation": {
"es": "2x^3-8x^2+6x",
"pt": "Para factorizar la expresión (2x^3 - 8x^2 + 6x), primero identificamos el factor común: 2x. Factorizamos 2x de cada término: 2x(x^2 - 4x + 3). Ahora, factorizamos el trinomio cuadrático (x^2 - 4x + 3) usando el método de la factorización: (x - 1)(x - 3). Por lo tanto, la expresión factorizada es 2x(x - 1)(x - 3)."
}
},
{
"id": "47",
"translation": {
"es": "12x^2 - 16x + 20x^3 - 28x^4",
"pt": "Para factorizar la expresión (12x^2 - 16x + 20x^3 - 28x^4), primero ordenemos los términos en orden descendente según las potencias de (x): -28x^4 + 20x^3 + 12x^2 - 16x. Ahora, identifiquemos el factor común. En este caso, el factor común es (4x). Factorizamos (4x) de cada término: 4x(-7x^3 + 5x^2 + 3x - 4). Entonces, la expresión factorizada es (4x(-7x^3 + 5x^2 + 3x - 4))."
}
},
{
"id": "48",
"translation": {
"es": "14x^2y^3 - 21xy^2 + 7xy - 14xy^4",
"pt": "Para factorizar la expresión (14x^2y^3 - 21xy^2 + 7xy - 14xy^4), primero ordenemos los términos en orden descendente según las potencias de (x) y (y): -14xy^4 + 14x^2y^3 - 21xy^2 + 7xy. Ahora, identifiquemos el factor común. En este caso, el factor común es (7xy). Factorizamos (7xy) de cada término: 7xy(-2y^3 + 2x^2 - 3y + 1). Entonces, la expresión factorizada es (7xy(-2y^3 + 2x^2 - 3y + 1))."
}
},
{
"id": "49",
"translation": {
"es": "8x^3 - 4x^2 + 12x - 6",
"pt": "Para factorizar la expresión (8x^3 - 4x^2 + 12x - 6), primero ordenemos los términos en orden descendente según las potencias de (x): 8x^3 - 4x^2 + 12x - 6. Ahora, identifiquemos el factor común. En este caso, el factor común es (2). Factorizamos (2) de cada término: 2(4x^3 - 2x^2 + 6x - 3). Entonces, la expresión factorizada es (2(4x^3 - 2x^2 + 6x - 3))."
}
},
{
"id": "50",
"translation": {
"es": "10x^2y^3 - 20xy + 30xy^2 - 15xy^4",
"pt": "Para factorizar la expresión (10x^2y^3 - 20xy + 30xy^2 - 15xy^4), primero ordenemos los términos en orden descendente según las potencias de (x) y (y): -15xy^4 + 10x^2y^3 + 30xy^2 - 20xy. Ahora, identifiquemos el factor común. En este caso, el factor común es (5xy). Factorizamos (5xy) de cada término: 5xy(-3y^3 + 2x^2 + 6y - 4). Entonces, la expresión factorizada es (5xy(-3y^3 + 2x^2 + 6y - 4))."
}
}
]
| spongebob01/formulas | [
"region:us"
] | 2024-01-15T06:37:14+00:00 | {} | 2024-01-15T08:00:49+00:00 | [] | [] | TAGS
#region-us
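The worked solutions above are plain text; a quick way to sanity-check entries like these is to compare the original expression and the claimed factorization symbolically. A minimal sketch, assuming sympy is available (the two expressions below are taken from ids 12 and 33 above):

```python
from sympy import symbols, expand, factor

x = symbols("x")

# Each record pairs an expression ("es") with a worked factorization ("pt").
# Verify two of them by expanding the claimed factored form and comparing.
checks = {
    "id 12": (8*x**2 - 6*x, 2*x*(4*x - 3)),
    "id 33": (3*x**2 - 4*x - 7, (3*x - 7)*(x + 1)),
}
for name, (original, factored) in checks.items():
    assert expand(factored - original) == 0, name  # difference must vanish
    print(name, "->", factor(original))
```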
}
},
{
"id": "11",
"translation": {
"es": "27x^3 + 8",
"pt": "Para factorizar la expresión (27x^3 + 8), primero observamos que es una suma de cubos. Por lo tanto, podemos factorizarla como sigue: 27x^3 + 8 = (3x + 2)(9x^2 - 6x + 4). Aquí, (3x + 2) es el factor binomial y (9x^2 - 6x + 4) es el factor trinomial que, al multiplicarlos, nos dan la expresión original."
}
},
{
"id": "12",
"translation": {
"es": "8x^2 - 6x",
"pt": "Para factorizar la expresión (8x^2 - 6x), primero observamos que el coeficiente del primer término es 8 y el coeficiente del segundo término es -6. Buscando dos números que multiplicados den 8 y sumados den -6, encontramos que los números son 8 y -1. Por lo tanto, podemos factorizar la expresión de la siguiente manera: 8x^2 - 6x = 8x^2 - 8x + 2x - 6. Ahora, podemos agrupar los términos: (8x^2 - 8x) + (2x - 6). Factoremos cada grupo: 8x(x - 1) + 2(x - 3). (x - 1) es un factor común en ambos términos, por lo tanto, factorizamos (x - 1) de ambos términos: (x - 1)(8x + 2). Finalmente, la expresión factorizada es (x - 1)(8x + 2)."
}
},
{
"id": "13",
"translation": {
"es": "12x^2 - 15x - 18",
"pt": "Para factorizar la expresión (12x^2 - 15x - 18), primero ordenamos los términos en orden descendente según las potencias de (x): 12x^2 - 15x - 18. Ahora, buscamos dos números que multiplicados den -18 y sumados den -15. Los números que cumplen estas condiciones son 9 y -2. Por lo tanto, podemos factorizar la expresión de la siguiente manera: 12x^2 - 15x - 18 = 12x^2 + 9x - 24x - 18. Ahora, podemos agrupar los términos: (12x^2 + 9x) - (24x + 18). Factoremos cada grupo: 3x(4x + 3) - 6(4x + 3). (4x + 3) es un factor común en ambos términos, por lo tanto, factorizamos (4x + 3) de ambos términos: (4x + 3)(3x - 6). Finalmente, la expresión factorizada es (4x + 3)(3x - 6)."
}
},
{
"id": "14",
"translation": {
"es": "x^3 - 2x^2 - 4x + 8",
"pt": "Para factorizar la expresión (x^3 - 2x^2 - 4x + 8), primero agrupamos los términos de manera adecuada. La expresión se puede reorganizar en dos grupos: (x^3 - 2x^2) - (4x - 8). Ahora, en cada grupo, factorizamos los términos comunes: Grupo 1: Factor común de (x^2) en (x^3 - 2x^2): x^2(x - 2). Grupo 2: Factor común de (4) en (4x - 8): 4(x - 2). Finalmente, podemos escribir la expresión factorizada como la suma de los dos grupos factorizados: x^2(x - 2) - 4(x - 2)."
}
},
{
"id": "15",
"translation": {
"es": "4x + 8y - 12z",
"pt": "Para factorizar la expresión (4x + 8y - 12z), primero debemos encontrar el factor común de los términos. En este caso, el factor común es (4). Luego, factorizamos el término común de cada uno de los términos: 4(x + 2y - 3z). Por lo tanto, la expresión factorizada es (4(x + 2y - 3z))."
}
},
{
"id": "16",
"translation": {
"es": "10a - 15b + 20c",
"pt": "Para factorizar la expresión (10a - 15b + 20c), primero debemos encontrar el factor común de los términos. En este caso, el factor común es (5). Luego, factorizamos el término común de cada uno de los términos: 5(2a - 3b + 4c). Por lo tanto, la expresión factorizada es (5(2a - 3b + 4c))."
}
},
{
"id": "17",
"translation": {
"es": "12x^2 + 18x^3 - 24x^4",
"pt": "Para factorizar la expresión (12x^2 + 18x^3 - 24x^4), primero debemos encontrar el factor común de los términos. En este caso, el factor común es (6x^2). Luego, factorizamos el término común de cada uno de los términos: 6x^2(2 + 3x - 4x^2). Por lo tanto, la expresión factorizada es (6x^2(2 + 3x - 4x^2))."
}
},
{
"id": "18",
"translation": {
"es": "8a^3 - 12a^2 + 16a",
"pt": "Para factorizar la expresión (8a^3 - 12a^2 + 16a), primero debemos encontrar el factor común de los términos. En este caso, el factor común es (4a). Luego, factorizamos el término común de cada uno de los términos: 4a(2a^2 - 3a + 4). Por lo tanto, la expresión factorizada es (4a(2a^2 - 3a + 4))."
}
},
{
"id": "19",
"translation": {
"es": "10x^2 - 15x",
"pt": "Para factorizar la expresión (10x^2 - 15x), primero identifiquemos el factor común: 5x. Factorizamos 5x de cada término: 5x(2x - 3). Entonces, la expresión factorizada es (5x(2x - 3))."
}
},
{
"id": "20",
"translation": {
"es": "8y^3 + 12y^2 - 4y",
"pt": "Para factorizar la expresión (8y^3 + 12y^2 - 4y), primero identifiquemos el factor común: 4y. Factorizamos 4y de cada término: 4y(2y^2 + 3y - 1). Entonces, la expresión factorizada es (4y(2y^2 + 3y - 1))."
}
},
{
"id": "21",
"translation": {
"es": "14a^3 - 21a^2 + 7a",
"pt": "Para factorizar la expresión (14a^3 - 21a^2 + 7a), primero identifiquemos el factor común: 7a. Factorizamos 7a de cada término: 7a(2a^2 - 3a + 1). Entonces, la expresión factorizada es (7a(2a^2 - 3a + 1))."
}
},
{
"id": "22",
"translation": {
"es": "9x^2 + 12xy + 4y^2",
"pt": "Para factorizar la expresión (9x^2 + 12xy + 4y^2), primero ordenemos los términos de manera adecuada. La expresión se puede reorganizar en tres grupos: (9x^2 + 12xy) + (4y^2). Ahora, en cada grupo, factorizamos los términos comunes: Grupo 1: Factor común de (3x) en (9x^2 + 12xy): 3x(3x + 4y). Grupo 2: Factor común de (4) en (4y^2): 4y^2. Finalmente, podemos escribir la expresión factorizada como la suma de los dos grupos factorizados: 3x(3x + 4y) + 4y^2."
}
},
{
"id": "23",
"translation": {
"es": "3(x^2 + 2x + 1) - 4(2x^2 - 3x + 5)",
"pt": "Para factorizar esta expresión, empezaremos extrayendo el factor común más grande de cada término. En este caso, el factor común es (x). 3(x^2 + 2x + 1) - 4(2x^2 - 3x + 5). Factorizando el factor común, obtenemos: 3(x(x + 2 + 1)) - 4(2x(x - 3/2 + 5/2)). Expandiendo los términos, tenemos: 3(x(x + 3)) - 4(2x(x + 11/2)). Ahora, podemos simplificar la expresión combinando los términos semejantes: 3x(x + 3) - 8x(x + 11/2). Finalmente, la expresión factorizada es: 3x(x + 3) - 8x(x + 11/2)."
}
},
{
"id": "24",
"translation": {
"es": "7x^2y^3 + 14x^3y^4 - 21xy^5",
"pt": "Para factorizar esta expresión, primero buscamos el factor común más grande de todos los términos. En este caso, el factor común es (xy^2). 7x^2y^3 + 14x^3y^4 - 21xy^5. Factorizando el factor común, obtenemos: xy^2(7xy + 14x^2y^2 - 21y^3). Ahora, podemos factorizar aún más el término dentro del paréntesis extrayendo el factor común (7y). xy^2(7xy + 14x^2y^2 - 21y^3). Factoreando el factor común, tenemos: xy^2(7y(x + 2xy^2 - 3y^2)). Por último, podemos simplificar la expresión combinando los términos semejantes: xy^2(7y(x - 3y^2 + 2xy^2)). Finalmente, la expresión factorizada es: xy^2(7y(x - 3y^2 + 2xy^2))."
}
},
{
"id": "25",
"translation": {
"es": "-2x^2 + 8x + 6",
"pt": "Para factorizar esta expresión, empezaremos extrayendo el factor común más grande de cada término. En este caso, el factor común es (2). -2x^2 + 8x + 6. Factorizando el factor común, obtenemos: 2(-x^2 + 4x + 3). Ahora, podemos factorizar aún más el término dentro del paréntesis extrayendo el factor común (x - 1). 2(-(x^2 - 4x - 3)). Por último, podemos simplificar la expresión combinando los términos semejantes: 2(-(x - 1)(x - 3)). Finalmente, la expresión factorizada es: 2(x - 1)(x - 3)."
}
},
{
"id": "26",
"translation": {
"es": "3x^3 - 9x^2 - 12x",
"pt": "Para factorizar esta expresión, empezaremos extrayendo el factor común más grande de cada término. En este caso, el factor común es (3x). 3x^3 - 9x^2 - 12x. Factorizando el factor común, obtenemos: 3x(x^2 - 3x - 4). Ahora, podemos factorizar aún más el término dentro del paréntesis extrayendo el factor común (x + 1). 3x((x^2 - 3x - 4)). Por último, podemos simplificar la expresión combinando los términos semejantes: 3x(x + 1)(x - 4). Finalmente, la expresión factorizada es: 3x(x + 1)(x - 4)."
}
},
{
"id": "27",
"translation": {
"es": "2x^2 + 3xy - 5y^2",
"pt": "Para factorizar esta expresión, empezaremos extrayendo el factor común más grande de cada término. En este caso, no hay un factor común obvio. Por lo tanto, podemos intentar factorizar por agrupación. 2x^2 + 3xy - 5y^2. Agrupando los dos primeros términos y los dos últimos términos, obtenemos: (2x^2 + 3xy) - (5y^2). Ahora, podemos factorizar cada grupo. 2x^2 + 3xy = x(2x + 3y). 5y^2 = 5y(y). Finalmente, la expresión factorizada es: x(2x + 3y) - 5y(y)."
}
},
{
"id": "28",
"translation": {
"es": "3x^2 - 9x + 6",
"pt": "Para factorizar la expresión (3x^2 - 9x + 6), primero verificamos si se puede extraer un factor común. En este caso, el factor común es (3): 3(x^2 - 3x + 2). Luego, factorizamos el polinomio cuadrático (x^2 - 3x + 2): (x - 1)(x - 2). Por lo tanto, la expresión factorizada es 3(x - 1)(x - 2)."
}
},
{
"id": "29",
"translation": {
"es": "2a^2 - 8a + 6",
"pt": "Para factorizar la expresión (2a^2 - 8a + 6), primero verificamos si se puede extraer un factor común. En este caso, el factor común es (2): 2(a^2 - 4a + 3). Luego, factorizamos el polinomio cuadrático (a^2 - 4a + 3): (a - 1)(a - 3). Por lo tanto, la expresión factorizada es 2(a - 1)(a - 3)."
}
},
{
"id": "30",
"translation": {
"es": "4x^2 + 12x + 9",
"pt": "Para factorizar la expresión (4x^2 + 12x + 9), primero verificamos si se puede extraer un factor común. En este caso, el factor común es (1): 1(4x^2 + 12x + 9). Luego, factorizamos el polinomio cuadrático (4x^2 + 12x + 9): (2x + 3)(2x + 3). Por lo tanto, la expresión factorizada es (2x + 3)^2."
}
},
{
"id": "31",
"translation": {
"es": "x^2 - 5x - 14",
"pt": "Para factorizar la expresión (x^2 - 5x - 14), primero verificamos si se puede extraer un factor común. En este caso, no hay un factor común. Por lo tanto, factorizamos el polinomio cuadrático (x^2 - 5x - 14): (x - 7)(x + 2). Por lo tanto, la expresión factorizada es (x - 7)(x + 2)."
}
},
{
"id": "32",
"translation": {
"es": "2x^2 + 5x + 3",
"pt": "Para factorizar la expresión (2x^2 + 5x + 3), primero verificamos si se puede extraer un factor común. En este caso, no hay un factor común. Por lo tanto, factorizamos el polinomio cuadrático (2x^2 + 5x + 3): (2x + 3)(x + 1). Por lo tanto, la expresión factorizada es (2x + 3)(x + 1)."
}
},
{
"id": "33",
"translation": {
"es": "3x^2 - 4x - 7",
"pt": "Para factorizar la expresión (3x^2 - 4x - 7), primero verificamos si se puede extraer un factor común. En este caso, no hay un factor común. Por lo tanto, factorizamos el polinomio cuadrático (3x^2 - 4x - 7): (3x + 1)(x - 7). Por lo tanto, la expresión factorizada es (3x + 1)(x - 7)."
}
},
{
"id": "34",
"translation": {
"es": "4x^2 + 9x + 5",
"pt": "Para factorizar la expresión (4x^2 + 9x + 5), primero verificamos si se puede extraer un factor común. En este caso, no hay un factor común. Por lo tanto, factorizamos el polinomio cuadrático (4x^2 + 9x + 5): (4x + 5)(x + 1). Por lo tanto, la expresión factorizada es (4x + 5)(x + 1)."
}
},
{
"id": "35",
"translation": {
"es": "3x^2 - 12x + 12",
"pt": "Para factorizar la expresión (3x^2 - 12x + 12), primero sacamos el factor común (3) de todos los términos: 3(x^2 - 4x + 4). Ahora, reconocemos que la expresión dentro del paréntesis es un cuadrado perfecto: (x - 2)^2. Entonces, la expresión factorizada es 3(x - 2)^2."
}
},
{
"id": "36",
"translation": {
"es": "2x(x - 1) + 3(x - 1)",
"pt": "Para factorizar la expresión (2x(x - 1) + 3(x - 1)), primero identificamos el factor común en ambos términos: (x - 1). Factorizamos (x - 1) de la expresión: (x - 1)(2x + 3). Por lo tanto, la expresión factorizada es ((x - 1)(2x + 3))."
}
},
{
"id": "37",
"translation": {
"es": "p^2 + 4pq + 4q^2",
"pt": "Para factorizar la expresión (p^2 + 4pq + 4q^2), primero identifiquemos el factor común. En este caso, el factor común es (p + 2q). Factorizamos (p + 2q) de la expresión: (p + 2q)(p + 2q). Por lo tanto, la expresión factorizada es ((p + 2q)(p + 2q))."
}
},
{
"id": "38",
"translation": {
"es": "p^2 + 4pq + 4q^2",
"pt": "Para factorizar la expresión (p^2 + 4pq + 4q^2), primero identifiquemos el factor común. En este caso, el factor común es (p + 2q). Factorizamos (p + 2q) de la expresión: (p + 2q)(p + 2q). Por lo tanto, la expresión factorizada es ((p + 2q)(p + 2q))."
}
},
{
"id": "39",
"translation": {
"es": "10x^2 + 20x + 10",
"pt": "Para factorizar la expresión (10x^2 + 20x + 10), primero identifiquemos el factor común. En este caso, el factor común es (10). Factorizamos (10) de cada término: 10(x^2 + 2x + 1). Ahora, factorizamos el trinomio cuadrado(x^2 + 2x + 1) utilizando la fórmula de la suma de dos cuadrados: (x + 1)^2. Entonces, la expresión factorizada es (10(x + 1)^2)."
}
},
{
"id": "40",
"translation": {
"es": "9x^2 - 25",
"pt": "Para factorizar la expresión (9x^2 - 25), primero identifiquemos el factor común. En este caso, el factor común es (1). Entonces, la expresión ya está factorizada en términos de primer grado."
}
},
{
"id": "41",
"translation": {
"es": "4x^2 - 9y^2",
"pt": "Para factorizar la expresión (4x^2 - 9y^2), primero identifiquemos el factor común. En este caso, el factor común es (1). Entonces, la expresión ya está factorizada en términos de primer grado."
}
},
{
"id": "42",
"translation": {
"es": "12x^3 - 18x^2 + 6x",
"pt": "Para factorizar la expresión (12x^3 - 18x^2 + 6x), primero identifiquemos el factor común. En este caso, el factor común es (6x). Factorizamos (6x) de cada término: 6x(2x^2 - 3x + 1). Ahora, factorizamos el trinomio cuadrado(2x^2 - 3x + 1) utilizando la fórmula de la factorización de x^2 + bx + c: (2x - 1)(x - 1). Entonces, la expresión factorizada es (6x(2x - 1)(x - 1))."
}
},
{
"id": "43",
"translation": {
"es": "10x^2-20x+30",
"pt": "Para factorizar la expresión (10x^2 - 20x + 30), primero identificamos el factor común: 10. Factorizamos 10 de cada término: 10(x^2 - 2x + 3). Ahora, necesitamos factorizar el trinomio cuadrático (x^2 - 2x + 3). Podemos usar el método del cuadrado perfecto para factorizarlo: (x - 1)^2. Por lo tanto, la expresión factorizada es 10(x - 1)^2."
}
},
{
"id": "44",
"translation": {
"es": "12x^3-9x^2+6x",
"pt": "Para factorizar la expresión (12x^3 - 9x^2 + 6x), primero identificamos el factor común: 3x. Factorizamos 3x de cada término: 3x(4x^2 - 3x + 2). Ahora, factorizamos el trinomio cuadrático (4x^2 - 3x + 2) usando el método de la factorización: (2x - 1)(2x - 2). Por lo tanto, la expresión factorizada es 3x(2x - 1)(2x - 2)."
}
},
{
"id": "45",
"translation": {
"es": "15x^4-20x^3+10x^2",
"pt": "Para factorizar la expresión (15x^4 - 20x^3 + 10x^2), primero identificamos el factor común: 5x^2. Factorizamos 5x^2 de cada término: 5x^2(3x^2 - 4x + 2). Ahora, factorizamos el trinomio cuadrático (3x^2 - 4x + 2) usando el método de la factorización: (3x - 2)(x - 1). Por lo tanto, la expresión factorizada es 5x^2(3x - 2)(x - 1)."
}
},
{
"id": "46",
"translation": {
"es": "2x^3-8x^2+6x",
"pt": "Para factorizar la expresión (2x^3 - 8x^2 + 6x), primero identificamos el factor común: 2x. Factorizamos 2x de cada término: 2x(x^2 - 4x + 3). Ahora, factorizamos el trinomio cuadrático (x^2 - 4x + 3) usando el método de la factorización: (x - 1)(x - 3). Por lo tanto, la expresión factorizada es 2x(x - 1)(x - 3)."
}
},
{
"id": "47",
"translation": {
"es": "12x^2 - 16x + 20x^3 - 28x^4",
"pt": "Para factorizar la expresión (12x^2 - 16x + 20x^3 - 28x^4), primero ordenemos los términos en orden descendente según las potencias de (x): -28x^4 + 20x^3 + 12x^2 - 16x. Ahora, identifiquemos el factor común. En este caso, el factor común es (4x). Factorizamos (4x) de cada término: 4x(-7x^3 + 5x^2 + 3x - 4). Entonces, la expresión factorizada es (4x(-7x^3 + 5x^2 + 3x - 4))."
}
},
{
"id": "48",
"translation": {
"es": "14x^2y^3 - 21xy^2 + 7xy - 14xy^4",
"pt": "Para factorizar la expresión (14x^2y^3 - 21xy^2 + 7xy - 14xy^4), primero ordenemos los términos en orden descendente según las potencias de (x) y (y): -14xy^4 + 14x^2y^3 - 21xy^2 + 7xy. Ahora, identifiquemos el factor común. En este caso, el factor común es (7xy). Factorizamos (7xy) de cada término: 7xy(-2y^3 + 2x^2 - 3y + 1). Entonces, la expresión factorizada es (7xy(-2y^3 + 2x^2 - 3y + 1))."
}
},
{
"id": "49",
"translation": {
"es": "8x^3 - 4x^2 + 12x - 6",
"pt": "Para factorizar la expresión (8x^3 - 4x^2 + 12x - 6), primero ordenemos los términos en orden descendente según las potencias de (x): 8x^3 - 4x^2 + 12x - 6. Ahora, identifiquemos el factor común. En este caso, el factor común es (2). Factorizamos (2) de cada término: 2(4x^3 - 2x^2 + 6x - 3). Entonces, la expresión factorizada es (2(4x^3 - 2x^2 + 6x - 3))."
}
},
{
"id": "50",
"translation": {
"es": "10x^2y^3 - 20xy + 30xy^2 - 15xy^4",
"pt": "Para factorizar la expresión (10x^2y^3 - 20xy + 30xy^2 - 15xy^4), primero ordenemos los términos en orden descendente según las potencias de (x) y (y): -15xy^4 + 10x^2y^3 + 30xy^2 - 20xy. Ahora, identifiquemos el factor común. En este caso, el factor común es (5xy). Factorizamos (5xy) de cada término: 5xy(-3y^3 + 2x^2 + 6y - 4). Entonces, la expresión factorizada es (5xy(-3y^3 + 2x^2 + 6y - 4))."
}
}
]
| [] | [
"TAGS\n#region-us \n"
] |
f125808ce749c0f57851970723046d1581eef5ba |
# Dataset of tatari_kogasa/祟小傘 (Touhou)
This is the dataset of tatari_kogasa/祟小傘 (Touhou), containing 27 images and their tags.
The core tags of this character are `blue_hair, red_eyes, blue_eyes, heterochromia, breasts, short_hair, medium_breasts, large_breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:----------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 27 | 25.34 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tatari_kogasa_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 27 | 16.34 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tatari_kogasa_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 52 | 30.49 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tatari_kogasa_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 27 | 22.84 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tatari_kogasa_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 52 | 40.82 MiB | [Download](https://huggingface.co/datasets/CyberHarem/tatari_kogasa_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
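If you only need one of the packaged variants rather than the raw archive, the zip files listed above can be fetched the same way. A minimal sketch (the filename `dataset-800.zip` comes from the table; the output directory name is just an example):

```python
import os
import zipfile

from huggingface_hub import hf_hub_download

# download the 800px IMG+TXT package listed in the table above
zip_file = hf_hub_download(
    repo_id='CyberHarem/tatari_kogasa_touhou',
    repo_type='dataset',
    filename='dataset-800.zip',
)

# unpack the image/.txt caption pairs into a local folder
out_dir = 'tatari_kogasa_800'
os.makedirs(out_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(out_dir)
```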
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/tatari_kogasa_touhou',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-----------------------------------------------------------------------------------------------------------------------|
| 0 | 18 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, solo, nipples, blush, karakasa_obake, purple_umbrella, tongue, navel, panties, nude, open_clothes, pussy, shirt |
| 1 | 6 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, alternate_hair_length, long_hair, solo, dress, smile, aged_up, cleavage |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | nipples | blush | karakasa_obake | purple_umbrella | tongue | navel | panties | nude | open_clothes | pussy | shirt | alternate_hair_length | long_hair | dress | smile | aged_up | cleavage |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:----------|:--------|:-----------------|:------------------|:---------|:--------|:----------|:-------|:---------------|:--------|:--------|:------------------------|:------------|:--------|:--------|:----------|:-----------|
| 0 | 18 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | |
| 1 | 6 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | | | | | | | | | | | | X | X | X | X | X | X |
| CyberHarem/tatari_kogasa_touhou | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2024-01-15T07:20:09+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2024-01-15T07:24:18+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of tatari\_kogasa/祟小傘 (Touhou)
======================================
This is the dataset of tatari\_kogasa/祟小傘 (Touhou), containing 27 images and their tags.
The core tags of this character are 'blue\_hair, red\_eyes, blue\_eyes, heterochromia, breasts, short\_hair, medium\_breasts, large\_breasts', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
| [
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] | [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
cc0ffb66a1b325cbca08fb0021565ddb161de2d4 |
Dataset of URLs of articles on Zenn ([zenn.dev](https://zenn.dev/))
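If you want a local copy of the files, the repository can be fetched like any other dataset repo. A minimal sketch using `huggingface_hub` (the destination path is chosen by the client unless you pass `local_dir`):

```python
from huggingface_hub import snapshot_download

# download all data files from this dataset repository
local_dir = snapshot_download(
    repo_id='p1atdev/zenn-articles-20240115',
    repo_type='dataset',
)
print(local_dir)
```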
| p1atdev/zenn-articles-20240115 | [
"size_categories:10K<n<100K",
"language:ja",
"license:cc0-1.0",
"code",
"region:us"
] | 2024-01-15T08:03:26+00:00 | {"language": ["ja"], "license": "cc0-1.0", "size_categories": ["10K<n<100K"], "tags": ["code"]} | 2024-01-15T08:12:24+00:00 | [] | [
"ja"
] | TAGS
#size_categories-10K<n<100K #language-Japanese #license-cc0-1.0 #code #region-us
|
Dataset of URLs of articles on Zenn (URL)
| [] | [
"TAGS\n#size_categories-10K<n<100K #language-Japanese #license-cc0-1.0 #code #region-us \n"
] |
83f162f482ae8aaf4ffa7cfbab0f22f7f7a7a584 | # Dataset Card for "bagel_sft_binarized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | jan-hq/bagel_sft_binarized | [
"region:us"
] | 2024-01-15T08:09:40+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 956516282.0643299, "num_examples": 562673}, {"name": "test", "num_bytes": 50344035.86689805, "num_examples": 29615}], "download_size": 628477131, "dataset_size": 1006860317.9312279}} | 2024-01-15T08:10:33+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "bagel_sft_binarized"
More Information needed | [
"# Dataset Card for \"bagel_sft_binarized\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"bagel_sft_binarized\"\n\nMore Information needed"
] |
1e129f07e29f756dbc2039c0ad2e5de8bf8fbce8 |
# Dataset of orange (Touhou)
This is the dataset of orange (Touhou), containing 48 images and their tags.
The core tags of this character are `long_hair, red_hair, hat, red_eyes, bow, hair_bow, ribbon`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:---------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 48 | 27.74 MiB | [Download](https://huggingface.co/datasets/CyberHarem/orange_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 48 | 22.45 MiB | [Download](https://huggingface.co/datasets/CyberHarem/orange_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 76 | 36.10 MiB | [Download](https://huggingface.co/datasets/CyberHarem/orange_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 48 | 26.90 MiB | [Download](https://huggingface.co/datasets/CyberHarem/orange_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 76 | 41.66 MiB | [Download](https://huggingface.co/datasets/CyberHarem/orange_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
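The IMG+TXT packages pair each image with a same-named `.txt` tag file, so after unzipping one of them the captions can be read with plain file handling. A rough sketch (it assumes `dataset-800.zip` was extracted to `orange_800`; the flat layout is inferred from the package type column, and image extensions may vary):

```python
from pathlib import Path

data_dir = Path('orange_800')
for txt_file in sorted(data_dir.glob('*.txt')):
    tags = txt_file.read_text(encoding='utf-8').strip()
    # look for a sibling image with the same stem (extension may vary)
    images = [p for p in data_dir.glob(txt_file.stem + '.*') if p.suffix != '.txt']
    print(images[0].name if images else '(image missing)', '->', tags)
```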
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/orange_touhou',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 23 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, solo, puffy_short_sleeves, shirt, shoes, vest, white_bow, yellow_headwear, yellow_shorts, full_body, smile, holding, open_mouth, socks, looking_at_viewer, simple_background, white_background, white_footwear |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | puffy_short_sleeves | shirt | shoes | vest | white_bow | yellow_headwear | yellow_shorts | full_body | smile | holding | open_mouth | socks | looking_at_viewer | simple_background | white_background | white_footwear |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:----------------------|:--------|:--------|:-------|:------------|:------------------|:----------------|:------------|:--------|:----------|:-------------|:--------|:--------------------|:--------------------|:-------------------|:-----------------|
| 0 | 23 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
| CyberHarem/orange_touhou | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2024-01-15T08:24:01+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2024-01-15T08:42:52+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of orange (Touhou)
==========================
This is the dataset of orange (Touhou), containing 48 images and their tags.
The core tags of this character are 'long\_hair, red\_hair, hat, red\_eyes, bow, hair\_bow, ribbon', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
| [
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] | [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
7aa94d4524a53130cd19fd5e583f6ea521a28b17 |
# Dataset of nishida_satono/里乃爾子田 (Touhou)
This is the dataset of nishida_satono/里乃爾子田 (Touhou), containing 315 images and their tags.
The core tags of this character are `brown_hair, hat, black_headwear, bangs, bow, purple_eyes`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-----------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 315 | 249.33 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nishida_satono_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 315 | 181.19 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nishida_satono_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 633 | 339.29 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nishida_satono_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 315 | 233.53 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nishida_satono_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 633 | 421.00 MiB | [Download](https://huggingface.co/datasets/CyberHarem/nishida_satono_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/nishida_satono_touhou',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 9 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 2girls, pink_dress, puffy_short_sleeves, short_hair_with_long_locks, smile, green_dress, looking_at_viewer, tate_eboshi, waist_apron, green_hair, pink_eyes, solo_focus, bamboo, frills, white_apron, holding, purple_dress |
| 1 | 24 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, puffy_short_sleeves, short_hair_with_long_locks, solo, waist_apron, looking_at_viewer, pink_dress, smile, open_mouth, tate_eboshi, white_apron, pink_eyes, blush, breasts, holding |
| 2 | 42 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | black_socks, short_hair_with_long_locks, pink_dress, looking_at_viewer, smile, tate_eboshi, waist_apron, 1girl, solo, kneehighs, pink_footwear, puffy_short_sleeves, holding, simple_background, full_body, mary_janes, white_background, open_mouth, purple_dress |
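To mine one of these clusters, you can filter the tagged items after extracting the raw archive as shown above. A minimal sketch (it assumes the raw dataset was extracted to `dataset_dir`, and uses `pink_dress` from the table as an example tag; tags are exposed via `item.meta['tags']` as in the loading snippet):

```python
from waifuc.source import LocalSource

# keep only images whose tags include the cluster's 'pink_dress' tag
source = LocalSource('dataset_dir')
for item in source:
    if 'pink_dress' in item.meta['tags']:
        print(item.meta['filename'], item.meta['tags'])
```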
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 2girls | pink_dress | puffy_short_sleeves | short_hair_with_long_locks | smile | green_dress | looking_at_viewer | tate_eboshi | waist_apron | green_hair | pink_eyes | solo_focus | bamboo | frills | white_apron | holding | purple_dress | 1girl | solo | open_mouth | blush | breasts | black_socks | kneehighs | pink_footwear | simple_background | full_body | mary_janes | white_background |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------|:-------------|:----------------------|:-----------------------------|:--------|:--------------|:--------------------|:--------------|:--------------|:-------------|:------------|:-------------|:---------|:---------|:--------------|:----------|:---------------|:--------|:-------|:-------------|:--------|:----------|:--------------|:------------|:----------------|:--------------------|:------------|:-------------|:-------------------|
| 0 | 9 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | |
| 1 | 24 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | | X | X | X | X | | X | X | X | | X | | | | X | X | | X | X | X | X | X | | | | | | | |
| 2 | 42 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | | X | X | X | X | | X | X | X | | | | | | | X | X | X | X | X | | | X | X | X | X | X | X | X |
| CyberHarem/nishida_satono_touhou | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2024-01-15T08:24:20+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2024-01-15T09:37:38+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of nishida\_satono/里乃爾子田 (Touhou)
=========================================
This is the dataset of nishida\_satono/里乃爾子田 (Touhou), containing 315 images and their tags.
The core tags of this character are 'brown\_hair, hat, black\_headwear, bangs, bow, purple\_eyes', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering result, maybe some outfits can be mined here.
### Raw Text Version
### Table Version
| [
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] | [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
d4acf71b760e73a269d93447ef2e56940595befd |
https://huggingface.co/datasets/jkhedri/psychology-dataset

this, but split for DPO and regular SFT. for fun, nothing serious | Sao10K/psychology-dataset-pairs | [
"region:us"
] | 2024-01-15T08:47:58+00:00 | {} | 2024-01-15T08:49:08+00:00 | [] | [] | TAGS
#region-us
|
URL
this, but split for DPO and regular SFT. for fun, nothing serious | [] | [
"TAGS\n#region-us \n"
] |
d79af07e969a6678fcbbe819956840425816468f | # Norwegian Courts
Parallel corpus of Nynorsk and Bokmål from Norwegian Court transcriptions.
The data originates from the [OPUS project](https://opus.nlpl.eu/ELRC-Courts_Norway-v1.php). | kardosdrur/norwegian-courts | [
"task_categories:sentence-similarity",
"language:nb",
"language:nn",
"license:mit",
"region:us"
] | 2024-01-15T08:50:13+00:00 | {"language": ["nb", "nn"], "license": "mit", "task_categories": ["sentence-similarity"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "nb", "dtype": "string"}, {"name": "nn", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 156464.72295514512, "num_examples": 909}, {"name": "test", "num_bytes": 39245.27704485488, "num_examples": 228}], "download_size": 120454, "dataset_size": 195710}} | 2024-01-15T08:53:19+00:00 | [] | [
"nb",
"nn"
] | TAGS
#task_categories-sentence-similarity #language-Norwegian Bokmål #language-Norwegian Nynorsk #license-mit #region-us
| # Norwegian Courts
Parallel corpus of Nynorsk and Bokmål from Norwegian Court transcriptions.
The data originates from the OPUS project. | [
"# Norwegian Courts\n\nParallel corpus of Nynorsk and Bokmål from Norwegian Court transcriptions.\nThe data originates from the OPUS project."
] | [
"TAGS\n#task_categories-sentence-similarity #language-Norwegian Bokmål #language-Norwegian Nynorsk #license-mit #region-us \n",
"# Norwegian Courts\n\nParallel corpus of Nynorsk and Bokmål from Norwegian Court transcriptions.\nThe data originates from the OPUS project."
] |
afba34367a8609a1d0044eded531548ab71a58cf |
<h1 align="center"> Executable Code Actions Elicit Better LLM Agents </h1>
<p align="center">
<a href="https://github.com/xingyaoww/code-act">💻 Code</a>
•
<a href="https://arxiv.org/abs/2402.01030">📃 Paper</a>
•
<a href="https://huggingface.co/datasets/xingyaoww/code-act" >🤗 Data (CodeActInstruct)</a>
•
<a href="https://huggingface.co/xingyaoww/CodeActAgent-Mistral-7b-v0.1" >🤗 Model (CodeActAgent-Mistral-7b-v0.1)</a>
•
<a href="https://chat.xwang.dev/">🤖 Chat with CodeActAgent!</a>
</p>
We propose to use executable Python **code** to consolidate LLM agents’ **act**ions into a unified action space (**CodeAct**).
Integrated with a Python interpreter, CodeAct can execute code actions and dynamically revise prior actions or emit new actions upon new observations (e.g., code execution results) through multi-turn interactions.
![Overview](https://github.com/xingyaoww/code-act/blob/main/figures/overview.png?raw=true)
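As a toy illustration of the loop (a sketch, not the paper's implementation), a CodeAct-style agent alternates between a model emitting Python code and the environment executing it, feeding the printed output back as the next observation:

```python
import contextlib
import io

def execute_code_action(code: str) -> str:
    """Run one code action and return its printed output as the observation."""
    buffer = io.StringIO()
    try:
        with contextlib.redirect_stdout(buffer):
            exec(code, {})  # toy sandbox; a real agent needs proper isolation
    except Exception as exc:
        return f'Error: {exc!r}'
    return buffer.getvalue()

# one turn of the interaction: the model proposes code, the env executes it
action = "print(sum(i * i for i in range(10)))"
observation = execute_code_action(action)
print(observation)  # -> 285
```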
## Why CodeAct?
Our extensive analysis of 17 LLMs on API-Bank and a newly curated benchmark [M<sup>3</sup>ToolEval](docs/EVALUATION.md) shows that CodeAct outperforms widely used alternatives like Text and JSON (up to 20% higher success rate). Please check our paper for more detailed analysis!
![Comparison between CodeAct and Text/JSON](https://github.com/xingyaoww/code-act/blob/main/figures/codeact-comparison-table.png?raw=true)
*Comparison between CodeAct and Text / JSON as action.*
![Comparison between CodeAct and Text/JSON](https://github.com/xingyaoww/code-act/blob/main/figures/codeact-comparison-perf.png?raw=true)
*Quantitative results comparing CodeAct and {Text, JSON} on M<sup>3</sup>ToolEval.*
## 📁 CodeActInstruct
We collect an instruction-tuning dataset CodeActInstruct that consists of 7k multi-turn interactions using CodeAct. Dataset is released at [huggingface dataset 🤗](https://huggingface.co/datasets/xingyaoww/code-act). Please refer to the paper and [this section](#-data-generation-optional) for details of data collection.
![Data Statistics](https://github.com/xingyaoww/code-act/blob/main/figures/data-stats.png?raw=true)
*Dataset Statistics. Token statistics are computed using Llama-2 tokenizer.*
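Based on the dataset card's config, the data can be loaded with the `datasets` library; it exposes `codeact` and `general` splits whose rows carry an `id` and a `conversations` list of role/content turns:

```python
from datasets import load_dataset

# multi-turn CodeAct interactions collected for instruction tuning
codeact = load_dataset('xingyaoww/code-act', split='codeact')

example = codeact[0]
print(example['id'])
for turn in example['conversations']:
    print(turn['role'], ':', turn['content'][:80])
```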
## 🪄 CodeActAgent
Trained on **CodeActInstruct** and general conversations, **CodeActAgent** excels at out-of-domain agent tasks compared to open-source models of the same size, while not sacrificing generic performance (e.g., knowledge, dialog). We release two variants of CodeActAgent:
- **CodeActAgent-Mistral-7b-v0.1** (recommended, [model link](https://huggingface.co/xingyaoww/CodeActAgent-Mistral-7b-v0.1)): using Mistral-7b-v0.1 as the base model with 32k context window.
- **CodeActAgent-Llama-7b** ([model link](https://huggingface.co/xingyaoww/CodeActAgent-Llama-2-7b)): using Llama-2-7b as the base model with 4k context window.
![Model Performance](https://github.com/xingyaoww/code-act/blob/main/figures/model-performance.png?raw=true)
*Evaluation results for CodeActAgent. ID and OD stand for in-domain and out-of-domain evaluation correspondingly. Overall averaged performance normalizes the MT-Bench score to be consistent with other tasks and excludes in-domain tasks for fair comparison.*
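A rough sketch of loading the released agent with `transformers` (the prompt and generation settings below are assumptions, not the authors' exact setup; it assumes the tokenizer ships a chat template):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = 'xingyaoww/CodeActAgent-Mistral-7b-v0.1'
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map='auto')

messages = [{'role': 'user', 'content': 'Compute the sum of squares from 1 to 10.'}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors='pt'
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```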
Please check out [our paper](https://arxiv.org/abs/2402.01030) and [code](https://github.com/xingyaoww/code-act) for more details about data collection, model training, and evaluation.
## 📚 Citation
```bibtex
@misc{wang2024executable,
title={Executable Code Actions Elicit Better LLM Agents},
author={Xingyao Wang and Yangyi Chen and Lifan Yuan and Yizhe Zhang and Yunzhu Li and Hao Peng and Heng Ji},
year={2024},
eprint={2402.01030},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| xingyaoww/code-act | [
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:en",
"license:apache-2.0",
"llm-agent",
"llm",
"instruction-tuning",
"arxiv:2402.01030",
"region:us"
] | 2024-01-15T08:59:02+00:00 | {"language": ["en"], "license": "apache-2.0", "size_categories": ["1K<n<10K"], "task_categories": ["text-generation"], "configs": [{"config_name": "default", "data_files": [{"split": "codeact", "path": "data/codeact-*"}, {"split": "general", "path": "data/general-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "conversations", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "codeact", "num_bytes": 34936511, "num_examples": 7139}, {"name": "general", "num_bytes": 250817144, "num_examples": 71246}], "download_size": 123084833, "dataset_size": 285753655}, "tags": ["llm-agent", "llm", "instruction-tuning"]} | 2024-02-05T05:23:24+00:00 | [
"2402.01030"
] | [
"en"
] | TAGS
#task_categories-text-generation #size_categories-1K<n<10K #language-English #license-apache-2.0 #llm-agent #llm #instruction-tuning #arxiv-2402.01030 #region-us
|
<h1 align="center"> Executable Code Actions Elicit Better LLM Agents </h1>
<p align="center">
<a href="URL Code</a>
•
<a href="URL Paper</a>
•
<a href="URL > Data (CodeActInstruct)</a>
•
<a href="URL > Model (CodeActAgent-Mistral-7b-v0.1)</a>
•
<a href="URL Chat with CodeActAgent!</a>
</p>
We propose to use executable Python code to consolidate LLM agents’ actions into a unified action space (CodeAct).
Integrated with a Python interpreter, CodeAct can execute code actions and dynamically revise prior actions or emit new actions upon new observations (e.g., code execution results) through multi-turn interactions.
!Overview
## Why CodeAct?
Our extensive analysis of 17 LLMs on API-Bank and a newly curated benchmark M<sup>3</sup>ToolEval shows that CodeAct outperforms widely used alternatives like Text and JSON (up to 20% higher success rate). Please check our paper for more detailed analysis!
!Comparison between CodeAct and Text/JSON
*Comparison between CodeAct and Text / JSON as action.*
!Comparison between CodeAct and Text/JSON
*Quantitative results comparing CodeAct and {Text, JSON} on M<sup>3</sup>ToolEval.*
## CodeActInstruct
We collect an instruction-tuning dataset CodeActInstruct that consists of 7k multi-turn interactions using CodeAct. Dataset is released at huggingface dataset . Please refer to the paper and this section for details of data collection.
!Data Statistics
*Dataset Statistics. Token statistics are computed using Llama-2 tokenizer.*
## CodeActAgent
Trained on CodeActInstruct and general conversations, CodeActAgent excels at out-of-domain agent tasks compared to open-source models of the same size, while not sacrificing generic performance (e.g., knowledge, dialog). We release two variants of CodeActAgent:
- CodeActAgent-Mistral-7b-v0.1 (recommended, model link): using Mistral-7b-v0.1 as the base model with 32k context window.
- CodeActAgent-Llama-7b (model link): using Llama-2-7b as the base model with 4k context window.
!Model Performance
*Evaluation results for CodeActAgent. ID and OD stand for in-domain and out-of-domain evaluation correspondingly. Overall averaged performance normalizes the MT-Bench score to be consistent with other tasks and excludes in-domain tasks for fair comparison.*
Please check out our paper and code for more details about data collection, model training, and evaluation.
## Citation
| [
"## Why CodeAct?\n\nOur extensive analysis of 17 LLMs on API-Bank and a newly curated benchmark M<sup>3</sup>ToolEval shows that CodeAct outperforms widely used alternatives like Text and JSON (up to 20% higher success rate). Please check our paper for more detailed analysis!\n\n!Comparison between CodeAct and Text/JSON\n*Comparison between CodeAct and Text / JSON as action.*\n\n!Comparison between CodeAct and Text/JSON\n*Quantitative results comparing CodeAct and {Text, JSON} on M<sup>3</sup>ToolEval.*",
"## CodeActInstruct\n\nWe collect an instruction-tuning dataset CodeActInstruct that consists of 7k multi-turn interactions using CodeAct. Dataset is release at huggingface dataset . Please refer to the paper and this section for details of data collection.\n\n\n!Data Statistics\n*Dataset Statistics. Token statistics are computed using Llama-2 tokenizer.*",
"## CodeActAgent\n\nTrained on CodeActInstruct and general conversaions, CodeActAgent excels at out-of-domain agent tasks compared to open-source models of the same size, while not sacrificing generic performance (e.g., knowledge, dialog). We release two variants of CodeActAgent:\n- CodeActAgent-Mistral-7b-v0.1 (recommended, model link): using Mistral-7b-v0.1 as the base model with 32k context window.\n- CodeActAgent-Llama-7b (model link): using Llama-2-7b as the base model with 4k context window.\n\n!Model Performance\n*Evaluation results for CodeActAgent. ID and OD stand for in-domain and out-of-domain evaluation correspondingly. Overall averaged performance normalizes the MT-Bench score to be consistent with other tasks and excludes in-domain tasks for fair comparison.*\n\n\nPlease check out our paper and code for more details about data collection, model training, and evaluation.",
"## Citation"
] | [
"TAGS\n#task_categories-text-generation #size_categories-1K<n<10K #language-English #license-apache-2.0 #llm-agent #llm #instruction-tuning #arxiv-2402.01030 #region-us \n",
"## Why CodeAct?\n\nOur extensive analysis of 17 LLMs on API-Bank and a newly curated benchmark M<sup>3</sup>ToolEval shows that CodeAct outperforms widely used alternatives like Text and JSON (up to 20% higher success rate). Please check our paper for more detailed analysis!\n\n!Comparison between CodeAct and Text/JSON\n*Comparison between CodeAct and Text / JSON as action.*\n\n!Comparison between CodeAct and Text/JSON\n*Quantitative results comparing CodeAct and {Text, JSON} on M<sup>3</sup>ToolEval.*",
"## CodeActInstruct\n\nWe collect an instruction-tuning dataset CodeActInstruct that consists of 7k multi-turn interactions using CodeAct. Dataset is release at huggingface dataset . Please refer to the paper and this section for details of data collection.\n\n\n!Data Statistics\n*Dataset Statistics. Token statistics are computed using Llama-2 tokenizer.*",
"## CodeActAgent\n\nTrained on CodeActInstruct and general conversaions, CodeActAgent excels at out-of-domain agent tasks compared to open-source models of the same size, while not sacrificing generic performance (e.g., knowledge, dialog). We release two variants of CodeActAgent:\n- CodeActAgent-Mistral-7b-v0.1 (recommended, model link): using Mistral-7b-v0.1 as the base model with 32k context window.\n- CodeActAgent-Llama-7b (model link): using Llama-2-7b as the base model with 4k context window.\n\n!Model Performance\n*Evaluation results for CodeActAgent. ID and OD stand for in-domain and out-of-domain evaluation correspondingly. Overall averaged performance normalizes the MT-Bench score to be consistent with other tasks and excludes in-domain tasks for fair comparison.*\n\n\nPlease check out our paper and code for more details about data collection, model training, and evaluation.",
"## Citation"
] |
b5977f5cb476c4c18f6dc4025aa8900138c750fc | # Dataset Card for "VIVOS_CommonVoice_FOSD_NoiseControl_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tuanmanh28/VIVOS_CommonVoice_FOSD_NoiseControl_dataset | [
"region:us"
] | 2024-01-15T09:16:58+00:00 | {"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2741051024.0, "num_examples": 39585}, {"name": "test", "num_bytes": 249790491.52, "num_examples": 5108}], "download_size": 2921057376, "dataset_size": 2990841515.52}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}]} | 2024-01-15T09:18:34+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "VIVOS_CommonVoice_FOSD_NoiseControl_dataset"
More Information needed | [
"# Dataset Card for \"VIVOS_CommonVoice_FOSD_NoiseControl_dataset\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"VIVOS_CommonVoice_FOSD_NoiseControl_dataset\"\n\nMore Information needed"
] |
f3fb708fee2352a6643c3a81140ad2493c072a98 |
# Dataset of ebisu_eika (Touhou)
This is the dataset of ebisu_eika (Touhou), containing 132 images and their tags.
The core tags of this character are `bangs, long_hair, red_eyes, blonde_hair`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:-------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 132 | 122.76 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ebisu_eika_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 132 | 81.15 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ebisu_eika_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 266 | 155.30 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ebisu_eika_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 132 | 112.74 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ebisu_eika_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 266 | 196.82 MiB | [Download](https://huggingface.co/datasets/CyberHarem/ebisu_eika_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/ebisu_eika_touhou',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 8 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, barefoot, frilled_shirt, frilled_skirt, full_body, long_earlobes, looking_at_viewer, puffy_short_sleeves, skirt_set, solo, white_shirt, white_skirt, blouse, brown_eyes, rock, simple_background, sitting, stone, white_background, dark-skinned_female, open_mouth, toes, :d, blush_stickers, feet, medium_hair |
| 1 | 5 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, long_earlobes, open_mouth, puffy_short_sleeves, solo, white_shirt, frilled_shirt, looking_at_viewer, rock, stone, white_skirt, :d, blush, holding, jellyfish, skirt_set, upper_body |
| 2 | 5 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, long_earlobes, puffy_short_sleeves, solo, upper_body, dress, open_mouth, simple_background, white_shirt, looking_at_viewer, white_background, blush_stickers, brown_eyes, grey_hair |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | barefoot | frilled_shirt | frilled_skirt | full_body | long_earlobes | looking_at_viewer | puffy_short_sleeves | skirt_set | solo | white_shirt | white_skirt | blouse | brown_eyes | rock | simple_background | sitting | stone | white_background | dark-skinned_female | open_mouth | toes | :d | blush_stickers | feet | medium_hair | blush | holding | jellyfish | upper_body | dress | grey_hair |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-----------|:----------------|:----------------|:------------|:----------------|:--------------------|:----------------------|:------------|:-------|:--------------|:--------------|:---------|:-------------|:-------|:--------------------|:----------|:--------|:-------------------|:----------------------|:-------------|:-------|:-----|:-----------------|:-------|:--------------|:--------|:----------|:------------|:-------------|:--------|:------------|
| 0 | 8 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | |
| 1 | 5 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | | X | | | X | X | X | X | X | X | X | | | X | | | X | | | X | | X | | | | X | X | X | X | | |
| 2 | 5 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | | | | | X | X | X | | X | X | | | X | | X | | | X | | X | | | X | | | | | | X | X | X |
| CyberHarem/ebisu_eika_touhou | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2024-01-15T09:19:19+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2024-01-15T09:46:59+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of ebisu\_eika (Touhou)
===============================
This is the dataset of ebisu\_eika (Touhou), containing 132 images and their tags.
The core tags of this character are 'bangs, long\_hair, red\_eyes, blonde\_hair', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by DeepGHS Team (huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code
List of Clusters
----------------
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
### Table Version
| [
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] | [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
e9520a30715abd039dcdac1a0c1a51337da61fef |
# Dataset of meira (Touhou)
This is the dataset of meira (Touhou), containing 77 images and their tags.
The core tags of this character are `purple_hair, ponytail, long_hair, purple_eyes, ribbon, hair_ribbon, bangs`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:--------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 77 | 63.34 MiB | [Download](https://huggingface.co/datasets/CyberHarem/meira_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 77 | 43.56 MiB | [Download](https://huggingface.co/datasets/CyberHarem/meira_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 136 | 75.45 MiB | [Download](https://huggingface.co/datasets/CyberHarem/meira_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 77 | 58.72 MiB | [Download](https://huggingface.co/datasets/CyberHarem/meira_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 136 | 98.67 MiB | [Download](https://huggingface.co/datasets/CyberHarem/meira_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/meira_touhou',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 30 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, katana, japanese_clothes, solo, sheath |
| 1 | 11 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, holding_sword, katana, long_sleeves, looking_at_viewer, solo, wide_sleeves, white_ribbon, closed_mouth, pants, simple_background, very_long_hair, white_background, white_kimono, full_body, hakama, sheath |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | katana | japanese_clothes | solo | sheath | holding_sword | long_sleeves | looking_at_viewer | wide_sleeves | white_ribbon | closed_mouth | pants | simple_background | very_long_hair | white_background | white_kimono | full_body | hakama |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:---------|:-------------------|:-------|:---------|:----------------|:---------------|:--------------------|:---------------|:---------------|:---------------|:--------|:--------------------|:-----------------|:-------------------|:---------------|:------------|:---------|
| 0 | 30 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | | | | | | | | | | | | | |
| 1 | 11 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | | X | X | X | X | X | X | X | X | X | X | X | X | X | X | X |
| CyberHarem/meira_touhou | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2024-01-15T09:19:21+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2024-01-15T09:41:58+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of meira (Touhou)
=========================
This is the dataset of meira (Touhou), containing 77 images and their tags.
The core tags of this character are 'purple\_hair, ponytail, long\_hair, purple\_eyes, ribbon, hair\_ribbon, bangs', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by DeepGHS Team (huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code
List of Clusters
----------------
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
### Table Version
| [
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] | [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
269154dc31b124335622cb0a37c38a0a878940b3 | # Dataset Card for "myriade_ontologie"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | gguichard/myriade_ontologie | [
"region:us"
] | 2024-01-15T09:26:57+00:00 | {"dataset_info": {"features": [{"name": "tokens", "sequence": "string"}, {"name": "wn_sens", "sequence": "int64"}, {"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 13863915, "num_examples": 43590}], "download_size": 0, "dataset_size": 13863915}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2024-01-23T08:03:15+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "myriade_ontologie"
More Information needed | [
"# Dataset Card for \"myriade_ontologie\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"myriade_ontologie\"\n\nMore Information needed"
] |
f962690d562f21428279f11094b81ec32fd5f4e0 |
# ShareGPT4 Dataset
ShareGPT4 is a cleaned version of the OpenChat-ShareGPT4 Dataset, designed for training conversational AI models. This dataset contains a collection of conversations, with each conversation consisting of two main features: role and value.
## Dataset Info
- **Features**:
- **conversations**:
- **role** (string): The role of the speaker in the conversation.
- **value** (string): The actual conversation text.
- **Splits**:
- **train**:
- Number of examples: 6144
- Size: 30,322,763 bytes
- **Download Size**: 15,605,374 bytes
- **Dataset Size**: 30,322,763 bytes
## Configs
- **Config Name**: default
- **Data Files**:
- **split**: train
- **path**: data/train-*
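As a quick sketch, the train split can be loaded and inspected as follows (a minimal example based on the schema above; the 80-character preview is purely illustrative):

```python
from datasets import load_dataset

# Load the train split of ShareGPT4 (the only split listed above).
ds = load_dataset("erfanzar/ShareGPT4", split="train")

# Each example holds a list of turns, each with a 'role' and a 'value' field.
for turn in ds[0]["conversations"]:
    print(turn["role"], ":", turn["value"][:80])  # preview the first 80 characters
```
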
For more information on how to use this dataset with the Hugging Face library, please refer to their documentation. | erfanzar/ShareGPT4 | [
"region:us"
] | 2024-01-15T09:30:01+00:00 | {"dataset_info": {"features": [{"name": "conversations", "list": [{"name": "role", "dtype": "string"}, {"name": "value", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 30322763, "num_examples": 6144}], "download_size": 15605374, "dataset_size": 30322763}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2024-01-18T17:52:38+00:00 | [] | [] | TAGS
#region-us
|
# ShareGPT4 Dataset
ShareGPT4 is a cleaned version of the OpenChat-ShareGPT4 Dataset, designed for training conversational AI models. This dataset contains a collection of conversations, with each conversation consisting of two main features: role and value.
## Dataset Info
- Features:
- conversations:
- role (string): The role of the speaker in the conversation.
- value (string): The actual conversation text.
- Splits:
- train:
- Number of examples: 6144
- Size: 30,322,763 bytes
- Download Size: 15,605,374 bytes
- Dataset Size: 30,322,763 bytes
## Configs
- Config Name: default
- Data Files:
- split: train
- path: data/train-*
For more information on how to use this dataset with the Hugging Face library, please refer to their documentation. | [
"# ShareGPT4 Dataset\n\nShareGPT4 is a cleaned version of the OpenChat-ShareGPT4 Dataset, designed for training conversational AI models. This dataset contains a collection of conversations, with each conversation consisting of two main features: role and value.",
"## Dataset Info\n\n- Features:\n - conversations:\n - role (string): The role of the speaker in the conversation.\n - value (string): The actual conversation text.\n- Splits:\n - train:\n - Number of examples: 6144\n - Size: 30,322,763 bytes\n- Download Size: 15,605,374 bytes\n- Dataset Size: 30,322,763 bytes",
"## Configs\n\n- Config Name: default\n- Data Files:\n - split: train\n - path: data/train-*\n\nFor more information on how to use this dataset with the Hugging Face library, please refer to their documentation."
] | [
"TAGS\n#region-us \n",
"# ShareGPT4 Dataset\n\nShareGPT4 is a cleaned version of the OpenChat-ShareGPT4 Dataset, designed for training conversational AI models. This dataset contains a collection of conversations, with each conversation consisting of two main features: role and value.",
"## Dataset Info\n\n- Features:\n - conversations:\n - role (string): The role of the speaker in the conversation.\n - value (string): The actual conversation text.\n- Splits:\n - train:\n - Number of examples: 6144\n - Size: 30,322,763 bytes\n- Download Size: 15,605,374 bytes\n- Dataset Size: 30,322,763 bytes",
"## Configs\n\n- Config Name: default\n- Data Files:\n - split: train\n - path: data/train-*\n\nFor more information on how to use this dataset with the Hugging Face library, please refer to their documentation."
] |
a5a26c8bf7c36ccb0870e9bb94474c021fcfd757 | # Dataset Card for "VietnameseNewsparquet"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tmnam20/Vietnamese-News | [
"region:us"
] | 2024-01-15T09:30:14+00:00 | {"dataset_info": [{"config_name": "all", "features": [{"name": "title", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 23505962013, "num_examples": 2421826}], "download_size": 10986340753, "dataset_size": 23505962013}, {"config_name": "baochinhphu", "features": [{"name": "title", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 733982734, "num_examples": 58400}], "download_size": 312699305, "dataset_size": 733982734}, {"config_name": "dantri", "features": [{"name": "title", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1265117393, "num_examples": 100396}], "download_size": 551235606, "dataset_size": 1265117393}, {"config_name": "laodong", "features": [{"name": "title", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2939780592, "num_examples": 392668}], "download_size": 0, "dataset_size": 2939780592}, {"config_name": "qdnd", "features": [{"name": "title", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2731532774, "num_examples": 259691}], "download_size": 0, "dataset_size": 2731532774}, {"config_name": "vietnamnet", "features": [{"name": "title", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 14103390400, "num_examples": 1444898}], "download_size": 6773926864, "dataset_size": 14103390400}, {"config_name": "vnexpress", "features": [{"name": "title", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1235989143, "num_examples": 133438}], "download_size": 537754843, "dataset_size": 1235989143}, {"config_name": "vtc", "features": [{"name": "title", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 230258605, "num_examples": 10440}], "download_size": 66975140, "dataset_size": 230258605}, {"config_name": "zingnews", "features": [{"name": "title", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 265910372, "num_examples": 21895}], "download_size": 124252870, "dataset_size": 265910372}], "configs": [{"config_name": "all", "data_files": [{"split": "train", "path": "all/train-*"}]}, {"config_name": "baochinhphu", "data_files": [{"split": "train", 
"path": "baochinhphu/train-*"}]}, {"config_name": "dantri", "data_files": [{"split": "train", "path": "dantri/train-*"}]}, {"config_name": "laodong", "data_files": [{"split": "train", "path": "laodong/train-*"}]}, {"config_name": "qdnd", "data_files": [{"split": "train", "path": "qdnd/train-*"}]}, {"config_name": "vietnamnet", "data_files": [{"split": "train", "path": "vietnamnet/train-*"}]}, {"config_name": "vnexpress", "data_files": [{"split": "train", "path": "vnexpress/train-*"}]}, {"config_name": "vtc", "data_files": [{"split": "train", "path": "vtc/train-*"}]}, {"config_name": "zingnews", "data_files": [{"split": "train", "path": "zingnews/train-*"}]}]} | 2024-01-16T06:48:40+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "VietnameseNewsparquet"
More Information needed | [
"# Dataset Card for \"VietnameseNewsparquet\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"VietnameseNewsparquet\"\n\nMore Information needed"
] |
6b143af444900f125e011165f2dbebcd669027b9 |
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | abhika-m/fava-flagged-demo | [
"region:us"
] | 2024-01-15T09:31:53+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data.csv"}]}]} | 2024-02-17T14:07:32+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Dataset Name
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Dataset Name",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Dataset Name",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] |
565bd237380654d23cc18caaa1a71cee73160af2 | # Dataset Card for "quality_counter_512"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | AIRI-NLP/quality_counter_512 | [
"region:us"
] | 2024-01-15T09:32:22+00:00 | {"dataset_info": {"features": [{"name": "context", "dtype": "string"}, {"name": "word", "dtype": "string"}, {"name": "claim", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 64206836, "num_examples": 2640}, {"name": "validation", "num_bytes": 18498688, "num_examples": 740}, {"name": "test", "num_bytes": 56239972, "num_examples": 2300}], "download_size": 4494295, "dataset_size": 138945496}} | 2024-01-15T09:32:29+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "quality_counter_512"
More Information needed | [
"# Dataset Card for \"quality_counter_512\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"quality_counter_512\"\n\nMore Information needed"
] |
855b94e901261cbb536a1ef3f0e26c36f006b1b8 |
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | davanstrien/fake-gated-dataset | [
"region:us"
] | 2024-01-15T09:50:13+00:00 | {"extra_gated_prompt": "You agree to not use the dataset to conduct experiments that cause harm to human subjects.", "extra_gated_fields": {"Full Name": "text", "Email": "text", "Researcher Google Scholar Page": "text", "I understand that this Dataset and the videos are protected by copyrights": "checkbox", "I agree to use this dataset for non-commercial use ONLY": "checkbox"}} | 2024-01-15T09:54:14+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Dataset Name
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] |
396fc847240e2298d527e4e96bc521dbe0f49f9b | # Spanish Passage Retrieval
This repository provides data from https://mklab.iti.gr/results/spanish-passage-retrieval-dataset/ as a HF dataset.
The data is not present in the repository but is downloaded on the fly.
There is an S2S (retrieve passages/sentences that are marked as relevant) and an
S2P (retrieve documents that contain relevant passages/sentences) version of the retrieval task. The respective corpora are called `'corpus.sentences'` and `'corpus.documents'`.
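For illustration, one corpus might be loaded like this (a sketch assuming the names above are exposed as dataset configs; adjust if the loading script differs):

```python
from datasets import load_dataset

# Assumption: 'corpus.sentences' is a config name of this dataset;
# 'corpus.documents' would work the same way for the S2P variant.
corpus = load_dataset("jinaai/spanish_passage_retrieval", "corpus.sentences")
print(corpus)
```
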
The qrel data is contained in `'qrels.s2s'` and `'qrels.s2p'`, which hold space-separated lists of relevant documents. | jinaai/spanish_passage_retrieval | [
"region:eu"
] | 2024-01-15T10:08:21+00:00 | {} | 2024-01-18T11:28:44+00:00 | [] | [] | TAGS
#region-eu
| # Spanish Passage Retrieval
This repository provides data from URL as a HF dataset.
The data is not present in the repository but is downloaded on the fly.
There is an S2S (retrieve passages/sentences that are marked as relevant) and an
S2P (retrieve documents that contain relevant passages/sentences) version of the retrieval task. The respective corpora are called ''corpus.sentences'' and ''corpus.documents''.
The qrel data is contained in ''qrels.s2s'' and ''qrels.s2p'', which hold space-separated lists of relevant documents. | [
"# Spanish Passage Retrieval\nThis repository provides data from URL as a HF dataset. \nThe data is not present in the repository but is downloaded on the fly. \n\nThere is an S2S (retrieve passages/sentences that are marked as relevant) and an \nS2P (retrieve documents that contain relevant passages/sentences) version of the retrieval task. The respective corpuses are called ''corpus.sentences'' and ''corpus.documents''.\nThe qrel data is contained in ''qrels.s2s'' and ''qrels.s2p'', which hold space-separated lists of relevant documents."
] | [
"TAGS\n#region-eu \n",
"# Spanish Passage Retrieval\nThis repository provides data from URL as a HF dataset. \nThe data is not present in the repository but is downloaded on the fly. \n\nThere is an S2S (retrieve passages/sentences that are marked as relevant) and an \nS2P (retrieve documents that contain relevant passages/sentences) version of the retrieval task. The respective corpuses are called ''corpus.sentences'' and ''corpus.documents''.\nThe qrel data is contained in ''qrels.s2s'' and ''qrels.s2p'', which hold space-separated lists of relevant documents."
] |
0d61a3eb2087c21f4f63f199bca5f225ddaf03ac | # TMMLU+ : Large scale traditional chinese massive multitask language understanding
<p align="center">
<img src="https://huggingface.co/datasets/ikala/tmmluplus/resolve/main/cover.png" alt="A close-up image of a neat paper note with a white background. The text 'TMMLU+' is written horizontally across the center of the note in bold, black. Join us to work in multimodal LLM : https://ikala.ai/recruit/" style="max-width: 400" width=400 />
</p>
We present TMMLU+, a traditional Chinese massive multitask language understanding dataset. TMMLU+ is a multiple-choice question-answering dataset featuring 66 subjects, ranging from elementary to professional level.
The TMMLU+ dataset is six times larger and contains more balanced subjects compared to its predecessor, [TMMLU](https://github.com/mtkresearch/MR-Models/tree/main/TC-Eval/data/TMMLU). We have included benchmark results in TMMLU+ from closed-source models and 20 open-weight Chinese large language models, with parameters ranging from 1.8B to 72B. The benchmark results show that models focused on Traditional Chinese still lag behind major models trained on Simplified Chinese.
```python
from datasets import load_dataset
task_list = [
'engineering_math', 'dentistry', 'traditional_chinese_medicine_clinical_medicine', 'clinical_psychology', 'technical', 'culinary_skills', 'mechanical', 'logic_reasoning', 'real_estate',
'general_principles_of_law', 'finance_banking', 'anti_money_laundering', 'ttqav2', 'marketing_management', 'business_management', 'organic_chemistry', 'advance_chemistry',
'physics', 'secondary_physics', 'human_behavior', 'national_protection', 'jce_humanities', 'politic_science', 'agriculture', 'official_document_management',
'financial_analysis', 'pharmacy', 'educational_psychology', 'statistics_and_machine_learning', 'management_accounting', 'introduction_to_law', 'computer_science', 'veterinary_pathology',
'accounting', 'fire_science', 'optometry', 'insurance_studies', 'pharmacology', 'taxation', 'trust_practice', 'geography_of_taiwan', 'physical_education', 'auditing', 'administrative_law',
'education_(profession_level)', 'economics', 'veterinary_pharmacology', 'nautical_science', 'occupational_therapy_for_psychological_disorders',
'basic_medical_science', 'macroeconomics', 'trade', 'chinese_language_and_literature', 'tve_design', 'junior_science_exam', 'junior_math_exam', 'junior_chinese_exam',
'junior_social_studies', 'tve_mathematics', 'tve_chinese_language', 'tve_natural_sciences', 'junior_chemistry', 'music', 'education', 'three_principles_of_people',
'taiwanese_hokkien',
'linear_algebra'
]
for task in task_list:
    ds = load_dataset('ZoneTwelve/tmmluplus', task)  # fetch each task once
    val = ds['validation']
    dev = ds['train']  # the 'train' split holds the few-shot development examples
    test = ds['test']
```
For each dataset split

```python
for row in test:
    print(row)  # each row is a dict with keys: question, A, B, C, D, answer
    break

print(test)
>> Dataset({
    features: ['question', 'A', 'B', 'C', 'D', 'answer'],
    num_rows: 11
})
```
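
For illustration, a row can be turned into a zero-shot multiple-choice prompt roughly like this (the template below is an assumption for demonstration, not the exact one used by ievals):

```python
def format_zero_shot(row: dict) -> str:
    # Assumed template: question followed by the four options, ending with an answer cue.
    return (
        f"問題:{row['question']}\n"
        f"A. {row['A']}\n"
        f"B. {row['B']}\n"
        f"C. {row['C']}\n"
        f"D. {row['D']}\n"
        "答案:"
    )

sample = test[0]            # `test` is the split loaded above
prompt = format_zero_shot(sample)
gold = sample["answer"]     # one of 'A', 'B', 'C', 'D'
```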
Statistics across the four categories: STEM, Social Sciences, Humanities, and Other
| Category | Test | Dev | Validation |
|----------------------------------|-------|------|------------|
| STEM | 3458 | 70 | 385 |
| Social Sciences | 5958 | 90 | 665 |
| Humanities | 1763 | 35 | 197 |
| Other (Business, Health, Misc.) | 8939 | 135 | 995 |
| **Total** | 20118 | 330 | 2242 |
## Benchmark on direct prompting
| model | STEM | Social Science | Humanities | Other | Average |
|------------|------------|------------|------------|------------|------------|
| [Qwen/Qwen-72B](https://huggingface.co/Qwen/Qwen-72B) | 61.12 | 71.65 | 63.00 | 61.31 |64.27|
| gpt-4-0613 | 60.36 | 67.36 | 56.03 | 57.62 |60.34|
| [Qwen/Qwen-72B-Chat](https://huggingface.co/Qwen/Qwen-72B-Chat) | 55.15 | 66.20 | 55.65 | 57.19 |58.55|
| [Qwen/Qwen-14B](https://huggingface.co/Qwen/Qwen-14B) | 46.94 | 56.69 | 49.43 | 48.81 |50.47|
| Gemini-pro | 45.38 | 57.29 | 48.80 | 48.21 |49.92|
| [01-ai/Yi-34B-Chat](https://huggingface.co/01-ai/Yi-34B-Chat) | 40.24 | 56.77 | 53.99 | 47.58 |49.64|
| [Qwen/Qwen-14B-Chat](https://huggingface.co/Qwen/Qwen-14B-Chat) | 43.86 | 53.29 | 44.78 | 45.13 |46.77|
| [01-ai/Yi-6B-Chat](https://huggingface.co/01-ai/Yi-6B-Chat) | 39.62 | 50.24 | 44.44 | 44.26 |44.64|
| Claude-1.3 | 42.65 | 49.33 | 42.16 | 44.14 |44.57|
| gpt-3.5-turbo-0613 | 41.56 | 46.72 | 36.73 | 42.03 |41.76|
| [CausalLM/14B](https://huggingface.co/CausalLM/14B) | 39.83 | 44.50 | 39.61 | 41.97 |41.48|
| [Skywork/Skywork-13B-base](https://huggingface.co/Skywork/Skywork-13B-base) | 36.93 | 47.27 | 41.04 | 40.10 |41.33|
| [Qwen/Qwen-7B](https://huggingface.co/Qwen/Qwen-7B) | 37.53 | 45.48 | 38.09 | 38.96 |40.01|
| [Qwen/Qwen-7B-Chat](https://huggingface.co/Qwen/Qwen-7B-Chat) | 33.32 | 44.64 | 40.27 | 39.89 |39.53|
| [vivo-ai/BlueLM-7B-Base](https://huggingface.co/vivo-ai/BlueLM-7B-Base) | 33.94 | 41.52 | 37.38 | 38.74 |37.90|
| [baichuan-inc/Baichuan2-13B-Chat](https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat) | 29.64 | 43.73 | 37.36 | 39.88 |37.65|
| [Qwen/Qwen-1_8B](https://huggingface.co/Qwen/Qwen-1_8B) | 32.65 | 38.95 | 38.34 | 35.27 |36.30|
| Claude-2 | 39.65 | 39.09 | 28.59 | 37.47 |36.20|
| [THUDM/chatglm3-6b](https://huggingface.co/THUDM/chatglm3-6b) | 31.05 | 39.31 | 35.64 | 35.60 |35.40|
| [deepseek-ai/deepseek-llm-7b-chat](https://huggingface.co/deepseek-ai/deepseek-llm-7b-chat) | 29.82 | 42.29 | 34.24 | 34.31 |35.17|
| [CausalLM/7B](https://huggingface.co/CausalLM/7B) | 31.03 | 38.17 | 35.87 | 35.39 |35.11|
| [Azure99/blossom-v3_1-mistral-7b](https://huggingface.co/Azure99/blossom-v3_1-mistral-7b) | 32.80 | 36.91 | 32.36 | 34.53 |34.15|
| [microsoft/Orca-2-13b](https://huggingface.co/microsoft/Orca-2-13b) | 24.69 | 39.18 | 33.60 | 31.99 |32.37|
| [Qwen/Qwen-1_8B-Chat](https://huggingface.co/Qwen/Qwen-1_8B-Chat) | 26.60 | 36.36 | 31.81 | 31.96 |31.68|
| [TigerResearch/tigerbot-13b-chat-v3](https://huggingface.co/TigerResearch/tigerbot-13b-chat-v3) | 24.73 | 29.63 | 25.72 | 27.22 |26.82|
| [hongyin/mistral-7b-80k](https://huggingface.co/hongyin/mistral-7b-80k) | 24.26 | 23.76 | 22.56 | 24.57 |23.79|
| [deepseek-ai/deepseek-llm-67b-chat](https://huggingface.co/deepseek-ai/deepseek-llm-67b-chat) | 19.10 | 26.06 | 21.51 | 21.77 |22.11|
| [yentinglin/Taiwan-LLM-13B-v2.0-chat](https://huggingface.co/yentinglin/Taiwan-LLM-13B-v2.0-chat) | 18.53 | 27.65 | 17.77 | 21.49 |21.36|
| [GeneZC/MiniChat-3B](https://huggingface.co/GeneZC/MiniChat-3B) | 17.66 | 23.35 | 22.71 | 20.34 |21.02|
| [LinkSoul/Chinese-Llama-2-7b](https://huggingface.co/LinkSoul/Chinese-Llama-2-7b) | 16.55 | 18.39 | 12.97 | 16.13 |16.01|
| [yentinglin/Taiwan-LLM-7B-v2.1-chat](https://huggingface.co/yentinglin/Taiwan-LLM-7B-v2.1-chat) | 14.99 | 16.23 | 15.00 | 16.22 |15.61|
| Claude-instant-1 | 12.52 | 17.13 | 15.10 | 13.57 |14.58|
| [FlagAlpha/Atom-7B](https://huggingface.co/FlagAlpha/Atom-7B) | 5.60 | 13.57 | 7.71 | 11.84 |9.68|
Results via [ievals](https://github.com/iKala/ievals) (settings: 0-shot direct answering)
# Citation
```
@article{ikala2023eval,
title={An Improved Traditional Chinese Evaluation Suite for Foundation Model},
author={Tam, Zhi-Rui and Pai, Ya-Ting},
journal={arXiv},
year={2023}
}
```
> NOTE
> This is a modification of ikala/tmmluplus, with minor alterations to facilitate use with lm-evaluation-harness.
> [More details on Discussions](https://huggingface.co/datasets/ZoneTwelve/tmmluplus/discussions/1) | ZoneTwelve/tmmluplus | [
"task_categories:question-answering",
"size_categories:100K<n<1M",
"language:zh",
"license:other",
"traditional chinese",
"finance",
"medical",
"taiwan",
"benchmark",
"zh-tw",
"zh-hant",
"region:us"
] | 2024-01-15T10:09:59+00:00 | {"language": ["zh"], "license": "other", "size_categories": ["100K<n<1M"], "task_categories": ["question-answering"], "pretty_name": "tmmlu++", "license_name": "creative-commons-by-nc", "tags": ["traditional chinese", "finance", "medical", "taiwan", "benchmark", "zh-tw", "zh-hant"], "configs": [{"config_name": "engineering_math", "datafiles": [{"split": "train", "path": "data/engineering_math_dev.csv"}, {"split": "validation", "path": "data/engineering_math_val.csv"}, {"split": "test", "path": "data/engineering_math_test.csv"}]}, {"config_name": "dentistry", "datafiles": [{"split": "train", "path": "data/dentistry_dev.csv"}, {"split": "validation", "path": "data/dentistry_val.csv"}, {"split": "test", "path": "data/dentistry_test.csv"}]}, {"config_name": "traditional_chinese_medicine_clinical_medicine", "datafiles": [{"split": "train", "path": "data/traditional_chinese_medicine_clinical_medicine_dev.csv"}, {"split": "validation", "path": "data/traditional_chinese_medicine_clinical_medicine_val.csv"}, {"split": "test", "path": "data/traditional_chinese_medicine_clinical_medicine_test.csv"}]}, {"config_name": "clinical_psychology", "datafiles": [{"split": "train", "path": "data/clinical_psychology_dev.csv"}, {"split": "validation", "path": "data/clinical_psychology_val.csv"}, {"split": "test", "path": "data/clinical_psychology_test.csv"}]}, {"config_name": "technical", "datafiles": [{"split": "train", "path": "data/technical_dev.csv"}, {"split": "validation", "path": "data/technical_val.csv"}, {"split": "test", "path": "data/technical_test.csv"}]}, {"config_name": "culinary_skills", "datafiles": [{"split": "train", "path": "data/culinary_skills_dev.csv"}, {"split": "validation", "path": "data/culinary_skills_val.csv"}, {"split": "test", "path": "data/culinary_skills_test.csv"}]}, {"config_name": "mechanical", "datafiles": [{"split": "train", "path": "data/mechanical_dev.csv"}, {"split": "validation", "path": "data/mechanical_val.csv"}, {"split": "test", "path": "data/mechanical_test.csv"}]}, {"config_name": "logic_reasoning", "datafiles": [{"split": "train", "path": "data/logic_reasoning_dev.csv"}, {"split": "validation", "path": "data/logic_reasoning_val.csv"}, {"split": "test", "path": "data/logic_reasoning_test.csv"}]}, {"config_name": "real_estate", "datafiles": [{"split": "train", "path": "data/real_estate_dev.csv"}, {"split": "validation", "path": "data/real_estate_val.csv"}, {"split": "test", "path": "data/real_estate_test.csv"}]}, {"config_name": "general_principles_of_law", "datafiles": [{"split": "train", "path": "data/general_principles_of_law_dev.csv"}, {"split": "validation", "path": "data/general_principles_of_law_val.csv"}, {"split": "test", "path": "data/general_principles_of_law_test.csv"}]}, {"config_name": "finance_banking", "datafiles": [{"split": "train", "path": "data/finance_banking_dev.csv"}, {"split": "validation", "path": "data/finance_banking_val.csv"}, {"split": "test", "path": "data/finance_banking_test.csv"}]}, {"config_name": "anti_money_laundering", "datafiles": [{"split": "train", "path": "data/anti_money_laundering_dev.csv"}, {"split": "validation", "path": "data/anti_money_laundering_val.csv"}, {"split": "test", "path": "data/anti_money_laundering_test.csv"}]}, {"config_name": "ttqav2", "datafiles": [{"split": "train", "path": "data/ttqav2_dev.csv"}, {"split": "validation", "path": "data/ttqav2_val.csv"}, {"split": "test", "path": "data/ttqav2_test.csv"}]}, {"config_name": "marketing_management", "datafiles": [{"split": 
"train", "path": "data/marketing_management_dev.csv"}, {"split": "validation", "path": "data/marketing_management_val.csv"}, {"split": "test", "path": "data/marketing_management_test.csv"}]}, {"config_name": "business_management", "datafiles": [{"split": "train", "path": "data/business_management_dev.csv"}, {"split": "validation", "path": "data/business_management_val.csv"}, {"split": "test", "path": "data/business_management_test.csv"}]}, {"config_name": "organic_chemistry", "datafiles": [{"split": "train", "path": "data/organic_chemistry_dev.csv"}, {"split": "validation", "path": "data/organic_chemistry_val.csv"}, {"split": "test", "path": "data/organic_chemistry_test.csv"}]}, {"config_name": "advance_chemistry", "datafiles": [{"split": "train", "path": "data/advance_chemistry_dev.csv"}, {"split": "validation", "path": "data/advance_chemistry_val.csv"}, {"split": "test", "path": "data/advance_chemistry_test.csv"}]}, {"config_name": "physics", "datafiles": [{"split": "train", "path": "data/physics_dev.csv"}, {"split": "validation", "path": "data/physics_val.csv"}, {"split": "test", "path": "data/physics_test.csv"}]}, {"config_name": "secondary_physics", "datafiles": [{"split": "train", "path": "data/secondary_physics_dev.csv"}, {"split": "validation", "path": "data/secondary_physics_val.csv"}, {"split": "test", "path": "data/secondary_physics_test.csv"}]}, {"config_name": "human_behavior", "datafiles": [{"split": "train", "path": "data/human_behavior_dev.csv"}, {"split": "validation", "path": "data/human_behavior_val.csv"}, {"split": "test", "path": "data/human_behavior_test.csv"}]}, {"config_name": "national_protection", "datafiles": [{"split": "train", "path": "data/national_protection_dev.csv"}, {"split": "validation", "path": "data/national_protection_val.csv"}, {"split": "test", "path": "data/national_protection_test.csv"}]}, {"config_name": "jce_humanities", "datafiles": [{"split": "train", "path": "data/jce_humanities_dev.csv"}, {"split": "validation", "path": "data/jce_humanities_val.csv"}, {"split": "test", "path": "data/jce_humanities_test.csv"}]}, {"config_name": "politic_science", "datafiles": [{"split": "train", "path": "data/politic_science_dev.csv"}, {"split": "validation", "path": "data/politic_science_val.csv"}, {"split": "test", "path": "data/politic_science_test.csv"}]}, {"config_name": "agriculture", "datafiles": [{"split": "train", "path": "data/agriculture_dev.csv"}, {"split": "validation", "path": "data/agriculture_val.csv"}, {"split": "test", "path": "data/agriculture_test.csv"}]}, {"config_name": "official_document_management", "datafiles": [{"split": "train", "path": "data/official_document_management_dev.csv"}, {"split": "validation", "path": "data/official_document_management_val.csv"}, {"split": "test", "path": "data/official_document_management_test.csv"}]}, {"config_name": "financial_analysis", "datafiles": [{"split": "train", "path": "data/financial_analysis_dev.csv"}, {"split": "validation", "path": "data/financial_analysis_val.csv"}, {"split": "test", "path": "data/financial_analysis_test.csv"}]}, {"config_name": "pharmacy", "datafiles": [{"split": "train", "path": "data/pharmacy_dev.csv"}, {"split": "validation", "path": "data/pharmacy_val.csv"}, {"split": "test", "path": "data/pharmacy_test.csv"}]}, {"config_name": "educational_psychology", "datafiles": [{"split": "train", "path": "data/educational_psychology_dev.csv"}, {"split": "validation", "path": "data/educational_psychology_val.csv"}, {"split": "test", "path": 
"data/educational_psychology_test.csv"}]}, {"config_name": "statistics_and_machine_learning", "datafiles": [{"split": "train", "path": "data/statistics_and_machine_learning_dev.csv"}, {"split": "validation", "path": "data/statistics_and_machine_learning_val.csv"}, {"split": "test", "path": "data/statistics_and_machine_learning_test.csv"}]}, {"config_name": "management_accounting", "datafiles": [{"split": "train", "path": "data/management_accounting_dev.csv"}, {"split": "validation", "path": "data/management_accounting_val.csv"}, {"split": "test", "path": "data/management_accounting_test.csv"}]}, {"config_name": "introduction_to_law", "datafiles": [{"split": "train", "path": "data/introduction_to_law_dev.csv"}, {"split": "validation", "path": "data/introduction_to_law_val.csv"}, {"split": "test", "path": "data/introduction_to_law_test.csv"}]}, {"config_name": "computer_science", "datafiles": [{"split": "train", "path": "data/computer_science_dev.csv"}, {"split": "validation", "path": "data/computer_science_val.csv"}, {"split": "test", "path": "data/computer_science_test.csv"}]}, {"config_name": "veterinary_pathology", "datafiles": [{"split": "train", "path": "data/veterinary_pathology_dev.csv"}, {"split": "validation", "path": "data/veterinary_pathology_val.csv"}, {"split": "test", "path": "data/veterinary_pathology_test.csv"}]}, {"config_name": "accounting", "datafiles": [{"split": "train", "path": "data/accounting_dev.csv"}, {"split": "validation", "path": "data/accounting_val.csv"}, {"split": "test", "path": "data/accounting_test.csv"}]}, {"config_name": "fire_science", "datafiles": [{"split": "train", "path": "data/fire_science_dev.csv"}, {"split": "validation", "path": "data/fire_science_val.csv"}, {"split": "test", "path": "data/fire_science_test.csv"}]}, {"config_name": "optometry", "datafiles": [{"split": "train", "path": "data/optometry_dev.csv"}, {"split": "validation", "path": "data/optometry_val.csv"}, {"split": "test", "path": "data/optometry_test.csv"}]}, {"config_name": "insurance_studies", "datafiles": [{"split": "train", "path": "data/insurance_studies_dev.csv"}, {"split": "validation", "path": "data/insurance_studies_val.csv"}, {"split": "test", "path": "data/insurance_studies_test.csv"}]}, {"config_name": "pharmacology", "datafiles": [{"split": "train", "path": "data/pharmacology_dev.csv"}, {"split": "validation", "path": "data/pharmacology_val.csv"}, {"split": "test", "path": "data/pharmacology_test.csv"}]}, {"config_name": "taxation", "datafiles": [{"split": "train", "path": "data/taxation_dev.csv"}, {"split": "validation", "path": "data/taxation_val.csv"}, {"split": "test", "path": "data/taxation_test.csv"}]}, {"config_name": "trust_practice", "datafiles": [{"split": "train", "path": "data/trust_practice_dev.csv"}, {"split": "validation", "path": "data/trust_practice_val.csv"}, {"split": "test", "path": "data/trust_practice_test.csv"}]}, {"config_name": "geography_of_taiwan", "datafiles": [{"split": "train", "path": "data/geography_of_taiwan_dev.csv"}, {"split": "validation", "path": "data/geography_of_taiwan_val.csv"}, {"split": "test", "path": "data/geography_of_taiwan_test.csv"}]}, {"config_name": "physical_education", "datafiles": [{"split": "train", "path": "data/physical_education_dev.csv"}, {"split": "validation", "path": "data/physical_education_val.csv"}, {"split": "test", "path": "data/physical_education_test.csv"}]}, {"config_name": "auditing", "datafiles": [{"split": "train", "path": "data/auditing_dev.csv"}, {"split": "validation", "path": 
"data/auditing_val.csv"}, {"split": "test", "path": "data/auditing_test.csv"}]}, {"config_name": "administrative_law", "datafiles": [{"split": "train", "path": "data/administrative_law_dev.csv"}, {"split": "validation", "path": "data/administrative_law_val.csv"}, {"split": "test", "path": "data/administrative_law_test.csv"}]}, {"config_name": "education_(profession_level)", "datafiles": [{"split": "train", "path": "data/education_(profession_level)_dev.csv"}, {"split": "validation", "path": "data/education_(profession_level)_val.csv"}, {"split": "test", "path": "data/education_(profession_level)_test.csv"}]}, {"config_name": "economics", "datafiles": [{"split": "train", "path": "data/economics_dev.csv"}, {"split": "validation", "path": "data/economics_val.csv"}, {"split": "test", "path": "data/economics_test.csv"}]}, {"config_name": "veterinary_pharmacology", "datafiles": [{"split": "train", "path": "data/veterinary_pharmacology_dev.csv"}, {"split": "validation", "path": "data/veterinary_pharmacology_val.csv"}, {"split": "test", "path": "data/veterinary_pharmacology_test.csv"}]}, {"config_name": "nautical_science", "datafiles": [{"split": "train", "path": "data/nautical_science_dev.csv"}, {"split": "validation", "path": "data/nautical_science_val.csv"}, {"split": "test", "path": "data/nautical_science_test.csv"}]}, {"config_name": "occupational_therapy_for_psychological_disorders", "datafiles": [{"split": "train", "path": "data/occupational_therapy_for_psychological_disorders_dev.csv"}, {"split": "validation", "path": "data/occupational_therapy_for_psychological_disorders_val.csv"}, {"split": "test", "path": "data/occupational_therapy_for_psychological_disorders_test.csv"}]}, {"config_name": "basic_medical_science", "datafiles": [{"split": "train", "path": "data/basic_medical_science_dev.csv"}, {"split": "validation", "path": "data/basic_medical_science_val.csv"}, {"split": "test", "path": "data/basic_medical_science_test.csv"}]}, {"config_name": "macroeconomics", "datafiles": [{"split": "train", "path": "data/macroeconomics_dev.csv"}, {"split": "validation", "path": "data/macroeconomics_val.csv"}, {"split": "test", "path": "data/macroeconomics_test.csv"}]}, {"config_name": "trade", "datafiles": [{"split": "train", "path": "data/trade_dev.csv"}, {"split": "validation", "path": "data/trade_val.csv"}, {"split": "test", "path": "data/trade_test.csv"}]}, {"config_name": "chinese_language_and_literature", "datafiles": [{"split": "train", "path": "data/chinese_language_and_literature_dev.csv"}, {"split": "validation", "path": "data/chinese_language_and_literature_val.csv"}, {"split": "test", "path": "data/chinese_language_and_literature_test.csv"}]}, {"config_name": "tve_design", "datafiles": [{"split": "train", "path": "data/tve_design_dev.csv"}, {"split": "validation", "path": "data/tve_design_val.csv"}, {"split": "test", "path": "data/tve_design_test.csv"}]}, {"config_name": "junior_science_exam", "datafiles": [{"split": "train", "path": "data/junior_science_exam_dev.csv"}, {"split": "validation", "path": "data/junior_science_exam_val.csv"}, {"split": "test", "path": "data/junior_science_exam_test.csv"}]}, {"config_name": "junior_math_exam", "datafiles": [{"split": "train", "path": "data/junior_math_exam_dev.csv"}, {"split": "validation", "path": "data/junior_math_exam_val.csv"}, {"split": "test", "path": "data/junior_math_exam_test.csv"}]}, {"config_name": "junior_chinese_exam", "datafiles": [{"split": "train", "path": "data/junior_chinese_exam_dev.csv"}, {"split": "validation", "path": 
"data/junior_chinese_exam_val.csv"}, {"split": "test", "path": "data/junior_chinese_exam_test.csv"}]}, {"config_name": "junior_social_studies", "datafiles": [{"split": "train", "path": "data/junior_social_studies_dev.csv"}, {"split": "validation", "path": "data/junior_social_studies_val.csv"}, {"split": "test", "path": "data/junior_social_studies_test.csv"}]}, {"config_name": "tve_mathematics", "datafiles": [{"split": "train", "path": "data/tve_mathematics_dev.csv"}, {"split": "validation", "path": "data/tve_mathematics_val.csv"}, {"split": "test", "path": "data/tve_mathematics_test.csv"}]}, {"config_name": "tve_chinese_language", "datafiles": [{"split": "train", "path": "data/tve_chinese_language_dev.csv"}, {"split": "validation", "path": "data/tve_chinese_language_val.csv"}, {"split": "test", "path": "data/tve_chinese_language_test.csv"}]}, {"config_name": "tve_natural_sciences", "datafiles": [{"split": "train", "path": "data/tve_natural_sciences_dev.csv"}, {"split": "validation", "path": "data/tve_natural_sciences_val.csv"}, {"split": "test", "path": "data/tve_natural_sciences_test.csv"}]}, {"config_name": "junior_chemistry", "datafiles": [{"split": "train", "path": "data/junior_chemistry_dev.csv"}, {"split": "validation", "path": "data/junior_chemistry_val.csv"}, {"split": "test", "path": "data/junior_chemistry_test.csv"}]}, {"config_name": "music", "datafiles": [{"split": "train", "path": "data/music_dev.csv"}, {"split": "validation", "path": "data/music_val.csv"}, {"split": "test", "path": "data/music_test.csv"}]}, {"config_name": "education", "datafiles": [{"split": "train", "path": "data/education_dev.csv"}, {"split": "validation", "path": "data/education_val.csv"}, {"split": "test", "path": "data/education_test.csv"}]}, {"config_name": "three_principles_of_people", "datafiles": [{"split": "train", "path": "data/three_principles_of_people_dev.csv"}, {"split": "validation", "path": "data/three_principles_of_people_val.csv"}, {"split": "test", "path": "data/three_principles_of_people_test.csv"}]}, {"config_name": "taiwanese_hokkien", "datafiles": [{"split": "train", "path": "data/taiwanese_hokkien_dev.csv"}, {"split": "validation", "path": "data/taiwanese_hokkien_val.csv"}, {"split": "test", "path": "data/taiwanese_hokkien_test.csv"}]}, {"config_name": "linear_algebra", "datafiles": [{"split": "train", "path": "data/linear_algebra_dev.csv"}, {"split": "validation", "path": "data/linear_algebra_val.csv"}, {"split": "test", "path": "data/linear_algebra_test.csv"}]}]} | 2024-01-19T08:10:20+00:00 | [] | [
"zh"
] | TAGS
#task_categories-question-answering #size_categories-100K<n<1M #language-Chinese #license-other #traditional chinese #finance #medical #taiwan #benchmark #zh-tw #zh-hant #region-us
TMMLU+: Large-scale Traditional Chinese massive multitask language understanding
=================================================================================
We present TMMLU+, a Traditional Chinese massive multitask language understanding dataset. TMMLU+ is a multiple-choice question-answering dataset featuring 66 subjects, ranging from elementary to professional level.
The TMMLU+ dataset is six times larger than its predecessor, TMMLU, and has a more balanced subject distribution. We have included benchmark results on TMMLU+ from closed-source models and 20 open-weight Chinese large language models with parameters ranging from 1.8B to 72B. The benchmark results show that Traditional Chinese model variants still lag behind models trained on major Simplified Chinese corpora.
Statistics are reported for each dataset split and across all four categories: STEM, Social Science, Humanities, and Other.
Benchmark on direct prompting
-----------------------------
Results via ievals (settings: 0-shot direct answering)
>
> CONTENT WARNING
> This is a modification of ikala/tmmluplus, with minor alterations made to facilitate its use with lm-evaluation-harness.
> More details are available in the Discussions.
>
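As a minimal loading sketch (assuming the upstream repository id ikala/tmmluplus mentioned above; substitute this fork's own id as appropriate), a single subject configuration can be pulled with the standard datasets API:

```python
from datasets import load_dataset

# one configuration per subject (see the config list in the YAML
# above); each exposes train / validation / test splits
ds = load_dataset("ikala/tmmluplus", "engineering_math", split="test")
print(ds[0])
```

Any of the 66 subject names from the configuration list above can be substituted for `engineering_math`.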
| [] | [
"TAGS\n#task_categories-question-answering #size_categories-100K<n<1M #language-Chinese #license-other #traditional chinese #finance #medical #taiwan #benchmark #zh-tw #zh-hant #region-us \n"
] |
2ac4957a00a4d5232ea390624a679d475f71c4f8 |
# Dataset of toutetsu_yuuma (Touhou)
This is the dataset of toutetsu_yuuma (Touhou), containing 20 images and their tags.
The core tags of this character are `horns, red_eyes, ribbon, earrings, pointy_ears, short_hair, horn_ornament, red_horns, white_hair, horn_ribbon, bangs, curly_hair, sheep_horns, grey_hair, horizontal_pupils`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:-----------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 20 | 44.78 MiB | [Download](https://huggingface.co/datasets/CyberHarem/toutetsu_yuuma_touhou/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 20 | 20.67 MiB | [Download](https://huggingface.co/datasets/CyberHarem/toutetsu_yuuma_touhou/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 54 | 47.08 MiB | [Download](https://huggingface.co/datasets/CyberHarem/toutetsu_yuuma_touhou/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 20 | 37.27 MiB | [Download](https://huggingface.co/datasets/CyberHarem/toutetsu_yuuma_touhou/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 54 | 74.61 MiB | [Download](https://huggingface.co/datasets/CyberHarem/toutetsu_yuuma_touhou/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/toutetsu_yuuma_touhou',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
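As a small follow-up sketch, the loaded items can be used to tally tag frequencies, which roughly reproduces the cluster tables below (this assumes `item.meta['tags']` is a tag-to-score mapping, as suggested by the loading example; adjust if it is a plain list):

```python
from collections import Counter

from waifuc.source import LocalSource

dataset_dir = 'dataset_dir'  # directory extracted in the snippet above

# tally tags across all items; for a tag -> score mapping this
# produces score-weighted counts, for a plain list it counts occurrences
tag_counter = Counter()
for item in LocalSource(dataset_dir):
    tag_counter.update(item.meta['tags'])

print(tag_counter.most_common(10))
```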
## List of Clusters
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 20 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, jewelry, solo, blue_dress, red_sleeves, sharp_teeth, looking_at_viewer, bare_shoulders, detached_sleeves, oversized_object, smile, open_mouth, spoon |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | jewelry | solo | blue_dress | red_sleeves | sharp_teeth | looking_at_viewer | bare_shoulders | detached_sleeves | oversized_object | smile | open_mouth | spoon |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:----------|:-------|:-------------|:--------------|:--------------|:--------------------|:-----------------|:-------------------|:-------------------|:--------|:-------------|:--------|
| 0 | 20 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | X |
| CyberHarem/toutetsu_yuuma_touhou | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2024-01-15T10:22:07+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2024-01-15T10:29:09+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of toutetsu\_yuuma (Touhou)
===================================
This is the dataset of toutetsu\_yuuma (Touhou), containing 20 images and their tags.
The core tags of this character are 'horns, red\_eyes, ribbon, earrings, pointy\_ears, short\_hair, horn\_ornament, red\_horns, white\_hair, horn\_ribbon, bangs, curly\_hair, sheep\_horns, grey\_hair, horizontal\_pupils', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by DeepGHS Team (huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for waifuc loading. If you need it, just run the following code
List of Clusters
----------------
List of tag clustering results; some outfits may be mined here.
### Raw Text Version
### Table Version
| [
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] | [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |
73027ffcfed3121a9874efb861ea507b8a7f6e5a |
# Dataset Card for Evaluation run of sumo43/Yi-32b-x2-v2.0
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [sumo43/Yi-32b-x2-v2.0](https://huggingface.co/sumo43/Yi-32b-x2-v2.0) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset

# one configuration per evaluated task; the "train" split always
# points to the latest results of the run (see note above)
data = load_dataset("open-llm-leaderboard/details_sumo43__Yi-32b-x2-v2.0",
	"harness_winogrande_5",
	split="train")
```
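The aggregated metrics described above can be pulled the same way; a minimal sketch (assuming the "results" configuration is loadable through the standard datasets API, which depends on the repo layout):

```python
from datasets import load_dataset

# "results" stores the aggregated metrics; per the note above,
# the "train" split always points to the latest run
results = load_dataset(
    "open-llm-leaderboard/details_sumo43__Yi-32b-x2-v2.0",
    "results",
    split="train",
)
print(results[0])
```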
## Latest results
These are the [latest results from run 2024-01-17T11:06:51.060608](https://huggingface.co/datasets/open-llm-leaderboard/details_sumo43__Yi-32b-x2-v2.0/blob/main/results_2024-01-17T11-06-51.060608.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.7642971242141403,
"acc_stderr": 0.02819142505165966,
"acc_norm": 0.7688226036476489,
"acc_norm_stderr": 0.02871739914525888,
"mc1": 0.5862913096695227,
"mc1_stderr": 0.0172408618120998,
"mc2": 0.7322370420432542,
"mc2_stderr": 0.014094911817256119
},
"harness|arc:challenge|25": {
"acc": 0.7022184300341296,
"acc_stderr": 0.013363080107244482,
"acc_norm": 0.7303754266211604,
"acc_norm_stderr": 0.012968040686869155
},
"harness|hellaswag|10": {
"acc": 0.6692889862577176,
"acc_stderr": 0.004695076629884535,
"acc_norm": 0.8594901414060944,
"acc_norm_stderr": 0.003468050114923806
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.5,
"acc_stderr": 0.050251890762960605,
"acc_norm": 0.5,
"acc_norm_stderr": 0.050251890762960605
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.762962962962963,
"acc_stderr": 0.03673731683969506,
"acc_norm": 0.762962962962963,
"acc_norm_stderr": 0.03673731683969506
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.875,
"acc_stderr": 0.026913523521537846,
"acc_norm": 0.875,
"acc_norm_stderr": 0.026913523521537846
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.76,
"acc_stderr": 0.04292346959909282,
"acc_norm": 0.76,
"acc_norm_stderr": 0.04292346959909282
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7962264150943397,
"acc_stderr": 0.02479078450177541,
"acc_norm": 0.7962264150943397,
"acc_norm_stderr": 0.02479078450177541
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.875,
"acc_stderr": 0.02765610492929436,
"acc_norm": 0.875,
"acc_norm_stderr": 0.02765610492929436
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.5,
"acc_stderr": 0.050251890762960605,
"acc_norm": 0.5,
"acc_norm_stderr": 0.050251890762960605
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.65,
"acc_stderr": 0.04793724854411019,
"acc_norm": 0.65,
"acc_norm_stderr": 0.04793724854411019
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.4,
"acc_stderr": 0.049236596391733084,
"acc_norm": 0.4,
"acc_norm_stderr": 0.049236596391733084
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.7456647398843931,
"acc_stderr": 0.0332055644308557,
"acc_norm": 0.7456647398843931,
"acc_norm_stderr": 0.0332055644308557
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.5392156862745098,
"acc_stderr": 0.04959859966384181,
"acc_norm": 0.5392156862745098,
"acc_norm_stderr": 0.04959859966384181
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.83,
"acc_stderr": 0.03775251680686371,
"acc_norm": 0.83,
"acc_norm_stderr": 0.03775251680686371
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.7957446808510639,
"acc_stderr": 0.026355158413349424,
"acc_norm": 0.7957446808510639,
"acc_norm_stderr": 0.026355158413349424
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5877192982456141,
"acc_stderr": 0.04630653203366597,
"acc_norm": 0.5877192982456141,
"acc_norm_stderr": 0.04630653203366597
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.7310344827586207,
"acc_stderr": 0.036951833116502325,
"acc_norm": 0.7310344827586207,
"acc_norm_stderr": 0.036951833116502325
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.7354497354497355,
"acc_stderr": 0.02271746789770862,
"acc_norm": 0.7354497354497355,
"acc_norm_stderr": 0.02271746789770862
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.5952380952380952,
"acc_stderr": 0.04390259265377562,
"acc_norm": 0.5952380952380952,
"acc_norm_stderr": 0.04390259265377562
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.6,
"acc_stderr": 0.049236596391733084,
"acc_norm": 0.6,
"acc_norm_stderr": 0.049236596391733084
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.9,
"acc_stderr": 0.01706640371965727,
"acc_norm": 0.9,
"acc_norm_stderr": 0.01706640371965727
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.645320197044335,
"acc_stderr": 0.03366124489051449,
"acc_norm": 0.645320197044335,
"acc_norm_stderr": 0.03366124489051449
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.82,
"acc_stderr": 0.038612291966536955,
"acc_norm": 0.82,
"acc_norm_stderr": 0.038612291966536955
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.8666666666666667,
"acc_stderr": 0.026544435312706456,
"acc_norm": 0.8666666666666667,
"acc_norm_stderr": 0.026544435312706456
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.9292929292929293,
"acc_stderr": 0.018263105420199505,
"acc_norm": 0.9292929292929293,
"acc_norm_stderr": 0.018263105420199505
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9689119170984456,
"acc_stderr": 0.012525310625527033,
"acc_norm": 0.9689119170984456,
"acc_norm_stderr": 0.012525310625527033
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.8,
"acc_stderr": 0.020280805062535726,
"acc_norm": 0.8,
"acc_norm_stderr": 0.020280805062535726
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.46296296296296297,
"acc_stderr": 0.03040178640610151,
"acc_norm": 0.46296296296296297,
"acc_norm_stderr": 0.03040178640610151
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.8277310924369747,
"acc_stderr": 0.02452866497130541,
"acc_norm": 0.8277310924369747,
"acc_norm_stderr": 0.02452866497130541
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.4900662251655629,
"acc_stderr": 0.04081677107248437,
"acc_norm": 0.4900662251655629,
"acc_norm_stderr": 0.04081677107248437
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.9192660550458716,
"acc_stderr": 0.011680172292862083,
"acc_norm": 0.9192660550458716,
"acc_norm_stderr": 0.011680172292862083
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.6574074074074074,
"acc_stderr": 0.032365852526021574,
"acc_norm": 0.6574074074074074,
"acc_norm_stderr": 0.032365852526021574
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.9166666666666666,
"acc_stderr": 0.019398452135813905,
"acc_norm": 0.9166666666666666,
"acc_norm_stderr": 0.019398452135813905
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.9071729957805907,
"acc_stderr": 0.01888975055095671,
"acc_norm": 0.9071729957805907,
"acc_norm_stderr": 0.01888975055095671
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.8071748878923767,
"acc_stderr": 0.026478240960489365,
"acc_norm": 0.8071748878923767,
"acc_norm_stderr": 0.026478240960489365
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.8778625954198473,
"acc_stderr": 0.02871877688934232,
"acc_norm": 0.8778625954198473,
"acc_norm_stderr": 0.02871877688934232
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8925619834710744,
"acc_stderr": 0.028268812192540637,
"acc_norm": 0.8925619834710744,
"acc_norm_stderr": 0.028268812192540637
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.8981481481481481,
"acc_stderr": 0.029239272675632748,
"acc_norm": 0.8981481481481481,
"acc_norm_stderr": 0.029239272675632748
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.8773006134969326,
"acc_stderr": 0.025777328426978927,
"acc_norm": 0.8773006134969326,
"acc_norm_stderr": 0.025777328426978927
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5803571428571429,
"acc_stderr": 0.046840993210771065,
"acc_norm": 0.5803571428571429,
"acc_norm_stderr": 0.046840993210771065
},
"harness|hendrycksTest-management|5": {
"acc": 0.883495145631068,
"acc_stderr": 0.03176683948640406,
"acc_norm": 0.883495145631068,
"acc_norm_stderr": 0.03176683948640406
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.9401709401709402,
"acc_stderr": 0.01553751426325386,
"acc_norm": 0.9401709401709402,
"acc_norm_stderr": 0.01553751426325386
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.91,
"acc_stderr": 0.028762349126466136,
"acc_norm": 0.91,
"acc_norm_stderr": 0.028762349126466136
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.9106002554278416,
"acc_stderr": 0.010203017847688298,
"acc_norm": 0.9106002554278416,
"acc_norm_stderr": 0.010203017847688298
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.8236994219653179,
"acc_stderr": 0.020516425672490714,
"acc_norm": 0.8236994219653179,
"acc_norm_stderr": 0.020516425672490714
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.7854748603351955,
"acc_stderr": 0.01372892340782884,
"acc_norm": 0.7854748603351955,
"acc_norm_stderr": 0.01372892340782884
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.8529411764705882,
"acc_stderr": 0.020279402936174584,
"acc_norm": 0.8529411764705882,
"acc_norm_stderr": 0.020279402936174584
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.8102893890675241,
"acc_stderr": 0.022268196258783225,
"acc_norm": 0.8102893890675241,
"acc_norm_stderr": 0.022268196258783225
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.8765432098765432,
"acc_stderr": 0.018303868806891794,
"acc_norm": 0.8765432098765432,
"acc_norm_stderr": 0.018303868806891794
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.6382978723404256,
"acc_stderr": 0.028663820147199478,
"acc_norm": 0.6382978723404256,
"acc_norm_stderr": 0.028663820147199478
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.5834419817470665,
"acc_stderr": 0.012591153245057392,
"acc_norm": 0.5834419817470665,
"acc_norm_stderr": 0.012591153245057392
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.8161764705882353,
"acc_stderr": 0.023529242185193106,
"acc_norm": 0.8161764705882353,
"acc_norm_stderr": 0.023529242185193106
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.8137254901960784,
"acc_stderr": 0.01575052628436335,
"acc_norm": 0.8137254901960784,
"acc_norm_stderr": 0.01575052628436335
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.7363636363636363,
"acc_stderr": 0.04220224692971987,
"acc_norm": 0.7363636363636363,
"acc_norm_stderr": 0.04220224692971987
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.8285714285714286,
"acc_stderr": 0.024127463462650146,
"acc_norm": 0.8285714285714286,
"acc_norm_stderr": 0.024127463462650146
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8855721393034826,
"acc_stderr": 0.022509345325101713,
"acc_norm": 0.8855721393034826,
"acc_norm_stderr": 0.022509345325101713
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.89,
"acc_stderr": 0.03144660377352203,
"acc_norm": 0.89,
"acc_norm_stderr": 0.03144660377352203
},
"harness|hendrycksTest-virology|5": {
"acc": 0.572289156626506,
"acc_stderr": 0.038515976837185335,
"acc_norm": 0.572289156626506,
"acc_norm_stderr": 0.038515976837185335
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8888888888888888,
"acc_stderr": 0.02410338420207286,
"acc_norm": 0.8888888888888888,
"acc_norm_stderr": 0.02410338420207286
},
"harness|truthfulqa:mc|0": {
"mc1": 0.5862913096695227,
"mc1_stderr": 0.0172408618120998,
"mc2": 0.7322370420432542,
"mc2_stderr": 0.014094911817256119
},
"harness|winogrande|5": {
"acc": 0.8279400157853196,
"acc_stderr": 0.010607731615247007
},
"harness|gsm8k|5": {
"acc": 0.6520090978013646,
"acc_stderr": 0.013120581030382132
}
}
```
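For programmatic access, the same results file linked above can be fetched directly; a minimal sketch (the filename comes from the URL in the "Latest results" heading, and the key layout is an assumption based on the block printed above):

```python
import json

from huggingface_hub import hf_hub_download

# fetch the results JSON linked in the "Latest results" section
path = hf_hub_download(
    repo_id="open-llm-leaderboard/details_sumo43__Yi-32b-x2-v2.0",
    repo_type="dataset",
    filename="results_2024-01-17T11-06-51.060608.json",
)
with open(path) as f:
    data = json.load(f)

# depending on the file version, per-task metrics sit at top level
# (as printed above) or under a "results" key
metrics = data.get("results", data)
print(metrics["all"]["acc"], metrics["all"]["acc_norm"])
```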
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | open-llm-leaderboard/details_sumo43__Yi-32b-x2-v2.0 | [
"region:us"
] | 2024-01-17T11:09:02+00:00 | {"pretty_name": "Evaluation run of sumo43/Yi-32b-x2-v2.0", "dataset_summary": "Dataset automatically created during the evaluation run of model [sumo43/Yi-32b-x2-v2.0](https://huggingface.co/sumo43/Yi-32b-x2-v2.0) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_sumo43__Yi-32b-x2-v2.0\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2024-01-17T11:06:51.060608](https://huggingface.co/datasets/open-llm-leaderboard/details_sumo43__Yi-32b-x2-v2.0/blob/main/results_2024-01-17T11-06-51.060608.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.7642971242141403,\n \"acc_stderr\": 0.02819142505165966,\n \"acc_norm\": 0.7688226036476489,\n \"acc_norm_stderr\": 0.02871739914525888,\n \"mc1\": 0.5862913096695227,\n \"mc1_stderr\": 0.0172408618120998,\n \"mc2\": 0.7322370420432542,\n \"mc2_stderr\": 0.014094911817256119\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.7022184300341296,\n \"acc_stderr\": 0.013363080107244482,\n \"acc_norm\": 0.7303754266211604,\n \"acc_norm_stderr\": 0.012968040686869155\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6692889862577176,\n \"acc_stderr\": 0.004695076629884535,\n \"acc_norm\": 0.8594901414060944,\n \"acc_norm_stderr\": 0.003468050114923806\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.050251890762960605,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.050251890762960605\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.762962962962963,\n \"acc_stderr\": 0.03673731683969506,\n \"acc_norm\": 0.762962962962963,\n \"acc_norm_stderr\": 0.03673731683969506\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.875,\n \"acc_stderr\": 0.026913523521537846,\n \"acc_norm\": 0.875,\n \"acc_norm_stderr\": 0.026913523521537846\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.76,\n \"acc_stderr\": 0.04292346959909282,\n \"acc_norm\": 0.76,\n \"acc_norm_stderr\": 0.04292346959909282\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.7962264150943397,\n \"acc_stderr\": 0.02479078450177541,\n \"acc_norm\": 0.7962264150943397,\n \"acc_norm_stderr\": 0.02479078450177541\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.875,\n \"acc_stderr\": 0.02765610492929436,\n \"acc_norm\": 0.875,\n \"acc_norm_stderr\": 0.02765610492929436\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.050251890762960605,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.050251890762960605\n 
},\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.65,\n \"acc_stderr\": 0.04793724854411019,\n \"acc_norm\": 0.65,\n \"acc_norm_stderr\": 0.04793724854411019\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.4,\n \"acc_stderr\": 0.049236596391733084,\n \"acc_norm\": 0.4,\n \"acc_norm_stderr\": 0.049236596391733084\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.7456647398843931,\n \"acc_stderr\": 0.0332055644308557,\n \"acc_norm\": 0.7456647398843931,\n \"acc_norm_stderr\": 0.0332055644308557\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.5392156862745098,\n \"acc_stderr\": 0.04959859966384181,\n \"acc_norm\": 0.5392156862745098,\n \"acc_norm_stderr\": 0.04959859966384181\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.83,\n \"acc_stderr\": 0.03775251680686371,\n \"acc_norm\": 0.83,\n \"acc_norm_stderr\": 0.03775251680686371\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.7957446808510639,\n \"acc_stderr\": 0.026355158413349424,\n \"acc_norm\": 0.7957446808510639,\n \"acc_norm_stderr\": 0.026355158413349424\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5877192982456141,\n \"acc_stderr\": 0.04630653203366597,\n \"acc_norm\": 0.5877192982456141,\n \"acc_norm_stderr\": 0.04630653203366597\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.7310344827586207,\n \"acc_stderr\": 0.036951833116502325,\n \"acc_norm\": 0.7310344827586207,\n \"acc_norm_stderr\": 0.036951833116502325\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.7354497354497355,\n \"acc_stderr\": 0.02271746789770862,\n \"acc_norm\": 0.7354497354497355,\n \"acc_norm_stderr\": 0.02271746789770862\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.5952380952380952,\n \"acc_stderr\": 0.04390259265377562,\n \"acc_norm\": 0.5952380952380952,\n \"acc_norm_stderr\": 0.04390259265377562\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.6,\n \"acc_stderr\": 0.049236596391733084,\n \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.049236596391733084\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.9,\n \"acc_stderr\": 0.01706640371965727,\n \"acc_norm\": 0.9,\n \"acc_norm_stderr\": 0.01706640371965727\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.645320197044335,\n \"acc_stderr\": 0.03366124489051449,\n \"acc_norm\": 0.645320197044335,\n \"acc_norm_stderr\": 0.03366124489051449\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.82,\n \"acc_stderr\": 0.038612291966536955,\n \"acc_norm\": 0.82,\n \"acc_norm_stderr\": 0.038612291966536955\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.8666666666666667,\n \"acc_stderr\": 0.026544435312706456,\n \"acc_norm\": 0.8666666666666667,\n \"acc_norm_stderr\": 0.026544435312706456\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.9292929292929293,\n \"acc_stderr\": 0.018263105420199505,\n \"acc_norm\": 0.9292929292929293,\n \"acc_norm_stderr\": 0.018263105420199505\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.9689119170984456,\n \"acc_stderr\": 0.012525310625527033,\n \"acc_norm\": 0.9689119170984456,\n \"acc_norm_stderr\": 0.012525310625527033\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.8,\n \"acc_stderr\": 0.020280805062535726,\n \"acc_norm\": 0.8,\n \"acc_norm_stderr\": 
0.020280805062535726\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.46296296296296297,\n \"acc_stderr\": 0.03040178640610151,\n \"acc_norm\": 0.46296296296296297,\n \"acc_norm_stderr\": 0.03040178640610151\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.8277310924369747,\n \"acc_stderr\": 0.02452866497130541,\n \"acc_norm\": 0.8277310924369747,\n \"acc_norm_stderr\": 0.02452866497130541\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.4900662251655629,\n \"acc_stderr\": 0.04081677107248437,\n \"acc_norm\": 0.4900662251655629,\n \"acc_norm_stderr\": 0.04081677107248437\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.9192660550458716,\n \"acc_stderr\": 0.011680172292862083,\n \"acc_norm\": 0.9192660550458716,\n \"acc_norm_stderr\": 0.011680172292862083\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.6574074074074074,\n \"acc_stderr\": 0.032365852526021574,\n \"acc_norm\": 0.6574074074074074,\n \"acc_norm_stderr\": 0.032365852526021574\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.9166666666666666,\n \"acc_stderr\": 0.019398452135813905,\n \"acc_norm\": 0.9166666666666666,\n \"acc_norm_stderr\": 0.019398452135813905\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.9071729957805907,\n \"acc_stderr\": 0.01888975055095671,\n \"acc_norm\": 0.9071729957805907,\n \"acc_norm_stderr\": 0.01888975055095671\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.8071748878923767,\n \"acc_stderr\": 0.026478240960489365,\n \"acc_norm\": 0.8071748878923767,\n \"acc_norm_stderr\": 0.026478240960489365\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.8778625954198473,\n \"acc_stderr\": 0.02871877688934232,\n \"acc_norm\": 0.8778625954198473,\n \"acc_norm_stderr\": 0.02871877688934232\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.8925619834710744,\n \"acc_stderr\": 0.028268812192540637,\n \"acc_norm\": 0.8925619834710744,\n \"acc_norm_stderr\": 0.028268812192540637\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8981481481481481,\n \"acc_stderr\": 0.029239272675632748,\n \"acc_norm\": 0.8981481481481481,\n \"acc_norm_stderr\": 0.029239272675632748\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.8773006134969326,\n \"acc_stderr\": 0.025777328426978927,\n \"acc_norm\": 0.8773006134969326,\n \"acc_norm_stderr\": 0.025777328426978927\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5803571428571429,\n \"acc_stderr\": 0.046840993210771065,\n \"acc_norm\": 0.5803571428571429,\n \"acc_norm_stderr\": 0.046840993210771065\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.883495145631068,\n \"acc_stderr\": 0.03176683948640406,\n \"acc_norm\": 0.883495145631068,\n \"acc_norm_stderr\": 0.03176683948640406\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.9401709401709402,\n \"acc_stderr\": 0.01553751426325386,\n \"acc_norm\": 0.9401709401709402,\n \"acc_norm_stderr\": 0.01553751426325386\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.91,\n \"acc_stderr\": 0.028762349126466136,\n \"acc_norm\": 0.91,\n \"acc_norm_stderr\": 0.028762349126466136\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.9106002554278416,\n \"acc_stderr\": 0.010203017847688298,\n \"acc_norm\": 0.9106002554278416,\n \"acc_norm_stderr\": 0.010203017847688298\n },\n 
\"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.8236994219653179,\n \"acc_stderr\": 0.020516425672490714,\n \"acc_norm\": 0.8236994219653179,\n \"acc_norm_stderr\": 0.020516425672490714\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.7854748603351955,\n \"acc_stderr\": 0.01372892340782884,\n \"acc_norm\": 0.7854748603351955,\n \"acc_norm_stderr\": 0.01372892340782884\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.8529411764705882,\n \"acc_stderr\": 0.020279402936174584,\n \"acc_norm\": 0.8529411764705882,\n \"acc_norm_stderr\": 0.020279402936174584\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.8102893890675241,\n \"acc_stderr\": 0.022268196258783225,\n \"acc_norm\": 0.8102893890675241,\n \"acc_norm_stderr\": 0.022268196258783225\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.8765432098765432,\n \"acc_stderr\": 0.018303868806891794,\n \"acc_norm\": 0.8765432098765432,\n \"acc_norm_stderr\": 0.018303868806891794\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.6382978723404256,\n \"acc_stderr\": 0.028663820147199478,\n \"acc_norm\": 0.6382978723404256,\n \"acc_norm_stderr\": 0.028663820147199478\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.5834419817470665,\n \"acc_stderr\": 0.012591153245057392,\n \"acc_norm\": 0.5834419817470665,\n \"acc_norm_stderr\": 0.012591153245057392\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.8161764705882353,\n \"acc_stderr\": 0.023529242185193106,\n \"acc_norm\": 0.8161764705882353,\n \"acc_norm_stderr\": 0.023529242185193106\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.8137254901960784,\n \"acc_stderr\": 0.01575052628436335,\n \"acc_norm\": 0.8137254901960784,\n \"acc_norm_stderr\": 0.01575052628436335\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7363636363636363,\n \"acc_stderr\": 0.04220224692971987,\n \"acc_norm\": 0.7363636363636363,\n \"acc_norm_stderr\": 0.04220224692971987\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.8285714285714286,\n \"acc_stderr\": 0.024127463462650146,\n \"acc_norm\": 0.8285714285714286,\n \"acc_norm_stderr\": 0.024127463462650146\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8855721393034826,\n \"acc_stderr\": 0.022509345325101713,\n \"acc_norm\": 0.8855721393034826,\n \"acc_norm_stderr\": 0.022509345325101713\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.89,\n \"acc_stderr\": 0.03144660377352203,\n \"acc_norm\": 0.89,\n \"acc_norm_stderr\": 0.03144660377352203\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.572289156626506,\n \"acc_stderr\": 0.038515976837185335,\n \"acc_norm\": 0.572289156626506,\n \"acc_norm_stderr\": 0.038515976837185335\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8888888888888888,\n \"acc_stderr\": 0.02410338420207286,\n \"acc_norm\": 0.8888888888888888,\n \"acc_norm_stderr\": 0.02410338420207286\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.5862913096695227,\n \"mc1_stderr\": 0.0172408618120998,\n \"mc2\": 0.7322370420432542,\n \"mc2_stderr\": 0.014094911817256119\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8279400157853196,\n \"acc_stderr\": 0.010607731615247007\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6520090978013646,\n \"acc_stderr\": 0.013120581030382132\n }\n}\n```", "repo_url": "https://huggingface.co/sumo43/Yi-32b-x2-v2.0", "leaderboard_url": 
"https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2024_01_17T11_06_51.060608", "path": ["**/details_harness|arc:challenge|25_2024-01-17T11-06-51.060608.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2024-01-17T11-06-51.060608.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2024_01_17T11_06_51.060608", "path": ["**/details_harness|gsm8k|5_2024-01-17T11-06-51.060608.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2024-01-17T11-06-51.060608.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2024_01_17T11_06_51.060608", "path": ["**/details_harness|hellaswag|10_2024-01-17T11-06-51.060608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2024-01-17T11-06-51.060608.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2024_01_17T11_06_51.060608", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-17T11-06-51.060608.parquet", 
"**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-prehistory|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-17T11-06-51.060608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-anatomy|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-astronomy|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-college_biology|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2024-01-17T11-06-51.060608.parquet", 
"**/details_harness|hendrycksTest-college_mathematics|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-college_physics|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-computer_security|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-econometrics|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-global_facts|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-human_aging|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-international_law|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-management|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-marketing|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-nutrition|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-philosophy|5_2024-01-17T11-06-51.060608.parquet", 
"**/details_harness|hendrycksTest-prehistory|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-professional_law|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-public_relations|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-security_studies|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-sociology|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-virology|5_2024-01-17T11-06-51.060608.parquet", "**/details_harness|hendrycksTest-world_religions|5_2024-01-17T11-06-51.060608.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2024_01_17T11_06_51.060608", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-17T11-06-51.060608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2024-01-17T11-06-51.060608.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2024_01_17T11_06_51.060608", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-17T11-06-51.060608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2024-01-17T11-06-51.060608.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2024_01_17T11_06_51.060608", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-17T11-06-51.060608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2024-01-17T11-06-51.060608.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2024_01_17T11_06_51.060608", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-17T11-06-51.060608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2024-01-17T11-06-51.060608.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2024_01_17T11_06_51.060608", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-17T11-06-51.060608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2024-01-17T11-06-51.060608.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2024_01_17T11_06_51.060608", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-17T11-06-51.060608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2024-01-17T11-06-51.060608.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2024_01_17T11_06_51.060608", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-17T11-06-51.060608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2024-01-17T11-06-51.060608.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2024_01_17T11_06_51.060608", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-17T11-06-51.060608.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2024-01-17T11-06-51.060608.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2024_01_17T11_06_51.060608", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-17T11-06-51.060608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2024-01-17T11-06-51.060608.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2024_01_17T11_06_51.060608", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-17T11-06-51.060608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2024-01-17T11-06-51.060608.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2024_01_17T11_06_51.060608", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-17T11-06-51.060608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2024-01-17T11-06-51.060608.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2024_01_17T11_06_51.060608", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-17T11-06-51.060608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2024-01-17T11-06-51.060608.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2024_01_17T11_06_51.060608", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-17T11-06-51.060608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2024-01-17T11-06-51.060608.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2024_01_17T11_06_51.060608", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-17T11-06-51.060608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2024-01-17T11-06-51.060608.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2024_01_17T11_06_51.060608", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-17T11-06-51.060608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2024-01-17T11-06-51.060608.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2024_01_17T11_06_51.060608", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-17T11-06-51.060608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2024-01-17T11-06-51.060608.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2024_01_17T11_06_51.060608", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-17T11-06-51.060608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2024-01-17T11-06-51.060608.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2024_01_17T11_06_51.060608", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-17T11-06-51.060608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2024-01-17T11-06-51.060608.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2024_01_17T11_06_51.060608", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-17T11-06-51.060608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2024-01-17T11-06-51.060608.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2024_01_17T11_06_51.060608", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-17T11-06-51.060608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2024-01-17T11-06-51.060608.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2024_01_17T11_06_51.060608", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-17T11-06-51.060608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2024-01-17T11-06-51.060608.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2024_01_17T11_06_51.060608", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-17T11-06-51.060608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2024-01-17T11-06-51.060608.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2024_01_17T11_06_51.060608", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-17T11-06-51.060608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2024-01-17T11-06-51.060608.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2024_01_17T11_06_51.060608", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-17T11-06-51.060608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2024-01-17T11-06-51.060608.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2024_01_17T11_06_51.060608", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-17T11-06-51.060608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2024-01-17T11-06-51.060608.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2024_01_17T11_06_51.060608", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-17T11-06-51.060608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2024-01-17T11-06-51.060608.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2024_01_17T11_06_51.060608", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-17T11-06-51.060608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2024-01-17T11-06-51.060608.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2024_01_17T11_06_51.060608", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-17T11-06-51.060608.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2024-01-17T11-06-51.060608.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2024_01_17T11_06_51.060608", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-17T11-06-51.060608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2024-01-17T11-06-51.060608.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2024_01_17T11_06_51.060608", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-17T11-06-51.060608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2024-01-17T11-06-51.060608.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2024_01_17T11_06_51.060608", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-17T11-06-51.060608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2024-01-17T11-06-51.060608.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2024_01_17T11_06_51.060608", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-17T11-06-51.060608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2024-01-17T11-06-51.060608.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2024_01_17T11_06_51.060608", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-17T11-06-51.060608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2024-01-17T11-06-51.060608.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2024_01_17T11_06_51.060608", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-17T11-06-51.060608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2024-01-17T11-06-51.060608.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2024_01_17T11_06_51.060608", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-17T11-06-51.060608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2024-01-17T11-06-51.060608.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2024_01_17T11_06_51.060608", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-17T11-06-51.060608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2024-01-17T11-06-51.060608.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2024_01_17T11_06_51.060608", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-17T11-06-51.060608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2024-01-17T11-06-51.060608.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2024_01_17T11_06_51.060608", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-17T11-06-51.060608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2024-01-17T11-06-51.060608.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2024_01_17T11_06_51.060608", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-17T11-06-51.060608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2024-01-17T11-06-51.060608.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2024_01_17T11_06_51.060608", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-17T11-06-51.060608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2024-01-17T11-06-51.060608.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2024_01_17T11_06_51.060608", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-17T11-06-51.060608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2024-01-17T11-06-51.060608.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2024_01_17T11_06_51.060608", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-17T11-06-51.060608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2024-01-17T11-06-51.060608.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2024_01_17T11_06_51.060608", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-17T11-06-51.060608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2024-01-17T11-06-51.060608.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2024_01_17T11_06_51.060608", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-17T11-06-51.060608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2024-01-17T11-06-51.060608.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2024_01_17T11_06_51.060608", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-17T11-06-51.060608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2024-01-17T11-06-51.060608.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2024_01_17T11_06_51.060608", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-17T11-06-51.060608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2024-01-17T11-06-51.060608.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2024_01_17T11_06_51.060608", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-17T11-06-51.060608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2024-01-17T11-06-51.060608.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2024_01_17T11_06_51.060608", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-17T11-06-51.060608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2024-01-17T11-06-51.060608.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2024_01_17T11_06_51.060608", "path": ["**/details_harness|hendrycksTest-professional_law|5_2024-01-17T11-06-51.060608.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2024-01-17T11-06-51.060608.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2024_01_17T11_06_51.060608", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-17T11-06-51.060608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2024-01-17T11-06-51.060608.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2024_01_17T11_06_51.060608", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-17T11-06-51.060608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2024-01-17T11-06-51.060608.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2024_01_17T11_06_51.060608", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-17T11-06-51.060608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2024-01-17T11-06-51.060608.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2024_01_17T11_06_51.060608", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-17T11-06-51.060608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2024-01-17T11-06-51.060608.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2024_01_17T11_06_51.060608", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-17T11-06-51.060608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2024-01-17T11-06-51.060608.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2024_01_17T11_06_51.060608", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-17T11-06-51.060608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2024-01-17T11-06-51.060608.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2024_01_17T11_06_51.060608", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-17T11-06-51.060608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2024-01-17T11-06-51.060608.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2024_01_17T11_06_51.060608", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-17T11-06-51.060608.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2024-01-17T11-06-51.060608.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2024_01_17T11_06_51.060608", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-17T11-06-51.060608.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2024-01-17T11-06-51.060608.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2024_01_17T11_06_51.060608", "path": ["**/details_harness|winogrande|5_2024-01-17T11-06-51.060608.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2024-01-17T11-06-51.060608.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2024_01_17T11_06_51.060608", "path": ["results_2024-01-17T11-06-51.060608.parquet"]}, {"split": "latest", "path": 
["results_2024-01-17T11-06-51.060608.parquet"]}]}]} | 2024-01-17T11:09:21+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of sumo43/Yi-32b-x2-v2.0
Dataset automatically created during the evaluation run of model sumo43/Yi-32b-x2-v2.0 on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
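A minimal sketch, assuming the standard `datasets` API; the repository id below follows the usual Open LLM Leaderboard naming convention for details datasets and is an assumption, while `harness_winogrande_5` is one of the configurations listed in this card's metadata:

```python
from datasets import load_dataset

# Load one of the 63 task configurations; the "latest" split always
# points to the most recent evaluation run for that task.
data = load_dataset(
    "open-llm-leaderboard/details_sumo43__Yi-32b-x2-v2.0",  # assumed repo id
    "harness_winogrande_5",
    split="latest",
)
```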
## Latest results
These are the latest results from run 2024-01-17T11:06:51.060608 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
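The aggregated numbers themselves live in the `results` configuration named above; a hedged sketch of fetching them, using the same assumed repository id as before:

```python
from datasets import load_dataset

# The "results" configuration holds the aggregated metrics per
# evaluation run; "latest" selects the most recent one.
results = load_dataset(
    "open-llm-leaderboard/details_sumo43__Yi-32b-x2-v2.0",  # assumed repo id
    "results",
    split="latest",
)
print(results[0])
```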
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Evaluation run of sumo43/Yi-32b-x2-v2.0\n\n\n\nDataset automatically created during the evaluation run of model sumo43/Yi-32b-x2-v2.0 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-01-17T11:06:51.060608(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of sumo43/Yi-32b-x2-v2.0\n\n\n\nDataset automatically created during the evaluation run of model sumo43/Yi-32b-x2-v2.0 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2024-01-17T11:06:51.060608(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] |
253440f8a74dc09a331059b3a372988a819825b9 |
# **About**
This dataset is the formatted version of the Isaak-Carter/Openai-function-invocations-20k-with-greetings dataset.
This dataset, uniquely structured with custom special tokens, is meticulously crafted to train language models in complex function invocation and time-contextualized interactions. Each "sample" in the dataset contains a sequence of elements: function definitions, user prompts, function calls, function responses, and the assistant's responses. These elements are separated by custom special tokens, enhancing the dataset's structure for more effective parsing and training.
Key Highlights:
- **Function Definition**: Detailed descriptions of functions, including parameters and types, enabling the model to understand and simulate API-like interactions.
- **User Prompts**: Varied user queries, encouraging the model to handle a diverse range of function-related requests.
- **Function Calls & Responses**: Simulated API call and response patterns, illustrating practical applications of function calls.
- **Time-Contextualized Assistant Responses**: The assistant's responses vary based on the time of the day, indicated by the context. This feature is pivotal in creating AI models that offer time-sensitive responses, from standard greetings to thoughtful reminders for rest during late hours.
Applications:
- **AI Assistants**: Training conversational AI that can understand and interact based on function calls and time context.
- **API Interaction Simulation**: Models that can simulate API interactions based on user requests.
- **Context-Aware Systems**: Developing systems that respond differently based on the time of interaction.
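As a quick start, a minimal loading sketch, assuming only the standard `datasets` API; the single `sample` string field is the one declared in this dataset's metadata:

```python
from datasets import load_dataset

# Each "sample" string bundles the function definitions, user prompt,
# function call/response exchange, and assistant reply, delimited by
# the dataset's custom special tokens.
ds = load_dataset(
    "Isaak-Carter/Formated-openai-function-invocations-20k-with-greetings",
    split="train",
)
print(ds[0]["sample"])
```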
This dataset is a versatile tool for advancing AI capabilities in understanding complex queries, simulating API interactions, and providing context-aware responses. | Isaak-Carter/Formated-openai-function-invocations-20k-with-greetings | [
"task_categories:text-classification",
"task_categories:question-answering",
"task_categories:summarization",
"task_categories:conversational",
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"region:us"
] | 2024-01-17T11:14:27+00:00 | {"language": ["en"], "license": "apache-2.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-classification", "question-answering", "summarization", "conversational", "text-generation"], "dataset_info": {"features": [{"name": "sample", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 20270230, "num_examples": 20432}], "download_size": 6760479, "dataset_size": 20270230}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2024-01-17T11:36:41+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-classification #task_categories-question-answering #task_categories-summarization #task_categories-conversational #task_categories-text-generation #size_categories-10K<n<100K #language-English #license-apache-2.0 #region-us
|
# About
This dataset is the formatted version of the Isaak-Carter/Openai-function-invocations-20k-with-greetings dataset.
This dataset, uniquely structured with custom special tokens, is meticulously crafted to train language models in complex function invocation and time-contextualized interactions. Each "sample" in the dataset contains a sequence of elements: function definitions, user prompts, function calls, function responses, and the assistant's responses. These elements are separated by custom special tokens, enhancing the dataset's structure for more effective parsing and training.
Key Highlights:
- Function Definition: Detailed descriptions of functions, including parameters and types, enabling the model to understand and simulate API-like interactions.
- User Prompts: Varied user queries, encouraging the model to handle a diverse range of function-related requests.
- Function Calls & Responses: Simulated API call and response patterns, illustrating practical applications of function calls.
- Time-Contextualized Assistant Responses: The assistant's responses vary based on the time of the day, indicated by the context. This feature is pivotal in creating AI models that offer time-sensitive responses, from standard greetings to thoughtful reminders for rest during late hours.
Applications:
- AI Assistants: Training conversational AI that can understand and interact based on function calls and time context.
- API Interaction Simulation: Models that can simulate API interactions based on user requests.
- Context-Aware Systems: Developing systems that respond differently based on the time of interaction.
This dataset is a versatile tool for advancing AI capabilities in understanding complex queries, simulating API interactions, and providing context-aware responses. | [
"# About\n\nThis dataset is the formated version of the Isaak-Carter/Openai-function-invocations-20k-with-greetings dataset.\n\nThis dataset, uniquely structured with custom special tokens, is meticulously crafted to train language models in complex function invocation and time-contextualized interactions. Each \"sample\" in the dataset contains a sequence of elements: function definitions, user prompts, function calls, function responses, and the assistant's responses. These elements are separated by custom special tokens, enhancing the dataset's structure for more effective parsing and training.\n\nKey Highlights:\n- Function Definition: Detailed descriptions of functions, including parameters and types, enabling the model to understand and simulate API-like interactions.\n- User Prompts: Varied user queries, encouraging the model to handle a diverse range of function-related requests.\n- Function Calls & Responses: Simulated API call and response patterns, illustrating practical applications of function calls.\n- Time-Contextualized Assistant Responses: The assistant's responses vary based on the time of the day, indicated by the context. This feature is pivotal in creating AI models that offer time-sensitive responses, from standard greetings to thoughtful reminders for rest during late hours.\n\nApplications:\n- AI Assistants: Training conversational AI that can understand and interact based on function calls and time context.\n- API Interaction Simulation: Models that can simulate API interactions based on user requests.\n- Context-Aware Systems: Developing systems that respond differently based on the time of interaction.\n\nThis dataset is a versatile tool for advancing AI capabilities in understanding complex queries, simulating API interactions, and providing context-aware responses."
] | [
"TAGS\n#task_categories-text-classification #task_categories-question-answering #task_categories-summarization #task_categories-conversational #task_categories-text-generation #size_categories-10K<n<100K #language-English #license-apache-2.0 #region-us \n",
"# About\n\nThis dataset is the formated version of the Isaak-Carter/Openai-function-invocations-20k-with-greetings dataset.\n\nThis dataset, uniquely structured with custom special tokens, is meticulously crafted to train language models in complex function invocation and time-contextualized interactions. Each \"sample\" in the dataset contains a sequence of elements: function definitions, user prompts, function calls, function responses, and the assistant's responses. These elements are separated by custom special tokens, enhancing the dataset's structure for more effective parsing and training.\n\nKey Highlights:\n- Function Definition: Detailed descriptions of functions, including parameters and types, enabling the model to understand and simulate API-like interactions.\n- User Prompts: Varied user queries, encouraging the model to handle a diverse range of function-related requests.\n- Function Calls & Responses: Simulated API call and response patterns, illustrating practical applications of function calls.\n- Time-Contextualized Assistant Responses: The assistant's responses vary based on the time of the day, indicated by the context. This feature is pivotal in creating AI models that offer time-sensitive responses, from standard greetings to thoughtful reminders for rest during late hours.\n\nApplications:\n- AI Assistants: Training conversational AI that can understand and interact based on function calls and time context.\n- API Interaction Simulation: Models that can simulate API interactions based on user requests.\n- Context-Aware Systems: Developing systems that respond differently based on the time of interaction.\n\nThis dataset is a versatile tool for advancing AI capabilities in understanding complex queries, simulating API interactions, and providing context-aware responses."
] |
3d4021b58c91c8c4ef6521045dd982ea5c7b0796 |
# Dataset of Wandjina/ワンジナ/旺吉娜 (Fate/Grand Order)
This is the dataset of Wandjina/ワンジナ/旺吉娜 (Fate/Grand Order), containing 22 images and their tags.
The core tags of this character are `short_hair, yellow_eyes, breasts, black_hair, dark_skin, dark-skinned_female, small_breasts, brown_eyes`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:----------|:--------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 22 | 30.91 MiB | [Download](https://huggingface.co/datasets/CyberHarem/wandjina_fgo/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 22 | 16.59 MiB | [Download](https://huggingface.co/datasets/CyberHarem/wandjina_fgo/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 52 | 35.53 MiB | [Download](https://huggingface.co/datasets/CyberHarem/wandjina_fgo/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 22 | 27.65 MiB | [Download](https://huggingface.co/datasets/CyberHarem/wandjina_fgo/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 52 | 51.38 MiB | [Download](https://huggingface.co/datasets/CyberHarem/wandjina_fgo/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
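Any of the packaged variants in the table can also be fetched by filename with `huggingface_hub` alone; a minimal sketch using the 800px IMG+TXT bundle:

```python
from huggingface_hub import hf_hub_download

# Download the 800px IMG+TXT package listed in the table above.
zip_file = hf_hub_download(
    repo_id='CyberHarem/wandjina_fgo',
    repo_type='dataset',
    filename='dataset-800.zip',
)
print(zip_file)  # local path to the downloaded archive
```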
### Load Raw Dataset with Waifuc
We provide a raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need this, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/wandjina_fgo',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; maybe some outfits can be mined here.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------------------------------------------------------------------|
| 0 | 22 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, solo, looking_at_viewer, navel, smile, clothing_cutout, open_mouth, white_background |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | solo | looking_at_viewer | navel | smile | clothing_cutout | open_mouth | white_background |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------|:-------|:--------------------|:--------|:--------|:------------------|:-------------|:-------------------|
| 0 | 22 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X |
| CyberHarem/wandjina_fgo | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2024-01-17T11:17:38+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2024-01-17T11:23:26+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of Wandjina/ワンジナ/旺吉娜 (Fate/Grand Order)
===============================================
This is the dataset of Wandjina/ワンジナ/旺吉娜 (Fate/Grand Order), containing 22 images and their tags.
The core tags of this character are 'short\_hair, yellow\_eyes, breasts, black\_hair, dark\_skin, dark-skinned\_female, small\_breasts, brown\_eyes', which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by DeepGHS Team (huggingface organization).
List of Packages
----------------
### Load Raw Dataset with Waifuc
We provide a raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code
List of Clusters
----------------
List of tag clustering results; maybe some outfits can be mined here.
### Raw Text Version
### Table Version
| [
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] | [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n",
"### Load Raw Dataset with Waifuc\n\n\nWe provide raw dataset (including tagged images) for waifuc loading. If you need this, just run the following code\n\n\nList of Clusters\n----------------\n\n\nList of tag clustering result, maybe some outfits can be mined here.",
"### Raw Text Version",
"### Table Version"
] |