---
license: apache-2.0
model-index:
- name: Synatra-RP-Orca-2-7b-v0.1
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 57.68
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=maywell/Synatra-RP-Orca-2-7b-v0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 77.37
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=maywell/Synatra-RP-Orca-2-7b-v0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 56.1
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=maywell/Synatra-RP-Orca-2-7b-v0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 52.52
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=maywell/Synatra-RP-Orca-2-7b-v0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 74.59
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=maywell/Synatra-RP-Orca-2-7b-v0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 39.65
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=maywell/Synatra-RP-Orca-2-7b-v0.1
      name: Open LLM Leaderboard
---

# **Synatra-RP-Orca-2-7b-v0.1🐧**

## Support Me
Synatra is a personal project, developed with one person's resources. If you like the model, how about a little research funding?

[Buy me a Coffee](https://www.buymeacoffee.com/mwell)

Want to be a sponsor? (Please!) Contact me on Telegram: **AlzarTakkarsen**

# **Model Details**
**Base Model**
microsoft/Orca-2-7b

**Model Description**
An experimental role-play (RP) SFT model, fine-tuned from microsoft/Orca-2-7b.

**Trained On**
1× A100 80GB

**Instruction format**
Alpaca (works better) and ChatML. A usage sketch is included at the end of this card.

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_maywell__Synatra-RP-Orca-2-7b-v0.1)

| Metric                          |Value|
|---------------------------------|----:|
|Avg.                             |59.65|
|AI2 Reasoning Challenge (25-Shot)|57.68|
|HellaSwag (10-Shot)              |77.37|
|MMLU (5-Shot)                    |56.10|
|TruthfulQA (0-shot)              |52.52|
|Winogrande (5-shot)              |74.59|
|GSM8k (5-shot)                   |39.65|
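
# **Usage Example**
A minimal inference sketch using 🤗 Transformers. The card says Alpaca-format prompts work better but does not spell out the exact template used in training, so the standard Alpaca prompt string and the sampling settings below are assumptions; adjust them if results look off.

```python
# Minimal inference sketch for Synatra-RP-Orca-2-7b-v0.1 (Hugging Face Transformers).
# The Alpaca prompt template below is assumed from the "Instruction format" note above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "maywell/Synatra-RP-Orca-2-7b-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # a 7B model fits on a single 24 GB GPU in fp16
    device_map="auto",
)

# Standard Alpaca-style prompt (assumed template).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n"
    "Introduce yourself in character as a sea-weary pirate captain.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,     # sampling suits RP generation; use greedy decoding for QA-style tasks
    temperature=0.8,
    top_p=0.95,
)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```

For ChatML-style prompting, wrap turns in `<|im_start|>`/`<|im_end|>` markers instead; per the note above, the Alpaca format is expected to give better results with this model.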