---
language:
- en
license: apache-2.0
model-index:
- name: luxia-21.4b-alignment-v1.0
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 36.93
      name: strict accuracy
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=saltlux/luxia-21.4b-alignment-v1.0
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 48.02
      name: normalized accuracy
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=saltlux/luxia-21.4b-alignment-v1.0
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 6.19
      name: exact match
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=saltlux/luxia-21.4b-alignment-v1.0
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 6.82
      name: acc_norm
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=saltlux/luxia-21.4b-alignment-v1.0
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 12.51
      name: acc_norm
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=saltlux/luxia-21.4b-alignment-v1.0
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 26.7
      name: accuracy
    source:
      url: >-
        https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=saltlux/luxia-21.4b-alignment-v1.0
      name: Open LLM Leaderboard
---
# Introduction

We introduce luxia-21.4b-alignment-v1.0, an instruction-tuned and aligned model based on luxia-21.4b. Please refer to the evaluation results table below for details.
# Instruction Fine-tuning Strategy

We utilize state-of-the-art instruction fine-tuning methods, including supervised fine-tuning (SFT) and direct preference optimization (DPO).
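
For readers unfamiliar with DPO, the sketch below shows its core objective in plain PyTorch. It is a minimal illustration of the published DPO loss (Rafailov et al., 2023), not the training code used for this model; the function name and the `beta` default are assumptions.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Direct preference optimization loss.

    Each argument is a batch of summed per-token log-probabilities for the
    chosen/rejected completions under the policy or the frozen reference
    model. `beta` is a hyperparameter; 0.1 is a common default, not the
    value used for this model.
    """
    policy_logratios = policy_chosen_logps - policy_rejected_logps
    ref_logratios = ref_chosen_logps - ref_rejected_logps
    # -log sigmoid(beta * (policy log-ratio - reference log-ratio))
    return -F.logsigmoid(beta * (policy_logratios - ref_logratios)).mean()
```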
# Data Contamination Test Results

Results will be updated soon.
# Evaluation Results

Results will be updated soon.
# Usage Instructions

### How to use

```python
# pip install transformers==4.35.2
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and the model in half precision; device_map="auto"
# places the weights across the available GPUs.
tokenizer = AutoTokenizer.from_pretrained("saltlux/luxia-21.4b-alignment-v1.0")
model = AutoModelForCausalLM.from_pretrained(
    "saltlux/luxia-21.4b-alignment-v1.0",
    device_map="auto",
    torch_dtype=torch.float16,
)
```
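
Once loaded, text can be generated with `model.generate`. The prompt template and sampling settings below are illustrative assumptions; this card does not specify an official chat format.

```python
# Illustrative only: the prompt format and sampling parameters are
# assumptions, not an official template for this model.
prompt = "### User:\nSummarize direct preference optimization in one sentence.\n\n### Assistant:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```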
# License

- saltlux/luxia-21.4b-alignment-v1.0: apache-2.0
# Contact Us

Questions and suggestions are welcome in the Discussions tab.
# Open LLM Leaderboard Evaluation Results

Detailed results can be found [here](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=saltlux/luxia-21.4b-alignment-v1.0).

| Metric              | Value |
|---------------------|------:|
| Avg.                | 22.86 |
| IFEval (0-Shot)     | 36.93 |
| BBH (3-Shot)        | 48.02 |
| MATH Lvl 5 (4-Shot) |  6.19 |
| GPQA (0-shot)       |  6.82 |
| MuSR (0-shot)       | 12.51 |
| MMLU-PRO (5-shot)   | 26.70 |
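
The Avg. row is the unweighted mean of the six benchmark scores above, which is easy to verify:

```python
# Unweighted mean of the six leaderboard scores reported above.
scores = [36.93, 48.02, 6.19, 6.82, 12.51, 26.70]
avg = sum(scores) / len(scores)
print(f"{avg:.2f}")  # 22.86
```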