---
language:
- en
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
model-index:
- name: neuronovo-7B-v0.1
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 66.98
      name: normalized accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Neuronovo/neuronovo-7B-v0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 85.07
      name: normalized accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Neuronovo/neuronovo-7B-v0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 63.33
      name: accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Neuronovo/neuronovo-7B-v0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 53.95
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Neuronovo/neuronovo-7B-v0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 78.14
      name: accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Neuronovo/neuronovo-7B-v0.1
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 37.68
      name: accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Neuronovo/neuronovo-7B-v0.1
      name: Open LLM Leaderboard
---
The model described by the provided training code, "Neuronovo/neuronovo-7B-v0.1," is a fine-tuned version of the large language model "teknium/OpenHermes-2.5-Mistral-7B." The code reveals the following characteristics:
**Dataset and Preprocessing:** The model is trained on "Intel/orca_dpo_pairs," a dataset of preference pairs suited to preference-based fine-tuning. Each record is preprocessed into a dialogue format with a system message, the user query, a chosen answer, and a rejected answer, as in the sketch below.
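For illustration, a minimal preprocessing function might look like the following. The exact prompt template is not published with this card; the sketch assumes a ChatML-style layout (the format used by OpenHermes-2.5-Mistral-7B) and the column names of Intel/orca_dpo_pairs (`system`, `question`, `chosen`, `rejected`).

```python
from datasets import load_dataset

def to_dpo_format(example):
    # Build the prompt from the system message and the user question
    # (ChatML-style layout is an assumption, not stated in the card).
    prompt = (
        f"<|im_start|>system\n{example['system']}<|im_end|>\n"
        f"<|im_start|>user\n{example['question']}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )
    return {
        "prompt": prompt,
        "chosen": example["chosen"],     # preferred answer
        "rejected": example["rejected"]  # dispreferred answer
    }

dataset = load_dataset("Intel/orca_dpo_pairs", split="train")
dataset = dataset.map(to_dpo_format, remove_columns=dataset.column_names)
```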
**Tokenizer:** The tokenizer comes from the base "teknium/OpenHermes-2.5-Mistral-7B" model. It is configured to use the end-of-sequence token as the padding token and to pad from the left, a setup geared toward language generation tasks.
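A minimal sketch of that tokenizer setup:

```python
from transformers import AutoTokenizer

# Reproduce the tokenizer configuration described above.
tokenizer = AutoTokenizer.from_pretrained("teknium/OpenHermes-2.5-Mistral-7B")
tokenizer.pad_token = tokenizer.eos_token  # use EOS as the padding token
tokenizer.padding_side = "left"            # pad prompts from the left
```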
**LoRA Configuration:** Fine-tuning uses LoRA (Low-Rank Adaptation) with r=16 and lora_alpha=16, targeting multiple modules of the transformer. Only the low-rank adapter weights are trained, so the bulk of the pre-trained weights stays frozen and adaptation remains efficient.
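A sketch of such a configuration with the `peft` library; only `r` and `lora_alpha` are stated in the card, so the dropout value and exact target-module list here are assumptions (typical choices for Mistral-style models):

```python
from peft import LoraConfig

peft_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.05,  # assumption, not stated in the card
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],  # assumption
    bias="none",
    task_type="CAUSAL_LM",
)
```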
**Fine-Tuning Specifications:** The model is fine-tuned with a DPO (Direct Preference Optimization) trainer, which optimizes the model directly on the chosen/rejected preference pairs from the dataset rather than on a separately trained reward model.
**Training Arguments:** Training uses a cosine learning-rate scheduler, the paged AdamW optimizer, and 4-bit precision for the base model, keeping memory usage low. Gradient checkpointing and gradient accumulation steps are also used, as is typical when training large models efficiently; a hedged configuration sketch follows.
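A sketch of the quantization and training arguments; the scheduler, optimizer, and 4-bit loading follow the card, while the values marked as assumptions are not given:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig, TrainingArguments

# Load the base model in 4-bit precision.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # assumption
    bnb_4bit_compute_dtype=torch.bfloat16,  # assumption
)

model = AutoModelForCausalLM.from_pretrained(
    "teknium/OpenHermes-2.5-Mistral-7B",
    quantization_config=bnb_config,
    device_map="auto",
)

training_args = TrainingArguments(
    output_dir="neuronovo-7B-dpo",
    lr_scheduler_type="cosine",
    optim="paged_adamw_32bit",
    learning_rate=5e-5,              # assumption
    per_device_train_batch_size=1,   # assumption
    gradient_accumulation_steps=4,   # assumption
    gradient_checkpointing=True,
    max_steps=200,                   # assumption
    logging_steps=10,
)
```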
**Performance and Output:** The model is configured for causal language modeling (generating text or continuing dialogues), with a maximum prompt length of 1024 tokens and a maximum sequence length of 1536 tokens, so it can handle extended dialogues and longer generations.
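Putting the pieces together, a DPO training run with `trl` might look roughly like this, reusing the objects from the sketches above. Depending on the `trl` version, the length limits and `beta` may instead belong in a `DPOConfig`; the `beta` value here is an assumption.

```python
from trl import DPOTrainer

dpo_trainer = DPOTrainer(
    model,
    ref_model=None,        # with a PEFT adapter, trl can derive the reference model
    args=training_args,
    train_dataset=dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
    beta=0.1,              # assumption, not stated in the card
    max_prompt_length=1024,
    max_length=1536,
)
dpo_trainer.train()
```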
**Special Features:** The combination of LoRA, DPO training, and 4-bit quantization adapts a large-scale language model to a specific preference dataset while keeping the computational cost modest.
In summary, "Neuronovo/neuronovo-7B-v0.1" is an efficient, preference-tuned language model for dialogue and general text-generation tasks, obtained by applying LoRA-based DPO fine-tuning to OpenHermes-2.5-Mistral-7B.
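A minimal usage sketch for inference with `transformers`; the chat-template call assumes the tokenizer ships a ChatML-style template, as the base model does:

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="Neuronovo/neuronovo-7B-v0.1",
    device_map="auto",
)

messages = [{"role": "user", "content": "Explain LoRA fine-tuning in two sentences."}]
prompt = generator.tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(generator(prompt, max_new_tokens=200, do_sample=True, temperature=0.7)[0]["generated_text"])
```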
Open LLM Leaderboard Evaluation Results
Detailed results can be found on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Neuronovo/neuronovo-7B-v0.1).
| Metric                            | Value |
|-----------------------------------|-------|
| Avg.                              | 64.19 |
| AI2 Reasoning Challenge (25-Shot) | 66.98 |
| HellaSwag (10-Shot)               | 85.07 |
| MMLU (5-Shot)                     | 63.33 |
| TruthfulQA (0-shot)               | 53.95 |
| Winogrande (5-shot)               | 78.14 |
| GSM8k (5-shot)                    | 37.68 |