---
license: apple-ascl
---
DCLM Logo

# Model Card for DCLM-Baseline-7B

DCLM-Baseline-7B is a 7 billion parameter language model trained on the DCLM-Baseline dataset, which was curated as part of the DataComp for Language Models (DCLM) benchmark. This model is designed to showcase the effectiveness of systematic data curation techniques for improving language model performance.

## Model Details

| Size | Training Tokens | Layers | Hidden Size | Attention Heads | Context Length |
|------|-----------------|--------|-------------|-----------------|----------------|
| 7B   | 2.5T            | 32     | 4096        | 32              | 2048           |
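
As a quick sanity check, the table implies a parameter count close to the advertised 7B. A back-of-the-envelope sketch (the vocabulary size and MLP width are assumptions; neither appears in this card):

```python
# Rough decoder-only parameter estimate from the table above.
# Assumptions (not stated in this card): ~50k vocabulary, tied embeddings,
# and the standard 12*d^2 per-layer count (4*d^2 attention + 8*d^2 MLP).
n_layers, d_model, vocab_size = 32, 4096, 50_000

per_layer = 12 * d_model**2          # attention (Q, K, V, O) + MLP projections
embeddings = vocab_size * d_model    # tied input/output embedding matrix
total = n_layers * per_layer + embeddings

print(f"~{total / 1e9:.2f}B parameters")  # ~6.65B, consistent with the "7B" label
```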

### Model Description

- **Developed by:** DataComp for Language Models (DCLM) Team
- **Model type:** Decoder-only Transformer language model
- **Language(s):** English (primarily)
- **License:** Apple Sample Code License
- **Contact:** [email protected]
- **Date:** June 2024

### Model Sources

- **Repository:** https://github.com/mlfoundations/dclm
- **Paper:** [DataComp-LM: In search of the next generation of training sets for language models](https://arxiv.org/abs/2406.11794)

## Uses

### Inference

To use the model for inference:

```python
# Loading DCLM requires OpenLM:
#   pip install git+https://github.com/mlfoundations/open_lm.git
from open_lm.hf import *  # registers the OpenLM architecture with transformers
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("apple/DCLM-7B")
tokenizer = AutoTokenizer.from_pretrained("apple/DCLM-7B")

prompt = "Language modeling is"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
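
The snippet above decodes greedily. For more varied completions, the standard `generate` sampling arguments apply; the values below are illustrative defaults, not settings recommended in this card:

```python
# Sampled decoding with illustrative (untuned) settings.
outputs = model.generate(
    **inputs,
    max_new_tokens=50,
    do_sample=True,    # sample from the distribution instead of greedy argmax
    temperature=0.8,   # assumption: a common default, not tuned for DCLM-7B
    top_p=0.95,        # nucleus sampling cutoff
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```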

## Training Details

The model was trained using the following setup:

- **Architecture:** Decoder-only Transformer
- **Framework:** PyTorch with [OpenLM](https://github.com/mlfoundations/open_lm)
- **Optimizer:** AdamW
- **Learning Rate:** 2e-3 (peak)
- **Weight Decay:** 0.05
- **Batch Size:** 2048 sequences
- **Sequence Length:** 2048 tokens
- **Total Training Tokens:** 2.5T
- **Hardware:** H100 GPUs

For more detailed training information, please refer to Section 3.4 and Appendix F of the [DCLM paper](https://arxiv.org/abs/2406.11794).
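
As a reference point, the hyperparameters above translate into plain PyTorch roughly as follows. This is a minimal sketch, not the OpenLM training loop: the warmup length and cosine decay shape are assumptions, and the step count is simply 2.5T tokens divided by the 2048 × 2048 tokens processed per step.

```python
import math
import torch

model = torch.nn.Linear(4096, 4096)  # stand-in for the actual OpenLM model

# Settings from the card: AdamW, peak LR 2e-3, weight decay 0.05.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-3, weight_decay=0.05)

tokens_per_step = 2048 * 2048                 # batch size * sequence length
total_steps = int(2.5e12 / tokens_per_step)   # ~596k steps for 2.5T tokens
warmup_steps = 5_000                          # assumption, not from the card

def lr_lambda(step: int) -> float:
    """Linear warmup to the peak LR, then cosine decay (assumed schedule)."""
    if step < warmup_steps:
        return step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * (1.0 + math.cos(math.pi * progress))

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
```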

## Evaluation

Here are the evaluation results for DCLM-Baseline-7B on various tasks:

| Task | Score |
|------|-------|
| CORE | 57.1 |
| MMLU (5-shot) | 63.7 |
| EXTENDED | 45.4 |
| ARC Challenge | 57.68 |
| ARC Easy | 81.82 |
| BoolQ | 83.36 |
| COPA | 87.00 |
| HellaSwag | 80.68 |
| OpenBookQA | 46.40 |
| PIQA | 80.85 |
| Winogrande | 73.80 |
| AGI Eval LSAT AR (3-shot) | 29.57 |
| GSM8K (CoT) | 17.13 |

For a complete list of evaluation results, please refer to the full evaluation JSON file.
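
The scores above come from the DCLM evaluation suite. To spot-check an individual number such as 5-shot MMLU, one option is EleutherAI's lm-evaluation-harness; the sketch below is an assumption-laden illustration, not the harness that produced this table:

```python
# pip install lm-eval  (EleutherAI lm-evaluation-harness, v0.4+)
from open_lm.hf import *  # register the OpenLM architecture first (see Inference above)
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                             # Hugging Face transformers backend
    model_args="pretrained=apple/DCLM-7B",
    tasks=["mmlu"],
    num_fewshot=5,
)
print(results["results"]["mmlu"])
```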

## Limitations and Biases

While DCLM-Baseline-7B demonstrates strong performance across a range of tasks, it's important to note:

  1. The model may exhibit biases present in its training data, which is derived from web crawl data.
  2. It has not undergone specific alignment or safety fine-tuning, so outputs should be used with caution.
  3. Performance on tasks not included in the evaluation suite may vary.
  4. The model's knowledge is limited to its training data cutoff date.

## Ethical Considerations

Users should be aware that this model, like all large language models, can potentially generate harmful or biased content. It should not be used for making decisions about individuals or in sensitive applications without appropriate safeguards and human oversight.

## Citation

If you use this model in your research, please cite:

```bibtex
@article{Li2024DataCompLM,
  title={DataComp-LM: In search of the next generation of training sets for language models},
  author={Jeffrey Li and Alex Fang and Georgios Smyrnis and Maor Ivgi and Matt Jordan and Samir Gadre and Hritik Bansal and Etash Guha and Sedrick Keh and Kushal Arora and [... full author list]},
  journal={arXiv preprint arXiv:2406.11794},
  year={2024}
}
```