---
license: apache-2.0
pipeline_tag: text-generation
language:
  - fr
  - en
  - it
  - de
  - es
tags:
  - pretrained
  - llama-3
  - openllm-france
datasets:
  - OpenLLM-France/Lucie-Training-Dataset
widget:
  - text: |-
      Quelle est la capitale de l'Espagne ? Madrid.
      Quelle est la capitale de la France ?
    example_title: Capital cities in French
    group: 1-shot Question Answering
training_progress:
  num_steps: 756291
  num_tokens: 3131736326144
  context_length: 32000
---

# Model Card for Lucie-7B

## Model Description

Lucie-7B is a pretrained 7B parameter causal language model built by LINAGORA and OpenLLM-France, available under the Apache 2.0 license.

Lucie-7B was trained on 3 trillion tokens of multilingual data, including English (33.2%), French (32.4%), German (6.9%), Spanish (6.6%), Italian (3.8%), and parallel data from those languages (2.5%), as well as several programming languages (14.7%).

## Example code in Python

### Load the model

Load the model (quantized version on GPU if possible, for efficient inference):

```python
import transformers

model_name = "OpenLLM-France/Lucie-7B"

tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)
model = transformers.AutoModelForCausalLM.from_pretrained(model_name,
    device_map="auto",
    load_in_4bit=True       # For efficient inference, if quantization is supported by the GPU card
)
```

### Sentence completion

Wrap the model in a text generation pipeline, and prepare some generation parameters:

```python
pipeline = transformers.pipeline("text-generation", model=model, tokenizer=tokenizer)

generation_kwargs = dict(
    num_return_sequences=1,               # Number of variants to generate.
    return_full_text=False,               # Do not include the prompt in the generated text.
    do_sample=True,
    temperature=1.0, top_p=1, top_k=None, # Sampling parameters.
    max_new_tokens=200,                   # Maximum length of the output text (in number of tokens).
)
```

Try 1-shot question answering:

prompt = """\
Quelle est la capitale de l'Espagne ? Madrid\n\
Quelle est la capitale de la France ?\
"""
completions = pipeline(prompt, **generation_kwargs)
for completion in completions:
    print(prompt + " […]" + completion['generated_text'])

This will print something like:

```
Quelle est la capitale de l'Espagne ? Madrid
Quelle est la capitale de la France ? […] Paris
Quelle est la capitale de l'Italie? Rome
Quelle est la capitale de la Grande-Bretagne? Londres
Quelle est la capitale de la Suisse? Berne
Quelle est la capitale du Portugal? Lisbonne
Quelle est la capitale de l'Algérie? Alger
...
```

If running on a GPU (CUDA device), you will need at least 6GB of VRAM to run inference with 4-bit quantization (16GB of VRAM without 4-bit quantization).
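Note that `load_in_4bit=True` relies on the bitsandbytes library. In recent versions of transformers the same loading can also be expressed through an explicit quantization config; a minimal equivalent sketch (assuming bitsandbytes is installed):

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Same 4-bit loading as above, with the quantization options made explicit
quantization_config = BitsAndBytesConfig(load_in_4bit=True)

model = AutoModelForCausalLM.from_pretrained(
    "OpenLLM-France/Lucie-7B",
    device_map="auto",
    quantization_config=quantization_config,
)
```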

### Load a checkpoint

Checkpoints from several training steps are available under revision tags: every 5000 steps during the first 25000 steps, then every 25000 steps thereafter.

Intermediate checkpoints can be loaded using the revision parameter:

```python
model = transformers.AutoModelForCausalLM.from_pretrained(model_name,
    revision="step0753851",
    ...
)
```

where revision can be any of the available checkpoint tags.
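The available tags can also be listed programmatically; a minimal sketch using huggingface_hub (installed alongside transformers):

```python
from huggingface_hub import list_repo_refs

# Retrieve all revision tags of the model repository (one per published checkpoint)
refs = list_repo_refs("OpenLLM-France/Lucie-7B")
print(sorted(tag.name for tag in refs.tags))
```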

## Training Details

### Training Data

The training dataset used for the pretraining of Lucie-7B is available at OpenLLM-France/Lucie-Training-Dataset.

The initial composition of the training data is as follows:

Initial Data Composition

Some of the data was upsampled to balance the training data distribution, and the final composition is as follows:

Training Data Composition

### Training Procedure

Lucie-7B is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token).
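As an illustration of this objective, a minimal sketch computing the average next-token cross-entropy on a sample sentence, reusing the model and tokenizer loaded in the example above:

```python
import torch

# The labels are the inputs themselves: the model is scored on predicting each next token.
inputs = tokenizer("Quelle est la capitale de la France ?", return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model(**inputs, labels=inputs["input_ids"])
print(outputs.loss)  # average negative log-likelihood per predicted token
```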

It was pre-trained on 512 H100 80GB GPUs for about 550,000 GPU hours on the Jean Zay supercomputer.

The training code is available at https://github.com/OpenLLM-France/Lucie-Training. It is based on this fork of Megatron-DeepSpeed.

Optimizer checkpoints are available at OpenLLM-France/Lucie-7B-optimizer-states.

#### Neural Network Architecture

Lucie-7B has the same neural network architecture as Llama 3.1. It has exactly 6 706 958 336 free parameters, with the following hyperparameters:

| Hyperparameter             | Value  |
|----------------------------|--------|
| Vocabulary size (# tokens) | 65 024 |
| # transformer blocks       | 32     |
| # attention heads          | 32     |
| # key-value heads          | 8      |
| Hidden size                | 4 096  |
| Feed-Forward hidden size   | 12 288 |
| Activation                 | silu   |
| RMS norm epsilon           | 1e-5   |

The parameter "theta" of Rotary Positional Embedding (RoPE) varied during the training process and is indicated in the tables with training hyperparameters below.
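These values can be cross-checked against the published model configuration; a minimal sketch using transformers (the expected values from the table above are given in comments):

```python
import transformers

config = transformers.AutoConfig.from_pretrained("OpenLLM-France/Lucie-7B")
print(config.vocab_size)           # 65024
print(config.num_hidden_layers)    # 32 transformer blocks
print(config.num_attention_heads)  # 32
print(config.num_key_value_heads)  # 8
print(config.hidden_size)          # 4096
print(config.intermediate_size)    # 12288 (feed-forward hidden size)
print(config.hidden_act)           # silu
print(config.rms_norm_eps)         # 1e-5
print(config.rope_theta)           # RoPE theta of the released checkpoint
```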

#### Training Hyperparameters

The training consisted of three main phases:

  1. Main pre-training on 3.1T tokens, with a context length of 4096,
  2. Context extension on 5B tokens, with a context length of 32000,
  3. Annealing, with a selected subset of the training data with especially high quality.

The details of each phase are given below.

##### 1. Main pre-training

Training hyperparameters in torch/Megatron-DeepSpeed were the following:

| Hyperparameter                       | Value                                  |
|--------------------------------------|----------------------------------------|
| Total # samples                      | 762 144 586 (3.1T tokens)              |
| Total # steps                        | 753 851                                |
| RoPE theta                           | 500 000                                |
| Context length                       | 4 096                                  |
| Initial batch size                   | 256                                    |
| Final batch size                     | 1 024                                  |
| Batch size rampup                    | by steps of 64 over 10M samples        |
| Learning rate schedule               | warmup (2M samples) + cosine annealing |
| Maximum learning rate                | 3e-4                                   |
| Final learning rate                  | 3e-5                                   |
| Weight decay                         | 0.1                                    |
| Dropout                              | _                                      |
| Gradient clipping                    | 1                                      |
| Initializer range                    | 0.009                                  |
| Optimizer                            | AdamW (β₁=0.9, β₂=0.95, ε=1e-5)        |
| Precision                            | bfloat16                               |
| Tensor Parallelism (with 512 GPUs)   | 4                                      |
| Pipeline Parallelism (with 512 GPUs) | 4                                      |
| Data Parallelism (with 512 GPUs)     | 32                                     |

##### 2. Context Extension

Training hyperparameters are the same as above, with the following changes:

| Hyperparameter                       | Value               |
|--------------------------------------|---------------------|
| Total # samples                      | 156 250 (5B tokens) |
| Total # steps                        | 1 220               |
| RoPE theta                           | 20 000 000          |
| Context length                       | 32 000              |
| Batch size                           | 128                 |
| Learning rate                        | 2e-5                |
| Tensor Parallelism (with 128 GPUs)   | 4                   |
| Pipeline Parallelism (with 128 GPUs) | 4                   |
| Data Parallelism (with 128 GPUs)     | 8                   |
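As a quick sanity check, the token totals quoted in these two tables follow from multiplying the number of samples by the context length:

```python
# Main pre-training: 762 144 586 samples × 4 096 tokens ≈ 3.1T tokens
print(762_144_586 * 4_096)  # 3121744224256

# Context extension: 156 250 samples × 32 000 tokens = 5B tokens
print(156_250 * 32_000)     # 5000000000
```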

##### 3. Annealing

TODO

#### Training logs and learning curves

🚧 work in progress 🚧

Training logs can be found in TensorBoard format in:

  • metadata/training_logs/
    ├── 1_pretraining.zip: training logs for the main pre-training phase, in a zip file. Each file in the zip corresponds to a job of at most 20H of training (parallelized over 512 GPUs).
    └── 2_extension/: folder containing the training log for the context extension phase, which was done in a single job of around 13H of training (parallelized over 128 GPUs).
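For a quick look at the curves from Python, one possible sketch using the event reader bundled with the tensorboard package (the path is illustrative and assumes the logs have been downloaded and extracted locally):

```python
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

# Illustrative path: point this at an extracted TensorBoard log directory
ea = EventAccumulator("metadata/training_logs/2_extension")
ea.Reload()

for tag in ea.Tags()["scalars"]:                   # available scalar tags (e.g. loss curves)
    events = ea.Scalars(tag)
    print(tag, events[-1].step, events[-1].value)  # last logged step and value per tag
```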

## Acknowledgements

This work was performed using HPC resources from GENCI–IDRIS (Grant 2024-GC011015444).

Lucie-7B was created by members of LINAGORA and the OpenLLM-France community, including, in alphabetical order: Christophe Cerisara (LORIA), Evan Dufraisse (CEA), Julie Hunter (LINAGORA), Jean-Pierre Lorré (LINAGORA), Jérôme Louradour (LINAGORA), Michel-Marie Maudet (LINAGORA), Olivier Gouvert (LINAGORA), Pierre-Carl Langlais (OpSci), Yaya Sy (LORIA).

## Contact

[email protected]