# -*- coding: utf-8 -*-
"""Copy of "language_modeling.ipynb"

Automatically generated by Colab.

Original file is located at
    https://colab.research.google.com/drive/1baqtirf_2hHx2-byvSi0iZo4g_5Rm_nZ
"""
# Transformers installation
! pip install transformers datasets
# To install from source instead of the last release, comment the command above and uncomment the following one.
# ! pip install git+https://github.com/huggingface/transformers.git
"""# Causal language modeling | |
There are two types of language modeling, causal and masked. This guide illustrates causal language modeling. | |
Causal language models are frequently used for text generation. You can use these models for creative applications like | |
choosing your own text adventure or an intelligent coding assistant like Copilot or CodeParrot. | |
""" | |
#@title
from IPython.display import HTML

HTML('<iframe width="560" height="315" src="https://www.youtube.com/embed/Vpjb1lu0MDk?rel=0&controls=0&showinfo=0" frameborder="0" allowfullscreen></iframe>')
"""Causal language modeling predicts the next token in a sequence of tokens, and the model can only attend to tokens on | |
the left. This means the model cannot see future tokens. GPT-2 is an example of a causal language model. | |
This guide will show you how to: | |
1. Finetune [DistilGPT2](https://huggingface.co/distilgpt2) on the [r/askscience](https://www.reddit.com/r/askscience/) subset of the [ELI5](https://huggingface.co/datasets/eli5) dataset. | |
2. Use your finetuned model for inference. | |
<Tip> | |
You can finetune other architectures for causal language modeling following the same steps in this guide. | |
Choose one of the following architectures: | |
<!--This tip is automatically generated by `make fix-copies`, do not fill manually!--> | |
[BART](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/bart), [BERT](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/bert), [Bert Generation](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/bert-generation), [BigBird](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/big_bird), [BigBird-Pegasus](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/bigbird_pegasus), [BioGpt](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/biogpt), [Blenderbot](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/blenderbot), [BlenderbotSmall](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/blenderbot-small), [BLOOM](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/bloom), [CamemBERT](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/camembert), [CodeGen](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/codegen), [CPM-Ant](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/cpmant), [CTRL](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/ctrl), [Data2VecText](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/data2vec-text), [ELECTRA](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/electra), [ERNIE](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/ernie), [GIT](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/git), [GPT-Sw3](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/gpt-sw3), [OpenAI GPT-2](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/gpt2), [GPTBigCode](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/gpt_bigcode), [GPT Neo](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/gpt_neo), [GPT NeoX](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/gpt_neox), [GPT NeoX Japanese](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/gpt_neox_japanese), [GPT-J](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/gptj), [LLaMA](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/llama), [Marian](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/marian), [mBART](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/mbart), [MEGA](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/mega), [Megatron-BERT](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/megatron-bert), [MVP](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/mvp), [OpenLlama](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/open-llama), [OpenAI GPT](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/openai-gpt), [OPT](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/opt), [Pegasus](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/pegasus), [PLBart](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/plbart), [ProphetNet](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/prophetnet), [QDQBert](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/qdqbert), [Reformer](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/reformer), 
[RemBERT](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/rembert), [RoBERTa](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/roberta), [RoBERTa-PreLayerNorm](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/roberta-prelayernorm), [RoCBert](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/roc_bert), [RoFormer](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/roformer), [RWKV](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/rwkv), [Speech2Text2](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/speech_to_text_2), [Transformer-XL](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/transfo-xl), [TrOCR](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/trocr), [XGLM](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/xglm), [XLM](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/xlm), [XLM-ProphetNet](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/xlm-prophetnet), [XLM-RoBERTa](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/xlm-roberta), [XLM-RoBERTa-XL](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/xlm-roberta-xl), [XLNet](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/xlnet), [X-MOD](https://huggingface.co/docs/transformers/main/en/tasks/../model_doc/xmod)
<!--End of the generated tip-->

</Tip>

Before you begin, make sure you have all the necessary libraries installed:

```bash
pip install transformers datasets evaluate
```

We encourage you to log in to your Hugging Face account so you can upload and share your model with the community. When prompted, enter your token to log in:
"""
from huggingface_hub import notebook_login

notebook_login()
"""## Load ELI5 dataset | |
Start by loading a smaller subset of the r/askscience subset of the ELI5 dataset from the 🤗 Datasets library. | |
This'll give you a chance to experiment and make sure everything works before spending more time training on the full dataset. | |
""" | |
from datasets import load_dataset | |
eli5 = load_dataset("eli5", split="train_asks[:5000]") | |
"""Split the dataset's `train_asks` split into a train and test set with the [train_test_split](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.Dataset.train_test_split) method:""" | |
eli5 = eli5.train_test_split(test_size=0.2) | |
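"""A quick optional check (an addition, not part of the original guide): with `test_size=0.2` on the 5,000 examples loaded above, you should see roughly 4,000 train and 1,000 test examples:"""

print(eli5)  # DatasetDict with "train" and "test" splits and their num_rows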
"""Then take a look at an example:""" | |
eli5["train"][0] | |
"""While this may look like a lot, you're only really interested in the `text` field. What's cool about language modeling | |
tasks is you don't need labels (also known as an unsupervised task) because the next word *is* the label. | |
## Preprocess | |
""" | |
#@title
from IPython.display import HTML

HTML('<iframe width="560" height="315" src="https://www.youtube.com/embed/ma1TrR7gE7I?rel=0&controls=0&showinfo=0" frameborder="0" allowfullscreen></iframe>')
"""The next step is to load a DistilGPT2 tokenizer to process the `text` subfield:""" | |
from transformers import AutoTokenizer | |
tokenizer = AutoTokenizer.from_pretrained("distilgpt2") | |
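"""As a side note (an addition, not part of the original guide), GPT-2-style tokenizers ship without a padding token, which is why the end-of-sequence token is reused for padding later in this guide:"""

print(tokenizer.eos_token)  # '<|endoftext|>'
print(tokenizer.pad_token)  # None, until one is assigned before collation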
"""You'll notice from the example above, the `text` field is actually nested inside `answers`. This means you'll need to | |
extract the `text` subfield from its nested structure with the [`flatten`](https://huggingface.co/docs/datasets/process.html#flatten) method: | |
""" | |
eli5 = eli5.flatten() | |
eli5["train"][0] | |
"""Each subfield is now a separate column as indicated by the `answers` prefix, and the `text` field is a list now. Instead | |
of tokenizing each sentence separately, convert the list to a string so you can jointly tokenize them. | |
Here is a first preprocessing function to join the list of strings for each example and tokenize the result: | |
""" | |
def preprocess_function(examples): | |
return tokenizer([" ".join(x) for x in examples["answers.text"]]) | |
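"""A quick optional check (an addition to the original guide): run the function on a small slice to see its output before mapping it over the whole dataset."""

sample = preprocess_function(eli5["train"][:2])
print(list(sample.keys()))       # ['input_ids', 'attention_mask']
print(len(sample["input_ids"]))  # 2, one token sequence per example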
"""To apply this preprocessing function over the entire dataset, use the 🤗 Datasets [map](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.Dataset.map) method. You can speed up the `map` function by setting `batched=True` to process multiple elements of the dataset at once, and increasing the number of processes with `num_proc`. Remove any columns you don't need:""" | |
tokenized_eli5 = eli5.map( | |
preprocess_function, | |
batched=True, | |
num_proc=4, | |
remove_columns=eli5["train"].column_names, | |
) | |
"""This dataset contains the token sequences, but some of these are longer than the maximum input length for the model. | |
You can now use a second preprocessing function to | |
- concatenate all the sequences | |
- split the concatenated sequences into shorter chunks defined by `block_size`, which should be both shorter than the maximum input length and short enough for your GPU RAM. | |
""" | |
block_size = 128

def group_texts(examples):
    # Concatenate all texts.
    concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}
    total_length = len(concatenated_examples[list(examples.keys())[0]])
    # We drop the small remainder; we could add padding instead if the model
    # supported it. You can customize this part to your needs.
    if total_length >= block_size:
        total_length = (total_length // block_size) * block_size
    # Split into chunks of block_size.
    result = {
        k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
        for k, t in concatenated_examples.items()
    }
    result["labels"] = result["input_ids"].copy()
    return result
"""Apply the `group_texts` function over the entire dataset:""" | |
lm_dataset = tokenized_eli5.map(group_texts, batched=True, num_proc=4) | |
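"""Optionally (an addition to the original guide), verify the result: every example should now be exactly `block_size` tokens long, with `labels` identical to `input_ids` (the one-position shift happens inside the model when it computes the loss):"""

example = lm_dataset["train"][0]
print(len(example["input_ids"]))                  # 128
print(example["input_ids"] == example["labels"])  # True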
"""Now create a batch of examples using [DataCollatorForLanguageModeling](https://huggingface.co/docs/transformers/main/en/main_classes/data_collator#transformers.DataCollatorForLanguageModeling). It's more efficient to *dynamically pad* the | |
sentences to the longest length in a batch during collation, instead of padding the whole dataset to the maximum length. | |
Use the end-of-sequence token as the padding token and set `mlm=False`. This will use the inputs as labels shifted to the right by one element: | |
""" | |
from transformers import DataCollatorForLanguageModeling | |
tokenizer.pad_token = tokenizer.eos_token | |
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False) | |
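"""A quick optional look (an addition to the original guide) at what the collator produces: with `mlm=False` it copies `input_ids` into `labels`, and any padded label positions are set to -100 so the loss ignores them (no padding actually occurs here, since every chunk is already `block_size` long):"""

features = [lm_dataset["train"][i] for i in range(2)]
batch = data_collator(features)
print(batch["input_ids"].shape, batch["labels"].shape)  # torch.Size([2, 128]) each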
"""## Train | |
<Tip> | |
If you aren't familiar with finetuning a model with the [Trainer](https://huggingface.co/docs/transformers/main/en/main_classes/trainer#transformers.Trainer), take a look at the [basic tutorial](https://huggingface.co/docs/transformers/main/en/tasks/../training#train-with-pytorch-trainer)! | |
</Tip> | |
You're ready to start training your model now! Load DistilGPT2 with [AutoModelForCausalLM](https://huggingface.co/docs/transformers/main/en/model_doc/auto#transformers.AutoModelForCausalLM): | |
""" | |
from transformers import AutoModelForCausalLM, TrainingArguments, Trainer | |
model = AutoModelForCausalLM.from_pretrained("distilgpt2") | |
"""At this point, only three steps remain: | |
1. Define your training hyperparameters in [TrainingArguments](https://huggingface.co/docs/transformers/main/en/main_classes/trainer#transformers.TrainingArguments). The only required parameter is `output_dir` which specifies where to save your model. You'll push this model to the Hub by setting `push_to_hub=True` (you need to be signed in to Hugging Face to upload your model). | |
2. Pass the training arguments to [Trainer](https://huggingface.co/docs/transformers/main/en/main_classes/trainer#transformers.Trainer) along with the model, datasets, and data collator. | |
3. Call [train()](https://huggingface.co/docs/transformers/main/en/main_classes/trainer#transformers.Trainer.train) to finetune your model. | |
""" | |
training_args = TrainingArguments( | |
output_dir="my_awesome_eli5_clm-model", | |
evaluation_strategy="epoch", | |
learning_rate=2e-5, | |
weight_decay=0.01, | |
push_to_hub=True, | |
) | |
trainer = Trainer( | |
model=model, | |
args=training_args, | |
train_dataset=lm_dataset["train"], | |
eval_dataset=lm_dataset["test"], | |
data_collator=data_collator, | |
) | |
trainer.train() | |
"""Once training is completed, use the [evaluate()](https://huggingface.co/docs/transformers/main/en/main_classes/trainer#transformers.Trainer.evaluate) method to evaluate your model and get its perplexity:""" | |
import math | |
eval_results = trainer.evaluate() | |
print(f"Perplexity: {math.exp(eval_results['eval_loss']):.2f}") | |
"""Then share your model to the Hub with the [push_to_hub()](https://huggingface.co/docs/transformers/main/en/main_classes/trainer#transformers.Trainer.push_to_hub) method so everyone can use your model:""" | |
trainer.push_to_hub() | |
"""<Tip> | |
For a more in-depth example of how to finetune a model for causal language modeling, take a look at the corresponding | |
[PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb) | |
or [TensorFlow notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb). | |
</Tip> | |
## Inference | |
Great, now that you've finetuned a model, you can use it for inference! | |
Come up with a prompt you'd like to generate text from: | |
""" | |
prompt = "Somatic hypermutation allows the immune system to" | |
"""The simplest way to try out your finetuned model for inference is to use it in a [pipeline()](https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#transformers.pipeline). Instantiate a `pipeline` for text generation with your model, and pass your text to it:""" | |
from transformers import pipeline | |
generator = pipeline("text-generation", model="my_awesome_eli5_clm-model") | |
generator(prompt) | |
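"""The pipeline also forwards generation keyword arguments to the underlying model; for example (illustrative settings, an addition to the original guide):"""

generator(prompt, max_new_tokens=50, do_sample=True, top_k=50)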
"""Tokenize the text and return the `input_ids` as PyTorch tensors:""" | |
from transformers import AutoTokenizer | |
tokenizer = AutoTokenizer.from_pretrained("my_awesome_eli5_clm-model") | |
inputs = tokenizer(prompt, return_tensors="pt").input_ids | |
"""Use the [generate()](https://huggingface.co/docs/transformers/main/en/main_classes/text_generation#transformers.GenerationMixin.generate) method to generate text. | |
For more details about the different text generation strategies and parameters for controlling generation, check out the [Text generation strategies](https://huggingface.co/docs/transformers/main/en/tasks/../generation_strategies) page. | |
""" | |
from transformers import AutoModelForCausalLM | |
model = AutoModelForCausalLM.from_pretrained("my_awesome_eli5_clm-model") | |
outputs = model.generate(inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95) | |
"""Decode the generated token ids back into text:""" | |
tokenizer.batch_decode(outputs, skip_special_tokens=True) |