Columns: user (string, 3–28 chars), created_at (timestamp[us]), body (string, 1–173k chars), issue_number (int64, 1–2.57k)
HuggingFaceDocBuilderDev
2023-01-13T15:15:21
_The documentation is not available anymore as the PR was closed or merged._
81
HuggingFaceDocBuilderDev
2023-01-07T20:31:47
_The documentation is not available anymore as the PR was closed or merged._
80
lvwerra
2023-01-13T15:10:49
Hi @edbeeching, thanks for updating the notebook! Looks really good, here are a few points (see the sketch below):
- Similar to the main sentiment notebook, we could just use the `pipeline` for the classification and get rid of `build_bert_batch_from_text`.
- We can indeed use `accelerate` for the device placement; `ppo_trainer.accelerator.device` is where you should get the device from.
- If we replace `sentiment_model.to(device)` with `sentiment_model.to(device);` we can avoid printing the whole model graph.
- I wonder if we should remove most of the logging from the loop for simplicity, what do you think?
- All the controlled continuations could be done much more easily with the `text-generation` pipeline. I was young and didn't know any better :)
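For illustration, a minimal sketch of the first two points (assuming the `transformers` pipeline API and the `ppo_trainer` / `batch` variables from the notebook; the sentiment model name is taken from the main sentiment notebook):

```python
import torch
from transformers import pipeline

# Device placement via accelerate instead of hard-coding cuda/cpu.
device = ppo_trainer.accelerator.device

# Classification via the pipeline instead of build_bert_batch_from_text.
sentiment_pipe = pipeline("sentiment-analysis", model="lvwerra/distilbert-imdb", device=device)

texts = [q + r for q, r in zip(batch["query"], batch["response"])]
pipe_outputs = sentiment_pipe(texts, return_all_scores=True)

# Use the positive-class score as the reward.
rewards = [torch.tensor(out[1]["score"]) for out in pipe_outputs]
```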
80
lvwerra
2023-01-25T12:03:11
There are still some issues regarding loss spikes (see #101). Will merge for now and investigate further. @natolambert you can use this notebook to investigate the spikes. Full logs of a run can be found [here](https://wandb.ai/lvwerra/trl/runs/e45k3la0).
80
edbeeching
2023-01-25T14:46:21
@lvwerra and @younesbelkada, thanks for looking at, fixing, and merging this. I have gone a bit quiet due to paternity leave; looking forward to getting back to work :)
80
natolambert
2023-01-30T21:48:56
I am going to make another PR where this notebook is in example form -- much easier for doing multiple jobs and wider-scale experimentation. It's also interesting that @edbeeching's example didn't have the reward spike. I keep finding things to play with, so that's good for now.
80
natolambert
2023-01-28T01:25:12
Should be solved in #80, feel free to re-open if that's not the case.
79
HuggingFaceDocBuilderDev
2023-01-05T11:26:38
_The documentation is not available anymore as the PR was closed or merged._
78
younesbelkada
2023-01-05T11:31:38
Regarding 8-bit Adam, it is quite hard to make it converge. I have found that the model rapidly falls into a collapse mode: https://wandb.ai/distill-bloom/trl/runs/k7vogzao?workspace=user-younesbelkada Let me know if it still makes sense to add the example.
78
younesbelkada
2023-01-05T14:05:24
Thanks! Let's address the scheduler in a follow up PR!
78
LouisCastricato
2023-01-08T14:12:46
@younesbelkada FYI, 8-bit Adam converges only after you do a lot of work on reward normalization - see https://github.com/CarperAI/trlx/issues/53. We also had significant issues getting it working. There was also a recent bug in computing values that we found, which I believe was carried over from TRL; I'll have to double-check with one of my engineers on this.
78
LouisCastricato
2023-01-08T14:18:38
Never mind, it appears the bug is a non-issue for TRL.
78
HuggingFaceDocBuilderDev
2023-01-05T11:13:08
_The documentation is not available anymore as the PR was closed or merged._
77
HuggingFaceDocBuilderDev
2023-01-05T10:57:00
_The documentation is not available anymore as the PR was closed or merged._
76
HuggingFaceDocBuilderDev
2023-01-05T09:35:35
The docs for this PR live [here](/static-proxy?url=https%3A%2F%2Fmoon-ci-docs.huggingface.co%2Fdocs%2Ftrl%2Fpr_75). All of your documentation changes will be reflected on that endpoint.
75
SSamDav
2023-01-10T00:00:03
Hi! Nice job supporting Enc-Dec models, I wanted to do it myself but I just saw this PR 😅 Don't know if you need a helping hand, but if you want I can help you fix the code.
75
younesbelkada
2023-01-20T17:41:56
Closing in favor of #93. @SSamDav thanks a lot for opening the PR https://github.com/younesbelkada/trl/pull/1 - it helped us quite a bit to fix the issues we had when enabling Enc-Dec model support! We added you as a co-author on #93 💪 Again, thanks for your help!
75
SSamDav
2023-01-04T16:40:43
I think I got the answer; it is inside the loss function 😅
73
lvwerra
2023-01-13T15:43:18
I think it is quite common to optimize PPO with small batch sizes, but maybe @natolambert or @edbeeching know better whether we should change this?
72
natolambert
2023-01-13T16:34:48
Ah, I need to dig through my John Schulman RLHF media tour notes. I vaguely remember the concept coming up. I'm really not sure.
72
lvwerra
2023-01-23T15:00:00
In principle, now that data parallelism via `accelerate` is supported, you effectively train with a batch size equal to the number of GPUs used.
72
vwxyzjn
2023-01-28T01:46:53
Hey @lvwerra cool library! PPO can deal with both large and small batch sizes depending on the task. The `batch_size` equals `num_envs * num_steps`, where `num_envs` is the number of envs in RL (maybe the number of conversations in RLHF), and `num_steps` is the number of steps each env takes (maybe the number of responses the same model generates in RLHF in the same sequence of conversation). In IsaacGym / Brax, it's common to use a large `num_envs` and small `num_steps`. E.g., `num_envs=4096` and `num_steps=5`, corresponding to `batch_size=20480`. In Atari, it's common to use a smaller `num_envs` and larger `num_steps`. E.g., `num_envs=8` and `num_steps=128`, corresponding to `batch_size=1024`. The InstructGPT paper uses `batch_size = 32` ("We use a batch size of 32 for 1.3B and 6B models and 8 for the 175B model." Appendix C3), so I am imagining it's using `num_steps=1` (which also correlates nicely with their bandit environment setting) and 32 prompts as obs, 32 responses as actions, and 32 scalar rewards.
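To make the bookkeeping concrete, a tiny sketch (purely illustrative, using the example settings above):

```python
# batch_size = num_envs * num_steps, with the example settings above.
def ppo_batch_size(num_envs: int, num_steps: int) -> int:
    return num_envs * num_steps

print(ppo_batch_size(4096, 5))  # IsaacGym / Brax style -> 20480
print(ppo_batch_size(8, 128))   # Atari style           -> 1024
print(ppo_batch_size(32, 1))    # bandit-style RLHF     -> 32
```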
72
xesdiny
2023-01-29T12:08:55
@vwxyzjn `batch_size = num_envs * num_steps` - does this mean building multiple envs to collect rollouts? Assuming the policy model is an LLM, the forward pass constructed in `step()` is used to compute the PPO-clip or PPO-ptx loss and run the backward pass. With the current code implementation, out-of-memory errors should appear. I have been thinking about this approach; would you consider using ZeRO-Offload to handle the tensors generated by the rollout?
72
lvwerra
2023-01-30T11:51:34
Thanks @vwxyzjn for the clarification of the nomenclature! I think the hyperparameters you are citing are for the initialization of the policy before the PPO training. For the PPO training they mention: > The batch size for each iteration is 512, with a minibatch size of 64. In other words, each batch is randomly split into 8 minibatches and is trained on for only a single inner epoch (Schulman et al., 2017). So indeed a mini-bs > 1 is used. I think we can address that quite easily with #100 if we use the attention mask to mask out the appropriate parts of the input. cc @younesbelkada
72
vwxyzjn
2023-02-06T15:22:48
> Does this mean this is building multi-envs to collect rollouts? I think multi-envs in this case is kind of like multiple instances of conversations :) > The batch size for each iteration is 512, Ah, my mistake. Thanks for the info 🙏 > So indeed a mini-bs>1 is used. I think we can address that quite easily with https://github.com/lvwerra/trl/pull/100 if we use the attention mask to mask out the appropriate parts of the input. cc @younesbelkada Sorry, I am probably missing something... What parts of the input should we mask out related to the minibatch size? It sounds like a minibatch of size 64 would mean 64 independent prompts as obs, 64 responses as actions, and 64 scalar rewards. We are trying to mask out the future tokens in each of these 64 prompts, right?
72
lvwerra
2023-02-07T09:47:58
@vwxyzjn mostly a practical thing: when we batch 64 sequences together, which can have unequal lengths, we need to pad the tensors. In transformers the tensors then usually come with an attention mask telling you where the padding is: we can use this to know where each prompt/response starts and ends, and which padded positions we can ignore.
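A minimal sketch of that mechanism (generic `transformers` usage, not the actual TRL code; the per-token tensor is a stand-in):

```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default

texts = ["a short prompt", "a much longer prompt that needs a lot more tokens"]
enc = tokenizer(texts, padding=True, return_tensors="pt")

# attention_mask is 1 on real tokens and 0 on padding, so per-token quantities
# (logprobs, values, KL terms) can be masked before summing/averaging.
per_token_stats = torch.randn(enc["input_ids"].shape)  # stand-in for per-token logprobs
mask = enc["attention_mask"].float()
masked_mean = (per_token_stats * mask).sum() / mask.sum()  # ignores padded positions
```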
72
younesbelkada
2023-01-04T20:11:08
Hi, yes, we are currently refactoring the repository to make it more accessible for more models and to support distributed training. If you want to use the examples in the notebook, please use `trl` from the previous release (`pip install trl`). Check #64
71
HuggingFaceDocBuilderDev
2023-01-04T09:16:34
_The documentation is not available anymore as the PR was closed or merged._
70
HuggingFaceDocBuilderDev
2023-01-01T08:23:29
_The documentation is not available anymore as the PR was closed or merged._
69
HuggingFaceDocBuilderDev
2022-12-31T06:49:33
_The documentation is not available anymore as the PR was closed or merged._
68
lewtun
2023-01-05T12:23:33
Thanks for the comments @lvwerra ! I left a few questions that could do with your feedback - in the meantime I'll add some tests :)
68
lewtun
2023-01-23T15:52:16
🔴 Don't merge until I have a fix! Hmm, using the staging endpoint of the Hub for the test is causing some issues because I rely on `whoami()` to get the username in the model card, and that method doesn't allow me to distinguish between endpoints
68
HuggingFaceDocBuilderDev
2022-12-30T10:28:17
_The documentation is not available anymore as the PR was closed or merged._
67
HuggingFaceDocBuilderDev
2022-12-30T10:00:41
_The documentation is not available anymore as the PR was closed or merged._
66
lvwerra
2022-12-30T10:03:15
This should also address #42
66
HuggingFaceDocBuilderDev
2022-12-30T08:56:32
_The documentation is not available anymore as the PR was closed or merged._
65
LouisCastricato
2023-01-08T16:58:18
BTW, I can confirm that SetFit does make for a really good zero-shot RM. There are some issues with using contrastive models as RMs, though. It often requires very careful data cleaning, and identifying what kinds of clusters work as RMs is a dark art, to the point where we decided it wasn't worth seriously pursuing further after CARP CoOp. Rerank models are much better.
64
TristanThrush
2023-01-19T19:29:26
I think the "coolest" dataset we could use to train a model is https://huggingface.co/datasets/openai/webgpt_comparisons, but it is hard to evaluate this sort of model after we train it. I might start by adding a summarization example and some decent ways by which it can be evaluated, and then the webgpt comparisons example.
64
AlexWortega
2023-01-24T17:56:46
https://colab.research.google.com/drive/1hkPBFtMP5xBAjNYMjWH7NqYn118kRLOJ?usp=sharing I am trying to implement my own GPT + TRL with a QA retrieval reward, but I think something is wrong with the reward and/or the generation.
64
natolambert
2023-02-07T01:15:06
@AlexWortega can you open a separate issue / PR for this? Looks interesting, but it may get lost in this big 1.0 roadmap thread.
64
lvwerra
2023-02-07T09:38:15
We ended up calling this release `0.2` (not `1.0`). I am closing the issue and will move the open tasks to a new issue.
64
AlexWortega
2023-02-16T08:58:42
Hi @lvwerra, I opened PR https://github.com/lvwerra/trl/pull/149 with this feature(?) idea.
64
HuggingFaceDocBuilderDev
2022-12-29T17:19:48
_The documentation is not available anymore as the PR was closed or merged._
63
HuggingFaceDocBuilderDev
2022-12-30T08:56:09
_The documentation is not available anymore as the PR was closed or merged._
62
lvwerra
2022-12-30T08:59:44
All comments should be addressed. Also applied the quality checks to the recent merges.
62
HuggingFaceDocBuilderDev
2022-12-30T08:42:02
_The documentation is not available anymore as the PR was closed or merged._
61
HuggingFaceDocBuilderDev
2022-12-27T17:59:06
The docs for this PR live [here](/static-proxy?url=https%3A%2F%2Fmoon-ci-docs.huggingface.co%2Fdocs%2Ftrl%2Fpr_59). All of your documentation changes will be reflected on that endpoint.
59
younesbelkada
2022-12-29T11:55:32
wandb run (multi-GPU) after the latest commit: https://wandb.ai/distill-bloom/trl/runs/1mps4h09?workspace=user-younesbelkada
58
younesbelkada
2022-12-29T17:28:09
Wandb log of the final run: https://wandb.ai/distill-bloom/trl/runs/dcd2gqn1?workspace=user-younesbelkada
58
HuggingFaceDocBuilderDev
2022-12-29T17:28:46
_The documentation is not available anymore as the PR was closed or merged._
58
lvwerra
2023-01-13T15:39:35
Regarding 1): see equation (11) in https://arxiv.org/abs/1506.02438. Regarding 2): yes, you are correct.
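For reference, the generalized advantage estimator defined in that paper is:

```latex
\hat{A}_t^{\mathrm{GAE}(\gamma,\lambda)} = \sum_{l=0}^{\infty} (\gamma\lambda)^l \,\delta_{t+l},
\qquad \delta_t = r_t + \gamma V(s_{t+1}) - V(s_t)
```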
57
lvwerra
2023-01-13T15:35:52
It seems like the reward of your model increases, no? So maybe it's worth investigating whether the classifier actually works well?
56
lvwerra
2023-01-13T15:40:17
Also, the KL-divergence is allowed to rise, but the controller should at some point bring it back down.
56
lvwerra
2022-12-21T07:38:31
Coming soon - see #53!
54
22Mukesh22
2022-12-22T05:48:51
That's great, waiting for GPT-J to learn through human feedback! But what do you think: will a BERT classifier be able to reward the generated text, or will there be some other reward model that can give a score for the generated output?
54
conceptofmind
2022-12-28T03:35:55
Are we able to use any Causal LLM from the model hub now that #53 is merged?
54
lvwerra
2023-01-13T15:25:51
Yes, that should work!
54
younesbelkada
2022-12-21T12:17:55
Seems to be converging with the latest changes: https://wandb.ai/distill-bloom/gpt2-test/runs/1sxufahx?workspace=user-younesbelkada
53
younesbelkada
2022-12-19T21:25:11
Moved all images inside the org https://huggingface.co/trl-internal-testing and fixed all image links in the README + notebooks with the correct ones. Also, as discussed, I removed the first 3 notebooks ;) Let me know what is missing here!
52
lvwerra
2022-12-20T08:48:43
Seems not to be possible: https://stackoverflow.com/questions/66587174/how-to-remove-generated-from-tag
52
younesbelkada
2022-12-20T08:50:41
Thanks for the review! I should now have removed the CI and done the renaming of the files ;-)
52
younesbelkada
2022-12-14T13:37:29
For now I am testing my implementation with `accelerate launch example/ppo-accelerate.py`
50
younesbelkada
2022-12-15T10:49:55
Regarding tests, this is tricky, but from what I can see we can for now:
- Test that all trainers respect the inheritance from `BaseTrainer` (by checking whether all the needed functions are implemented).
- Test that all models work as expected (thinking of the `generate` method) and that we can in fact support all `xxxForCausalLM` architectures as claimed above.

From what I can see, as long as the model has a proper `generate` method, the PPOTrainer should work. A rough sketch of the first test is below.
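A minimal sketch of what such a test could look like (hypothetical names: the import path of `BaseTrainer` and the required-method list are assumptions, not the actual test suite):

```python
import pytest

from trl import PPOTrainer
from trl.trainer import BaseTrainer  # assumed import path

REQUIRED_METHODS = ["step", "generate", "log_stats"]  # assumed required interface


@pytest.mark.parametrize("trainer_cls", [PPOTrainer])
def test_trainer_implements_base_interface(trainer_cls):
    # Every trainer should inherit from BaseTrainer and expose the needed methods.
    assert issubclass(trainer_cls, BaseTrainer)
    for method in REQUIRED_METHODS:
        assert callable(getattr(trainer_cls, method, None)), f"{trainer_cls.__name__} is missing {method}"
```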
50
younesbelkada
2022-12-27T12:50:55
Closing in favor of https://github.com/lvwerra/trl/pull/58
50
lvwerra
2022-12-07T09:30:10
Thanks, I'll fix that!
48
lvwerra
2023-01-30T11:59:33
Should be fixed with #80.
48
lvwerra
2022-12-07T09:30:28
Thanks, I'll fix that! 🤗
47
lvwerra
2022-12-21T10:29:36
Closed with #49
47
Alymostafa
2022-11-18T03:48:50
Try working in a new env and installing the transformers library again. Also, make sure pyarrow is installed and can be imported.
46
lvwerra
2022-12-07T09:31:26
This seems like an issue with the `tokenizers` library. Can you try installing it on its own with `pip install tokenizers`?
46
lvwerra
2022-12-07T09:43:50
Thanks, the README is from `nbs/index.ipynb` so this is a limitation of `nbdev`. Might remove that in the next iteration.
45
JulesGM
2022-12-07T16:46:43
Weird that nbdev doesn't do that; maybe sending a pull request their way would be good.
45
lvwerra
2023-01-30T12:05:38
Interesting, you might be right! I'll have a look at this :)
44
lvwerra
2023-02-07T15:09:29
Should be fixed now :)
44
clam004
2022-08-30T22:23:27
So I did some research on my own and basically my first 2 questions can be answered by looking at the huggingface transformers repository: https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_utils.py
43
danjohnvelasco
2022-09-08T01:32:44
> So I did some research on my own and basically my first 2 questions can be answered by looking at the huggingface transformers repository: https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_utils.py Hi @clam004, do you mind explaining your answer/understanding on why they do it? Thanks!
43
clam004
2022-12-14T22:00:52
@danjohnvelasco as long as you use the same name `self.lm_head`, when you load the pretrained model from the dictionary of parameters, these linear parameters will be replaced with the trained ones. So that's why the model still works (question 2). Also, regarding question 3, I suspect it somehow doesn't matter, although I'm not sure why, because when I run this repo without the dropout layer it behaves the same, as expected.
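A minimal sketch of why the attribute name matters (simplified, not the actual TRL class):

```python
import torch.nn as nn
from transformers import GPT2Model, GPT2PreTrainedModel


class GPT2HeadWithValueSketch(GPT2PreTrainedModel):
    def __init__(self, config):
        super().__init__(config)
        # Same attribute names as GPT2LMHeadModel, so from_pretrained() can match
        # them against the pretrained checkpoint and fill in the trained weights.
        self.transformer = GPT2Model(config)
        self.lm_head = nn.Linear(config.n_embd, config.vocab_size, bias=False)
        # New attribute: nothing in the checkpoint matches it, so it stays
        # randomly initialized (and is reported as newly initialized on load).
        self.v_head = nn.Linear(config.n_embd, 1)
        self.post_init()  # standard transformers weight-init / tying hook


model = GPT2HeadWithValueSketch.from_pretrained("gpt2")
```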
43
lvwerra
2023-01-13T15:33:57
Regarding 3 I agree and we moved the dropout before the linear layer in https://github.com/lvwerra/trl/pull/70.
43
lvwerra
2022-12-07T09:41:54
Soon? :P
42
lvwerra
2022-12-30T10:03:49
Closing this in favour of #66. Let me know if you had something else in mind and we can re-open :)
42
MichaelKarpe
2023-01-08T17:30:25
Hey, sorry for not coming back sooner on this with an explanation; I wanted to provide evidence that the proposed changes were necessary, as it was a change in the requirements. If I remember well, I needed `transformers>=4.15.0` and I couldn't make it work without `wandb>=0.12.17`. The `wandb>=0.12.17` change could eventually still be needed; however, it is not urgent to make this change, as an installation from scratch should install the most recent version. I will eventually check later that the project cannot work without `wandb>=0.12.17`, but this time I am not providing a timeline on when I'll check this! :slightly_smiling_face:
42
parshinsh
2022-09-19T15:59:20
I confirm that this issue happens. I'm facing the same problem with my own task. Can anyone help with this?
41
Alymostafa
2022-10-31T03:12:31
same problem here with a longer sequence. @vblagoje @lvwerra
41
Alymostafa
2022-11-18T03:45:49
@adhitya-synth I used the same configuration as you mentioned, and I found that when the batch size is small it happens as you said, but with a larger batch size, as in the notebook, the reward increases.
41
hdvvip
2022-11-18T03:58:30
Recently, I came across OpenAI's InstructGPT, which is an upgraded version of GPT-3 that has been trained with reinforcement learning. The reinforcement learning algorithm they used for training InstructGPT is PPO, which is implemented in this GitHub repository. Related to the problem that the reward is stagnant or going down, I think even OpenAI (the fathers of PPO) faced the same issue. Please see Figure 13 below. "As shown in Figure 13, the reward saturates after the initial 400k examples of training." ![Selection_1566](https://user-images.githubusercontent.com/42698038/202613363-c47bc6c4-cc30-45f6-b8de-30d436a6b687.png) Here is the InstructGPT paper: https://arxiv.org/pdf/2203.02155.pdf
41
hdvvip
2022-11-18T04:01:20
Thus, based on the OpenAI experiments in the InstructGPT paper, I think it depends on the dataset you used to train your model. In OpenAI's case, even with the best implementation of PPO, they still failed to improve the rewards when training GPT-3 with PPO on the FLAN and T0 datasets. ![Selection_1567](https://user-images.githubusercontent.com/42698038/202613922-a35816a5-a367-40a6-a6bf-72ca71c04322.png)
41
hdvvip
2022-11-18T04:20:16
Thus, if you used PPO on your task and it doesn't work, don't be surprised! Like I said above, on some tasks PPO will work; on some tasks it won't.
41
Alymostafa
2022-11-18T05:12:10
Thanks for the clarification. But I was pointing out that, based on his observations, what he mentioned happens when the batch size is small; when I increased the batch size, I was able to reproduce the same results as in the notebook.
41
hdvvip
2022-11-18T05:46:25
Well, I think we have some misunderstanding here. I didn't specifically mention you in my post. I just wanted to explain to everyone here that, depending on your task, PPO may or may not work. So it's not your fault when PPO fails on your NLP task. Everyone here has different tasks, so my answer didn't have anything to do with batch size. BTW, OpenAI used a batch size of 128 but still failed.
41
lvwerra
2022-12-07T09:37:03
Thanks for the discussion here. Indeed, it can depend a lot on the hyperparameters as well as the task. Great that you found that increasing the batch size works. I think this is still a very underexplored area!
41
leoribeiro
2023-03-22T21:32:32
@adhitya-synth I face the same problem when using longer text. Did you figure out a way to overcome this?
41
hdvvip
2022-07-18T04:39:23
OK, I understood: you used the [logprob](https://github.com/lvwerra/trl/blob/4fe9988eb8adf0227c26432f8eb3e57a66556350/trl/ppo.py#L156) of the current network as theta_old: `train_stats = self.train_minibatch(logprobs[idx].unsqueeze(0), values[idx].unsqueeze(0), rewards[idx].unsqueeze(0), queries[idx].unsqueeze(0), responses[idx].unsqueeze(0), torch.cat([queries[idx], responses[idx]]).unsqueeze(0))`. This works similarly to updating theta_old after every iteration.
40
Alymostafa
2022-11-18T03:46:38
What is the value of the batch size you used?
38
lvwerra
2023-01-13T15:29:37
See #41
38
lvwerra
2022-12-07T09:44:06
Will have a look!
37
22Mukesh22
2022-12-22T05:46:46
Hi @lvwerra, any fix for the above error? I was running the notebook '04-gpt2-sentiment-ppo-training.ipynb' for the first time and received a KeyError when running the training loop section. It was in this line: `rewards = torch.tensor([output[1]["score"] for output in pipe_outputs]).to(device)`. I presume it is safe to omit the `[1]`? `rewards = torch.tensor([output["score"] for output in pipe_outputs]).to(device)`
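For context, whether the `[1]` is needed depends on how the pipeline is called; a small sketch (the model name and example scores are illustrative, based on the notebook's sentiment model):

```python
from transformers import pipeline

sentiment_pipe = pipeline("sentiment-analysis", model="lvwerra/distilbert-imdb")

# Default call: one dict per input, so output["score"] is the right access.
print(sentiment_pipe(["this movie was great"]))
# e.g. [{'label': 'POSITIVE', 'score': 0.98}]

# With all class scores returned: one list of dicts per input, so
# output[1]["score"] picks the second label (POSITIVE for this model).
print(sentiment_pipe(["this movie was great"], return_all_scores=True))
# e.g. [[{'label': 'NEGATIVE', 'score': 0.02}, {'label': 'POSITIVE', 'score': 0.98}]]
```

So dropping the `[1]` is only safe if the pipeline does not return all scores; otherwise it changes which score you read.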
37
lvwerra
2023-01-30T12:06:05
It should be fixed now!
37
lvwerra
2022-05-15T16:13:36
Also this PR finally fixes the tests.
35
lvwerra
2022-05-15T15:58:01
This should in principle be possible; maybe it needs some modifications to the `PPOTrainer`, but you can probably treat the decoder of an encoder-decoder architecture such as BART or T5 like the GPT-2 decoder. This was also requested in #13 and #23. Feel free to open a PR if you have a working solution!
33
lvwerra
2022-12-07T09:40:34
You should be using the same class to load the model, e.g. `GPT2HeadWithValueModel` or `AutoModelForCausalLM` (although I haven't tested the latter). `AutoModel` will load the model without the LM head.
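A quick sketch of the difference (assuming a GPT-2 checkpoint):

```python
from transformers import AutoModel, AutoModelForCausalLM

base = AutoModel.from_pretrained("gpt2")           # GPT2Model: hidden states only, no LM head
lm = AutoModelForCausalLM.from_pretrained("gpt2")  # GPT2LMHeadModel: includes the LM head for logits

print(type(base).__name__)  # GPT2Model
print(type(lm).__name__)    # GPT2LMHeadModel
```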
32