user | created_at | body | issue_number |
---|---|---|---|
HuggingFaceDocBuilderDev | 2025-01-03T09:44:45 | The docs for this PR live [here](/static-proxy?url=https%3A%2F%2Fmoon-ci-docs.huggingface.co%2Fdocs%2Ftrl%2Fpr_2540). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,540 |
faaany | 2025-01-03T02:37:30 | @qgallouedec @lewtun @yao-matrix | 2,533 |
yiyepiaoling0715 | 2024-12-30T08:03:56 | ![image](https://github.com/user-attachments/assets/382ffa50-f3c6-4cd9-aabd-27e882409ed3)
| 2,532 |
qgallouedec | 2024-12-30T14:49:29 | > * [x] Any code provided is minimal, complete, and reproducible ([more on MREs](https://docs.github.com/en/get-started/writing-on-github/working-with-advanced-formatting/creating-and-highlighting-code-blocks))
Can you please minimise your code? It seems like the error occurs at generation; what is the input of the model here?
```
| | 2024-12-30 10:53:44.559 | [rank4]: File "/opt/conda/lib/python3.11/site-packages/transformers/generation/utils.py", line 3254, in _sample |
| | 2024-12-30 10:53:44.559 | [rank4]: outputs = model_forward(**model_inputs, return_dict=True) |
```
Can you reproduce the error without all the training logic? | 2,532 |
yiyepiaoling0715 | 2024-12-30T04:55:00 | Same question, how to resolve this? | 2,529 |
HuggingFaceDocBuilderDev | 2024-12-28T13:27:00 | The docs for this PR live [here](/static-proxy?url=https%3A%2F%2Fmoon-ci-docs.huggingface.co%2Fdocs%2Ftrl%2Fpr_2527). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,527 |
August-murr | 2024-12-28T06:35:20 | I recommend using GitHub Actions since they run the tests more reliably. Just enable it on your fork, push your changes, and it’ll automatically trigger the tests. | 2,524 |
AMindToThink | 2024-12-28T19:48:24 | Does this mean that my environment is not set up incorrectly? | 2,524 |
AMindToThink | 2024-12-29T03:05:06 | Thank you, it took a while to figure out, but the tests that were triggered when I made an empty .py file in trl/trl worked. It's somewhat bothersome that it tries and fails to post the results to Slack, but the tests themselves pass.
`Error: Need to provide at least one botToken or webhookUrl`
I would appreciate it if the [contributing](https://github.com/huggingface/trl/blob/main/CONTRIBUTING.md) document explained that the tests may not run properly locally and are auto-run by GitHub when changes are pushed to main.
My workflow will be:
1. Make changes to a branch of my fork.
2. When I want to test, I'll merge my branch into main.
3. GitHub will run the tests.
4. They'll fail.
5. If on inspection the failure is because of the Slack upload attempt, then everything is fine.
6. If on inspection there was an error before the Slack upload attempt, then there's a problem with my code.
7. If my code is fine and my feature is ready, I can make a pull request. | 2,524 |
qgallouedec | 2024-12-29T10:19:28 | Which tests fail locally? | 2,524 |
AMindToThink | 2024-12-30T19:00:10 | Oddly, it says 6 failed when I only see 5.
I'm on this commit:
`commit aed5da580e9fcba6517460daf65106bc42fb6167 (upstream/main, origin/sac, sac)
Author: Quentin Gallouédec <[email protected]>
Date: Sun Dec 22 12:44:07 2024 +0100`
` 📦 Packing documentation (#2503)`
These are the failures:
```
[gw2] FAILED tests/test_dpo_trainer.py::DPOTrainerTester::test_dpo_lora_bf16_autocast_llama
[gw11] FAILED tests/test_gkd_trainer.py::GKDTrainerTester::test_gkd_trainer
[gw12] FAILED tests/test_callbacks.py::WinRateCallbackTester::test_basic
[gw11] FAILED tests/test_peft_models.py::PeftModelTester::test_create_bnb_peft_model_from_config
[gw15] FAILED tests/test_xpo_trainer.py::TestXPOTrainer::test_training_with_peft | 0/50 [00:00<?, ?it/s]
================== 6 failed, 345 passed, 25 skipped, 242 warnings, 45 rerun in 113.62s (0:01:53) ===================
```
| 2,524 |
umbilnm | 2024-12-27T09:11:21 | Fixes #2400 | 2,521 |
umbilnm | 2024-12-29T13:33:30 | @qgallouedec Hello, can you merge? Or is something else needed from me? | 2,521 |
HuggingFaceDocBuilderDev | 2024-12-26T19:07:53 | The docs for this PR live [here](/static-proxy?url=https%3A%2F%2Fmoon-ci-docs.huggingface.co%2Fdocs%2Ftrl%2Fpr_2520). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,520 |
oliveiraeliel | 2024-12-28T02:19:22 | Hi, I have the same question as you do.
I think there must be some easy way to simply write a reward function as an `nn.Module`, so we don't have to refactor anything, but I haven't tried it yet.
I also think that `PPOTrainer` should accept a `custom_get_reward_function` as an optional parameter. That way anyone could define their own reward function, which would be a clean solution. | 2,518 |
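A minimal sketch of the `nn.Module` idea mentioned above, assuming a hypothetical `score_fn` that maps a decoded text to a float; this is only an illustration and is not guaranteed to match the reward-model interface `PPOTrainer` expects.
```python
import torch
from torch import nn


class FunctionRewardModel(nn.Module):
    """Wrap an arbitrary Python scoring function so it looks like a reward module."""

    def __init__(self, score_fn, tokenizer):
        super().__init__()
        self.score_fn = score_fn  # hypothetical: text -> float
        self.tokenizer = tokenizer

    def forward(self, input_ids, attention_mask=None, **kwargs):
        # Decode the generated sequences and score them one by one.
        texts = self.tokenizer.batch_decode(input_ids, skip_special_tokens=True)
        scores = [self.score_fn(text) for text in texts]
        return torch.tensor(scores, dtype=torch.float32, device=input_ids.device)
```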
nityadav | 2024-12-29T19:24:39 | @yananchen1989 Thanks for posting this, as I was stuck with a similar issue (but for `OnlineDPOTrainer`). The easiest workaround for me was to subclass the trainer class (`OnlineDPOTrainer`) and override `training_step` with my custom `get_reward` logic, with the rest of the implementation the same as in the original method. | 2,518 |
August-murr | 2024-12-30T18:28:29 | @yananchen1989 @oliveiraeliel @nityadav @hwhyyds @schmidtj3
This has been a recurring question, so before implementing a solution, I would like to ask you all for examples of when you would need this feature so that we can think of a good solution. | 2,518 |
yananchen1989 | 2024-12-30T18:51:22 | Correct me if I am wrong.
I would like to know the primary motivation for rewriting DPO from the older version to the current unified-Trainer version. Maybe better efficiency?
I understand that recent TRL versions want to unify the pipeline in a neater, more organized manner across these different RL methods, where Trainer is the pivotal module: you kick off `trainer.train()` and you're all set.
So for methods like PPO where a reward module is needed, it is passed directly into the trainer, while for, say, DPO or SFT there is no provision for a reward module.
However, this can cause excessive encapsulation, since it is hard to modularize the reward module.
The core reason is that in practical cases the reward module can be of any form, not just a single torch.nn module that scores the whole output. The reward module may be a mixture, may depend on external parameters or on the prompt, and, most importantly, it may not be able to score the PPO trainer's outputs in batch mode.
Anyway, the flexibility is significantly reduced.
Although, as you know, the current unified pipeline works fine for other methods such as DPO, since they do not have these reward concerns and the reward module is implicitly expressed within the algorithm.
In my view, there is no need to rigidly transfer these RL methods into a unified training framework.
Please advise. | 2,518 |
August-murr | 2024-12-31T06:52:17 | Ultimately, TRL is a Hugging Face library built on top of Transformers and is part of the Hugging Face ecosystem. If the Trainer does limit flexibility, then Transformers will need to adapt; otherwise, we will have to maintain a much larger and more complex codebase.
We'll come up with a way to add these features and prepare a PR soon! | 2,518 |
August-murr | 2024-12-31T06:52:44 | @qgallouedec, do you want to comment?
| 2,518 |
qgallouedec | 2024-12-31T07:30:41 | Maybe having a `reward_func` arg of type `Callable` is an option.
Alternatively, relaxing the type of `reward_model` to accept any `Callable` is also an option. But given that a custom reward func won't return the same type/shape as a proper `reward_model`, I'm a bit afraid that it would require overcomplicated logic.
In any case, I believe the best approach is to discuss around a PR, if anyone is willing to propose their approach. | 2,518 |
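For illustration, a callable of the kind suggested above might look like the following toy sketch; the signature (`prompts`, `completions`) is an assumption, not an existing TRL API.
```python
def reward_func(prompts: list[str], completions: list[str]) -> list[float]:
    """Toy reward: favor short completions that end with proper punctuation."""
    rewards = []
    for completion in completions:
        length_penalty = -0.01 * len(completion.split())
        bonus = 1.0 if completion.strip().endswith((".", "!", "?")) else 0.0
        rewards.append(bonus + length_penalty)
    return rewards
```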
yananchen1989 | 2024-12-31T12:57:59 | i hear u. thanks | 2,518 |
dawidm | 2024-12-27T20:40:20 | Update: this approach (PR #2516) introduces another problem, because incrementing `self.state.global_step` by more than 1 requires parameters like `logging_steps` to be divisible by the value of the increment. Solutions for this are:
1. Require `logging_steps` etc. to be divisible by `args.num_mini_batches * args.num_ppo_epochs`.
2. Change the convention for what `step` means in RLOO - don't multiply `self.state.max_steps` by `args.num_mini_batches * args.num_ppo_epochs` (making `step` an equivalent of `episode`).
I prefer the second one because it's simpler, but I'd appreciate comments on this. I'll update the PR.
edit: 2. is also consistent with the documentation:
> episode: The current global step or episode count in the training process. | 2,515 |
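A small illustration of the divisibility problem described above, with made-up numbers: when `global_step` jumps by `num_mini_batches * num_ppo_epochs` at once, any `logging_steps` value that is not divisible by that increment is silently skipped over.
```python
# Hypothetical values, for illustration only.
num_mini_batches, num_ppo_epochs = 2, 2
increment = num_mini_batches * num_ppo_epochs  # global_step grows by 4 per update
logging_steps = 6                              # not divisible by the increment

global_steps = range(increment, 101, increment)
logged = [step for step in global_steps if step % logging_steps == 0]
print(logged)  # [12, 24, 36, ...] -> logging effectively happens every 12 steps, not every 6
```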
dawidm | 2024-12-29T13:12:02 | Of course there's also a third solution: update `global_step` after the actual optimizer step (inside the minibatch PPO loop), but logging would also have to be moved there in that case. This keeps the most "correct" (I think) convention of steps, but it requires the most changes. | 2,515 |
SwayamInSync | 2024-12-21T19:58:52 | This was encountered with `SFTTrainer`; if this is a general issue with the `Trainer` from transformers, it can be relocated there. | 2,514 |
HuggingFaceDocBuilderDev | 2024-12-21T12:12:26 | The docs for this PR live [here](/static-proxy?url=https%3A%2F%2Fmoon-ci-docs.huggingface.co%2Fdocs%2Ftrl%2Fpr_2513). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,513 |
HuggingFaceDocBuilderDev | 2024-12-21T00:10:35 | The docs for this PR live [here](/static-proxy?url=https%3A%2F%2Fmoon-ci-docs.huggingface.co%2Fdocs%2Ftrl%2Fpr_2512). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,512 |
HuggingFaceDocBuilderDev | 2024-12-20T23:42:15 | The docs for this PR live [here](/static-proxy?url=https%3A%2F%2Fmoon-ci-docs.huggingface.co%2Fdocs%2Ftrl%2Fpr_2511). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,511 |
HuggingFaceDocBuilderDev | 2024-12-20T21:43:27 | The docs for this PR live [here](/static-proxy?url=https%3A%2F%2Fmoon-ci-docs.huggingface.co%2Fdocs%2Ftrl%2Fpr_2510). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,510 |
HuggingFaceDocBuilderDev | 2024-12-20T16:10:32 | The docs for this PR live [here](/static-proxy?url=https%3A%2F%2Fmoon-ci-docs.huggingface.co%2Fdocs%2Ftrl%2Fpr_2509). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,509 |
metric-space | 2024-12-21T21:30:33 | @aivolcano There is a notebook that is related to this. The updated notebook is here: https://github.com/huggingface/trl/blob/main/examples/notebooks/best_of_n.ipynb | 2,508 |
aivolcano | 2024-12-27T08:53:25 | thank u so much
| 2,508 |
HuggingFaceDocBuilderDev | 2024-12-20T11:30:43 | The docs for this PR live [here](/static-proxy?url=https%3A%2F%2Fmoon-ci-docs.huggingface.co%2Fdocs%2Ftrl%2Fpr_2507). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,507 |
Mecoli1219 | 2024-12-20T06:46:11 | Wait for https://github.com/linkedin/Liger-Kernel/pull/492 | 2,506 |
HuggingFaceDocBuilderDev | 2025-01-03T16:00:20 | The docs for this PR live [here](/static-proxy?url=https%3A%2F%2Fmoon-ci-docs.huggingface.co%2Fdocs%2Ftrl%2Fpr_2506). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,506 |
kashif | 2025-01-03T19:54:41 | needs: https://github.com/linkedin/Liger-Kernel/pull/510 | 2,506 |
metric-space | 2024-12-21T21:33:11 | @nguyenhoa-uit I can help out with this as this was code I wrote more than a year ago. Mind you, I'll be very very slow. Let me take a look | 2,505 |
metric-space | 2024-12-23T09:46:31 | @nguyenhoa-uit could you try this bit : https://github.com/huggingface/trl/blob/main/trl/trainer/ddpo_config.py#L64 ? | 2,505 |
nguyenhoa-uit | 2024-12-25T02:18:37 |
> @nguyenhoa-uit could you try this bit : https://github.com/huggingface/trl/blob/main/trl/trainer/ddpo_config.py#L64 ?
When I used resume-from-checkpoint in the config file, I ran it and hit a bug at https://github.com/huggingface/trl/blob/main/trl/trainer/ddpo_trainer.py#L541C20-L541C42
When I bypassed it with a try/except, it did not use the parameters from the checkpoint but those of the base model.
| 2,505 |
ggbetz | 2024-12-20T15:19:13 | It seems @philschmid has an implementation here: https://github.com/philschmid/deep-learning-pytorch-huggingface/blob/391f19ba06c128a2a290b3bdcb717ad6ff794fd7/training/scripts/run_sft.py#L54-L77 and the question is maybe just what's the cleanest way to integrate this natively in trl? | 2,504 |
anakin87 | 2024-12-21T16:25:29 | This would be great and would prevent users from making mistakes in the manual implementation of this method: for example, [the code for integration with other libraries reported in the official repo](https://github.com/cognitivecomputations/spectrum?tab=readme-ov-file) has some problems. In contrast, the simple implementation in [my tutorial](https://huggingface.co/blog/anakin87/spectrum) and Philipp's code should be correct.
BTW, Spectrum is quite agnostic with respect to training method (SFT, DPO...): the [models by VAGO solutions](https://huggingface.co/VAGOsolutions) show that it works well for DPO too.
LMK what's the best way to proceed and how to help with this integration. | 2,504 |
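For reference, the selective-unfreezing step that both linked implementations perform boils down to something like the sketch below; the YAML key name (`unfrozen_parameters`) and file layout follow Philipp's script and the Spectrum repo, but treat them as assumptions here.
```python
import re

import yaml


def setup_spectrum_params(model, spectrum_config_path: str):
    """Freeze all parameters except those whose names match the regex patterns
    listed under `unfrozen_parameters` in a Spectrum YAML file (key name assumed)."""
    with open(spectrum_config_path) as f:
        patterns = yaml.safe_load(f)["unfrozen_parameters"]

    for name, param in model.named_parameters():
        param.requires_grad = any(re.search(pattern, name) for pattern in patterns)

    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    print(f"Trainable parameters after Spectrum selection: {trainable}")
    return model
```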
HuggingFaceDocBuilderDev | 2024-12-19T10:50:45 | The docs for this PR live [here](/static-proxy?url=https%3A%2F%2Fmoon-ci-docs.huggingface.co%2Fdocs%2Ftrl%2Fpr_2503). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,503 |
HuggingFaceDocBuilderDev | 2024-12-19T10:13:19 | The docs for this PR live [here](/static-proxy?url=https%3A%2F%2Fmoon-ci-docs.huggingface.co%2Fdocs%2Ftrl%2Fpr_2502). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,502 |
qgallouedec | 2024-12-23T12:38:06 | Can you screenshot a result? | 2,501 |
HuggingFaceDocBuilderDev | 2024-12-23T12:41:22 | The docs for this PR live [here](/static-proxy?url=https%3A%2F%2Fmoon-ci-docs.huggingface.co%2Fdocs%2Ftrl%2Fpr_2501). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,501 |
yaricom | 2024-12-23T12:43:48 | Sure, here is a screenshot from my account at Comet.
<img width="2106" alt="Screenshot 2024-12-23 at 14 42 20" src="https://github.com/user-attachments/assets/69629fdb-77de-4a2d-b1d2-087889d96a4c" />
| 2,501 |
yaricom | 2024-12-23T12:45:02 | And this is a DataFrame encoded as CSV.
[game_log.csv](https://github.com/user-attachments/files/18229453/game_log.csv)
| 2,501 |
yaricom | 2024-12-23T13:08:10 | The script I was using to test DPO trainer integration.
```python
import os
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer
os.environ["TOKENIZERS_PARALLELISM"] = "false"
def main():
output_dir = "models/minimal/dpo_my"
model_id = "trl-internal-testing/tiny-Qwen2ForCausalLM-2.5"
# model_id = "Qwen/Qwen2-0.5B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_id)
ref_model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token
training_args = DPOConfig(
output_dir=output_dir,
per_device_train_batch_size=2,
max_steps=1,
remove_unused_columns=False,
gradient_accumulation_steps=8,
precompute_ref_log_probs=False,
learning_rate=5.0e-7,
eval_strategy="steps",
eval_steps=1,
report_to="all",
generate_during_eval=True,
max_length=1024,
)
# dummy_dataset = load_dataset("trl-internal-testing/zen", "standard_preference")
dummy_dataset = load_dataset("trl-lib/ultrafeedback_binarized", "default")
dummy_dataset["train"] = dummy_dataset["train"].select(range(20))
dummy_dataset["test"] = dummy_dataset["test"].select(range(40))
trainer = DPOTrainer(
model=model,
ref_model=ref_model,
args=training_args,
processing_class=tokenizer,
train_dataset=dummy_dataset["train"],
eval_dataset=dummy_dataset["test"],
)
trainer.train()
trainer.evaluate()
if __name__ == "__main__":
main()
```
Do not forget to set the `COMET_API_KEY` environment variable when executing it.
| 2,501 |
asparius | 2024-12-18T13:50:40 | trl uses accelerate, which supports FSDP. However, there is no recommended config of FSDP in the repo, unlike DeepSpeed, so you could refer to this [page](https://huggingface.co/docs/accelerate/en/usage_guides/fsdp) for FSDP. All in all, DPO and trl support FSDP, but not for online algos like PPO #1726. | 2,500 |
yingtongxiong | 2024-12-19T05:57:01 | > trl uses accelerate which supports FSDP. However there is no recommeded config of FSDP in the repo unlike DeepSpeed, so you could refer to this [page](https://huggingface.co/docs/accelerate/en/usage_guides/fsdp) for FSDP. All in all, DPO and trl supports FSDP but not for online algo like PPO #1726.
@asparius Thank you very much | 2,500 |
HuggingFaceDocBuilderDev | 2024-12-17T23:16:28 | The docs for this PR live [here](/static-proxy?url=https%3A%2F%2Fmoon-ci-docs.huggingface.co%2Fdocs%2Ftrl%2Fpr_2499). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,499 |
HuggingFaceDocBuilderDev | 2024-12-17T19:12:30 | The docs for this PR live [here](/static-proxy?url=https%3A%2F%2Fmoon-ci-docs.huggingface.co%2Fdocs%2Ftrl%2Fpr_2498). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,498 |
qgallouedec | 2024-12-17T22:30:39 | Yeah! thanks @sergiopaniego 🤘 | 2,498 |
asparius | 2024-12-18T14:14:35 | This has been noted previously in #2281. I believe this was introduced in PPOv2, which was a replication of the OpenAI TL;DR paper; that code also contains this `INVALID_LOGPROB = 1.0`, which does not break training because it cancels out in the KL reward. Perhaps @vwxyzjn can tell us why this was used instead of the masked_mean version. | 2,496 |
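For reference, the `masked_mean`-style alternative mentioned above just averages over the valid (non-padded) positions instead of writing a sentinel value into them; a minimal version, not necessarily identical to TRL's utility:
```python
import torch


def masked_mean(values: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Mean of `values` over positions where `mask` is 1, ignoring padded positions."""
    return (values * mask).sum() / mask.sum().clamp(min=1)
```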
Mecoli1219 | 2024-12-20T05:30:02 | Hi, I want to check that SimPO is in CPO instead of DPO, right? | 2,495 |
qgallouedec | 2024-12-20T11:01:35 | > Hi, I want to check that SimPO is in CPO instead of DPO, right?
Correct! Message modified | 2,495 |
HuggingFaceDocBuilderDev | 2024-12-17T08:16:37 | The docs for this PR live [here](/static-proxy?url=https%3A%2F%2Fmoon-ci-docs.huggingface.co%2Fdocs%2Ftrl%2Fpr_2494). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,494 |
qgallouedec | 2024-12-17T11:19:51 | Probably simpler:
```python
from huggingface_hub import ModelCard
model_card = ModelCard("""
---
tags: [trl]
---
# Some title
""")
if script_args.push_to_hub:
model_card.push_to_hub(script_args.repo_id, repo_type="dataset")
```
| 2,491 |
August-murr | 2024-12-17T12:15:50 | Well, that's one way to overengineer it
I also opened an [issue on datasets](https://github.com/huggingface/datasets/issues/7336) to clarify.
I assume the next step is to add this to all the dataset scripts. | 2,491 |
qgallouedec | 2024-12-17T13:14:11 | Very good like this | 2,491 |
HuggingFaceDocBuilderDev | 2024-12-25T17:41:32 | The docs for this PR live [here](/static-proxy?url=https%3A%2F%2Fmoon-ci-docs.huggingface.co%2Fdocs%2Ftrl%2Fpr_2491). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,491 |
August-murr | 2024-12-29T14:01:58 | it doesn't add all the details requested in the issue #2470 but It's an improvement | 2,491 |
qgallouedec | 2024-12-16T12:14:28 | Thanks for reporting, please provide a *minimal* code/steps to reproduce this. | 2,490 |
sagie-dekel | 2024-12-16T12:48:53 | pipeline.zip (edit by maintainer: remove link)
thanks @qgallouedec
The attached files constitute a pipeline that uses the DPOTrainer with DeepSpeed.
I am sorry that it isn't minimal, but I don't see an easy way to reproduce it otherwise. If you prefer, I can write out the main steps. | 2,490 |
qgallouedec | 2024-12-16T13:52:12 | Sorry, but we don't accept zip files. The easy way to produce an MRE is to go line by line: if the error remains when you remove a line, you can discard that line. When there is no line left to remove, you have your MRE. | 2,490 |
sagie-dekel | 2024-12-16T16:16:47 | Sorry @qgallouedec, here is a minimal version of my pipeline:
```python
from copy import deepcopy

import pandas as pd
import torch
from datasets import Dataset
from torch import optim
from transformers import AutoTokenizer, AutoModelForCausalLM
from trl import DPOConfig, DPOTrainer

model_RLRF_name_or_path = "meta-llama/Llama-3.1-8B-Instruct"
model_RLRF = AutoModelForCausalLM.from_pretrained(model_RLRF_name_or_path, torch_dtype=torch.float32)
tokenizer_RLRF = AutoTokenizer.from_pretrained(model_RLRF_name_or_path)
tokenizer_RLRF.add_special_tokens({'pad_token': tokenizer_RLRF.eos_token})
tokenizer_RLRF.padding_side = 'left'

DPO_config = DPOConfig(
    output_dir="dpo_output",
    report_to='tensorboard',
    logging_first_step=True,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=1,
    sync_ref_model=True,
    ref_model_mixup_alpha=0.6,
    ref_model_sync_steps=256,
    bf16=True,
)

# Create reference model as a frozen copy of the policy model:
parameter_names = [n for n, _ in model_RLRF.named_parameters()]
ref_model = deepcopy(model_RLRF)
# if no layers are shared, return copy of model
for param_name in parameter_names:
    param = ref_model.get_parameter(param_name)
    param.requires_grad = False
ref_model.eval()

# Set optimizer for RLRF
optimizer_RLRF = optim.AdamW(filter(lambda param: param.requires_grad, model_RLRF.parameters()),
                             lr=1.41e-5)

train_dataset = pd.read_csv("perfernces_dataset_from_ranker_train_queries_and_baseline_doc.csv")
train_dataset = Dataset.from_pandas(train_dataset)

dpo_trainer = DPOTrainer(model=model_RLRF, args=DPO_config, processing_class=tokenizer_RLRF, ref_model=ref_model,
                         optimizers=(optimizer_RLRF, None), train_dataset=train_dataset)
dpo_trainer.train()
```
the loaded data file (train_dataset) is:
[perfernces_dataset_from_ranker_train_queries_and_baseline_doc.csv](https://github.com/user-attachments/files/18153383/perfernces_dataset_from_ranker_train_queries_and_baseline_doc.csv) | 2,490 |
qgallouedec | 2024-12-16T11:04:09 | Good point, given that for other trainers (like DPO), it's a truncation.
In fact, the best thing would be to have a common behavior for all trainers (truncation), but the urgent thing is to clarify the documentation. | 2,488 |
HuggingFaceDocBuilderDev | 2024-12-16T09:16:23 | The docs for this PR live [here](/static-proxy?url=https%3A%2F%2Fmoon-ci-docs.huggingface.co%2Fdocs%2Ftrl%2Fpr_2487). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,487 |
Ciao-CA | 2024-12-20T07:32:59 | I have the same problem | 2,486 |
karlcuinju | 2025-01-02T03:11:41 | Any solution now? | 2,486 |
HuggingFaceDocBuilderDev | 2024-12-15T19:39:31 | The docs for this PR live [here](/static-proxy?url=https%3A%2F%2Fmoon-ci-docs.huggingface.co%2Fdocs%2Ftrl%2Fpr_2485). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,485 |
HuggingFaceDocBuilderDev | 2024-12-15T18:22:29 | The docs for this PR live [here](/static-proxy?url=https%3A%2F%2Fmoon-ci-docs.huggingface.co%2Fdocs%2Ftrl%2Fpr_2484). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,484 |
HuggingFaceDocBuilderDev | 2024-12-15T16:35:22 | The docs for this PR live [here](/static-proxy?url=https%3A%2F%2Fmoon-ci-docs.huggingface.co%2Fdocs%2Ftrl%2Fpr_2483). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,483 |
HuggingFaceDocBuilderDev | 2024-12-15T12:58:48 | The docs for this PR live [here](/static-proxy?url=https%3A%2F%2Fmoon-ci-docs.huggingface.co%2Fdocs%2Ftrl%2Fpr_2482). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,482 |
qgallouedec | 2024-12-15T15:34:21 | 2 questions/remarks:
- can you run a benchmark so that we can (1) quantify the improvement and (2) check that results with and without Liger are the same?
- we could have an additional tag for the Hub when a model is trained with Liger
| 2,482 |
qgallouedec | 2024-12-15T15:48:46 | I think we should bump liger version to v0.5 (it doesn't include the loss before), see https://github.com/linkedin/Liger-Kernel/releases/tag/v0.5.0 | 2,482 |
kashif | 2024-12-18T10:46:56 | waiting on https://github.com/linkedin/Liger-Kernel/pull/486 | 2,482 |
kashif | 2024-12-19T10:09:55 | waiting on https://github.com/huggingface/trl/pull/2502
| 2,482 |
qgallouedec | 2024-12-19T10:33:44 | @kashif can you share the curves once it's ready? | 2,482 |
kashif | 2024-12-29T14:45:28 | tests fail as they need: https://github.com/linkedin/Liger-Kernel/pull/503 | 2,482 |
kashif | 2024-12-15T09:31:55 | @hteague-qti so I wanted to get it working with this collator and then come back and make it more general after that.. so would you have a suggestion on what the next generalization could be? make it work for the SFT default collator?
| 2,481 |
hteague-qti | 2024-12-16T19:28:17 | I was thinking it could be made completely independent of the collator. First thing might be to warn users that even though they are providing a collator in the args, you are switching to a different one (for now).
Seems to me that trainer should not care about the data preprocessing or the collator, just the output logits, etc. Making it work with default collator in SFT would be fine. This one is quite common for language: DataCollatorForCompletionOnlyLM | 2,481 |
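For context, a typical `DataCollatorForCompletionOnlyLM` setup looks roughly like the sketch below; the model name, response template, and toy dataset are placeholders.
```python
from datasets import Dataset
from transformers import AutoTokenizer
from trl import DataCollatorForCompletionOnlyLM, SFTConfig, SFTTrainer

model_id = "Qwen/Qwen2-0.5B-Instruct"  # placeholder model
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Toy dataset whose text contains the response template.
train_dataset = Dataset.from_dict({
    "text": [
        "### Question: What is 2 + 2?\n### Answer: 4",
        "### Question: What is the capital of France?\n### Answer: Paris",
    ]
})

# Only tokens after "### Answer:" contribute to the loss.
collator = DataCollatorForCompletionOnlyLM(response_template="### Answer:", tokenizer=tokenizer)

trainer = SFTTrainer(
    model=model_id,
    args=SFTConfig(output_dir="sft_out", dataset_text_field="text", max_seq_length=512),
    train_dataset=train_dataset,
    data_collator=collator,
)
```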
hteague-qti | 2024-12-19T21:39:17 | btw, appreciate the response. | 2,481 |
HuggingFaceDocBuilderDev | 2024-12-14T21:45:10 | The docs for this PR live [here](/static-proxy?url=https%3A%2F%2Fmoon-ci-docs.huggingface.co%2Fdocs%2Ftrl%2Fpr_2480). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,480 |
August-murr | 2024-12-14T18:57:56 | Before adding it to all the trainers, what do you think of the overall structure? Is it okay to include the tools in each trainer configuration? | 2,479 |
qgallouedec | 2024-12-14T19:05:11 | Thanks for this addition!
Let's keep things as separate as possible, and keep this PR for DPO only.
The code as is looks good to me. The only question is: can this type (`Optional[list[Union[dict, Callable]]]`) be parsed? I'll try.
| 2,479 |
qgallouedec | 2024-12-14T19:27:17 | That's what I thought:
```python
from trl import DPOConfig, TrlParser
parser = TrlParser((DPOConfig,))
parser.parse_args_and_config()
```
```
$ python 2479.py --output_dir out --tools "{'type': 'function', 'function': {'name': 'multiply', 'description': 'A function that multiplies two numbers', 'parameters': {'type': 'object', 'properties': {'a': {'type': 'number', 'description': 'The first number to multiply'}, 'b': {'type': 'number', 'description': 'The second number to multiply'}}, 'required': ['a', 'b']}}}"
[...]
2479.py: error: argument --tools: invalid Union value: "{'type': 'function', 'function': {'name': 'multiply', 'description': 'A function that multiplies two numbers', 'parameters': {'type': 'object', 'properties': {'a': {'type': 'number', 'description': 'The first number to multiply'}, 'b': {'type': 'number', 'description': 'The second number to multiply'}}, 'required': ['a', 'b']}}}"
```
I'm not sure what the best way to handle it is right now; I'll sleep on it.
| 2,479 |
August-murr | 2024-12-15T08:51:45 | > Let's keep things as separate as possible, and keep this PR for DPO only.
A different PR for each trainer, then?
> can this type (`Optional[list[Union[dict, Callable]]]`) be parsed
Adding tools to the CLI would be quite complicated. It wouldn't be practical to pass all the tools through the CLI. My best guess is to read the functions from another source, like another script, if there's a request for it later. | 2,479 |
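To make the alternative concrete: in a Python script (rather than through the CLI), a tool can just be a regular function passed via the `tools` field this PR adds, along the lines of the sketch below; the function and config values are toy examples.
```python
from trl import DPOConfig


def multiply(a: float, b: float) -> float:
    """A function that multiplies two numbers.

    Args:
        a: The first number to multiply.
        b: The second number to multiply.
    """
    return a * b


# Callables are turned into JSON schemas by the chat-template machinery,
# so defining tools in code avoids quoting large JSON blobs on the command line.
training_args = DPOConfig(output_dir="dpo_out", tools=[multiply])
```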
August-murr | 2024-12-16T08:22:54 | Does this need anything else? Tests or docs? | 2,479 |
August-murr | 2024-12-25T13:16:53 | I also wanted to add it to `SFTTrainer` but it doesn't use `maybe_apply_chat_template` | 2,479 |
HuggingFaceDocBuilderDev | 2024-12-13T20:46:51 | The docs for this PR live [here](/static-proxy?url=https%3A%2F%2Fmoon-ci-docs.huggingface.co%2Fdocs%2Ftrl%2Fpr_2476). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,476 |
HuggingFaceDocBuilderDev | 2024-12-13T19:02:02 | The docs for this PR live [here](/static-proxy?url=https%3A%2F%2Fmoon-ci-docs.huggingface.co%2Fdocs%2Ftrl%2Fpr_2475). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,475 |
HuggingFaceDocBuilderDev | 2024-12-13T17:43:29 | The docs for this PR live [here](/static-proxy?url=https%3A%2F%2Fmoon-ci-docs.huggingface.co%2Fdocs%2Ftrl%2Fpr_2474). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,474 |
asparius | 2024-12-14T00:28:52 | It utilizes `self.model`, which is defined in [this line](https://github.com/huggingface/trl/blob/6d4ed070f1f53a87fb3cff2eb82a56db093bccc6/trl/trainer/rloo_trainer.py#L162). This approach is also adopted in `PPOTrainer`. I believe this is a deliberate nomenclature choice, designed to remain consistent across various preference learning frameworks without introducing the complexity of aligning with the diverse terminologies used in academic papers. | 2,472 |
qgallouedec | 2024-12-13T16:33:05 | Yes, that's a good point!
All datasets in [hf.co/trl-lib](https://huggingface.co/trl-lib) are taken from an original dataset. We should at least indicate this dataset in the readme with something like:
```
This dataset is a processed version of [openbmb/UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback) with this [script](https://github.com/huggingface/trl/blob/main/examples/datasets/ultrafeedback.py).
```
To do this, we should add to all script in https://github.com/huggingface/trl/blob/main/examples/datasets a model card that we push, like in https://github.com/huggingface/trl/blob/179ba5367181d9bd4bdaec70d50789b09754d04a/scripts/generate_tiny_models.py#L69-L97
We could also add the type/format of dataset with a link to the relevant section in this page of the documentation: https://huggingface.co/docs/trl/en/dataset_formats | 2,470 |
qgallouedec | 2024-12-13T16:44:51 | What you're describing sounds closer to _padding-free_ than packing. We have a (currently draft) PR for this: #2437.
Can you confirm that's what you're describing?
---
At this point I'm not even sure that packing for DPO makes sense. How do you ensure that you have as many chosen as rejected? How do you ensure they match? How do you handle partial sequences? | 2,469 |
zhc7 | 2024-12-13T17:16:15 | Hi, thank you for your response. I looked into the link you provided, and I think we are talking about the same thing. I used the word "packing" as in https://huggingface.co/blog/packing-with-FA2. "Packing" there actually means concatenating a fixed batch size of samples into one sequence and using `position_ids` to mark the boundaries, rather than packing to a fixed length, so there won't be the problems you mentioned. I've also briefly read https://huggingface.co/blog/mayank-mishra/padding-free-transformer; I think the ideas are the same, but I'm not sure how the latter is implemented. Maybe they are the same thing just with different names :)
I briefly went through the PR. I see it is trying to add `position_ids` to the whole process, so I guess we are talking about the same thing. | 2,469 |
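A small sketch of the `position_ids` convention being described: several samples concatenated into one row, with position ids restarting at each sample boundary, so no padding is needed; the token ids are toy values.
```python
import torch

# Three tokenized samples of different lengths (toy ids).
samples = [[101, 7, 8, 102], [101, 9, 102], [101, 3, 4, 5, 102]]

# Concatenate everything into a single sequence ...
input_ids = torch.tensor([tok for sample in samples for tok in sample]).unsqueeze(0)
# ... and mark the boundaries by restarting position_ids for each sample.
position_ids = torch.tensor([i for sample in samples for i in range(len(sample))]).unsqueeze(0)

print(input_ids.shape)   # torch.Size([1, 12])
print(position_ids[0])   # tensor([0, 1, 2, 3, 0, 1, 2, 0, 1, 2, 3, 4])
```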
qgallouedec | 2024-12-13T16:51:33 | That's a good point! Feel free to open a PR to fix this. I don't think adding a unittest for this is relevant. If possible, add plots (eg, with wandb) before/after to ensure that we aren't introducing a regression | 2,468 |
zhc7 | 2024-12-13T17:17:59 | Of course!
![image](https://github.com/user-attachments/assets/2da93fdf-a29d-41a1-974a-2b640e3a6ee6)
Here's a graph for the same training with and without the modification. You can see the pink line is a lot smoother, especially in the accuracy graph. My `per_device_batch_size` is 2, so the accuracy per device can only be 1, 0.5, or 0. | 2,468 |
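A minimal sketch of the kind of cross-process gathering being discussed, assuming an `accelerate` setup; it is illustrative, not the exact patch.
```python
import torch
from accelerate import Accelerator

accelerator = Accelerator()

# Per-device reward accuracies: with per_device_batch_size=2 each device's value is 0, 0.5 or 1.
chosen_rewards = torch.randn(2, device=accelerator.device)
rejected_rewards = torch.randn(2, device=accelerator.device)
accuracy = (chosen_rewards > rejected_rewards).float()

# Gather across all processes before averaging, so the logged metric
# reflects the whole global batch rather than a single device.
global_accuracy = accelerator.gather_for_metrics(accuracy).mean().item()
print(global_accuracy)
```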
qgallouedec | 2024-12-13T17:34:35 | Perfect! | 2,468 |