Collections


Collections including paper arxiv:2403.07691
Training
Collection · Nov 22, 2024
alignment_24_best
Collection · Oct 21, 2024
Zephyr ORPO
Models and datasets to align LLMs with Odds Ratio Preference Optimisation (ORPO). Recipes here: https://github.com/huggingface/alignment-handbook
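
For orientation, below is a minimal sketch of ORPO fine-tuning using the trl library's ORPOTrainer, in the spirit of the alignment-handbook recipes linked above. The model name, dataset, and hyperparameters are illustrative assumptions, not the exact artifacts or settings of this collection.

# Minimal ORPO fine-tuning sketch with trl's ORPOTrainer.
# Model and dataset names are placeholders, not the items in the Zephyr ORPO collection.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

model_name = "Qwen/Qwen2-0.5B-Instruct"  # placeholder base model
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Preference dataset with chosen/rejected pairs (assumed example dataset)
dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

config = ORPOConfig(
    output_dir="orpo-model",
    beta=0.1,  # weight of the odds-ratio penalty (lambda in the ORPO paper)
    max_length=1024,
    per_device_train_batch_size=2,
    num_train_epochs=1,
)

trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    processing_class=tokenizer,  # older trl releases use tokenizer= instead
)
trainer.train()

Unlike DPO, ORPO needs no separate reference model: the odds-ratio term is added directly to the supervised fine-tuning loss, which is why the trainer only takes the policy model.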