Collections including paper arxiv:2403.07691

- DataComp-LM: In search of the next generation of training sets for language models
  Paper • 2406.11794 • Published • 50
- Training Language Models on Synthetic Edit Sequences Improves Code Synthesis
  Paper • 2410.02749 • Published • 12
- Fewer Truncations Improve Language Modeling
  Paper • 2404.10830 • Published • 3
- How to Train Long-Context Language Models (Effectively)
  Paper • 2410.02660 • Published • 2

- KTO: Model Alignment as Prospect Theoretic Optimization
  Paper • 2402.01306 • Published • 16
- Direct Preference Optimization: Your Language Model is Secretly a Reward Model
  Paper • 2305.18290 • Published • 50
- SimPO: Simple Preference Optimization with a Reference-Free Reward
  Paper • 2405.14734 • Published • 11
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment
  Paper • 2408.06266 • Published • 9

- Iterative Reasoning Preference Optimization
  Paper • 2404.19733 • Published • 47
- Better & Faster Large Language Models via Multi-token Prediction
  Paper • 2404.19737 • Published • 73
- ORPO: Monolithic Preference Optimization without Reference Model
  Paper • 2403.07691 • Published • 64
- KAN: Kolmogorov-Arnold Networks
  Paper • 2404.19756 • Published • 108

- ORPO: Monolithic Preference Optimization without Reference Model
  Paper • 2403.07691 • Published • 64
- ResearchAgent: Iterative Research Idea Generation over Scientific Literature with Large Language Models
  Paper • 2404.07738 • Published • 2
- Prometheus 2: An Open Source Language Model Specialized in Evaluating Other Language Models
  Paper • 2405.01535 • Published • 119