- Recurrent Neural Network Regularization
  Paper • 1409.2329 • Published
- Pointer Networks
  Paper • 1506.03134 • Published
- Order Matters: Sequence to sequence for sets
  Paper • 1511.06391 • Published
- GPipe: Efficient Training of Giant Neural Networks using Pipeline Parallelism
  Paper • 1811.06965 • Published

Collections including paper arxiv:1706.03762

- Attention Is All You Need
  Paper • 1706.03762 • Published • 50
- LLaMA: Open and Efficient Foundation Language Models
  Paper • 2302.13971 • Published • 13
- Efficient Tool Use with Chain-of-Abstraction Reasoning
  Paper • 2401.17464 • Published • 17
- MoMa: Efficient Early-Fusion Pre-training with Mixture of Modality-Aware Experts
  Paper • 2407.21770 • Published • 22

- ReAct: Synergizing Reasoning and Acting in Language Models
  Paper • 2210.03629 • Published • 16
- Attention Is All You Need
  Paper • 1706.03762 • Published • 50
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • 1810.04805 • Published • 16
- Jamba: A Hybrid Transformer-Mamba Language Model
  Paper • 2403.19887 • Published • 107

- RoFormer: Enhanced Transformer with Rotary Position Embedding
  Paper • 2104.09864 • Published • 11
- Attention Is All You Need
  Paper • 1706.03762 • Published • 50
- Direct Nash Optimization: Teaching Language Models to Self-Improve with General Preferences
  Paper • 2404.03715 • Published • 61
- Zero-Shot Tokenizer Transfer
  Paper • 2405.07883 • Published • 5

- Attention Is All You Need
  Paper • 1706.03762 • Published • 50
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • 1810.04805 • Published • 16
- DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter
  Paper • 1910.01108 • Published • 14
- Language Models are Few-Shot Learners
  Paper • 2005.14165 • Published • 12

- Long-form factuality in large language models
  Paper • 2403.18802 • Published • 25
- Attention Is All You Need
  Paper • 1706.03762 • Published • 50
- Language Models are Few-Shot Learners
  Paper • 2005.14165 • Published • 12
- A Survey of GPT-3 Family Large Language Models Including ChatGPT and GPT-4
  Paper • 2310.12321 • Published • 1

- Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
  Paper • 1701.06538 • Published • 5
- Attention Is All You Need
  Paper • 1706.03762 • Published • 50
- Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks
  Paper • 2005.11401 • Published • 11
- Language Model Evaluation Beyond Perplexity
  Paper • 2106.00085 • Published