leonardlin's Collections: reasoning
Can Large Language Models Understand Context?
Paper • 2402.00858 • Published • 22
Efficient Tool Use with Chain-of-Abstraction Reasoning
Paper • 2401.17464 • Published • 17
ReFT: Reasoning with Reinforced Fine-Tuning
Paper • 2401.08967 • Published • 29
The Impact of Reasoning Step Length on Large Language Models
Paper • 2401.04925 • Published • 16
Chain-of-Table: Evolving Tables in the Reasoning Chain for Table Understanding
Paper • 2401.04398 • Published • 21
Self-Discover: Large Language Models Self-Compose Reasoning Structures
Paper • 2402.03620 • Published • 114
Bias Runs Deep: Implicit Reasoning Biases in Persona-Assigned LLMs
Paper • 2311.04892 • Published • 1
More Agents Is All You Need
Paper • 2402.05120 • Published • 51
Grandmaster-Level Chess Without Search
Paper • 2402.04494 • Published • 67
The Benefits of a Concise Chain of Thought on Problem-Solving in Large Language Models
Paper • 2401.05618 • Published • 1
Divide-or-Conquer? Which Part Should You Distill Your LLM?
Paper • 2402.15000 • Published • 22
System 2 Attention (is something you might need too)
Paper • 2311.11829 • Published • 39
Quiet-STaR: Language Models Can Teach Themselves to Think Before Speaking
Paper • 2403.09629 • Published • 75
On the Conversational Persuasiveness of Large Language Models: A Randomized Controlled Trial
Paper • 2403.14380 • Published • 1
Orca-Math: Unlocking the potential of SLMs in Grade School Math
Paper • 2402.14830 • Published • 24
Language Models as Compilers: Simulating Pseudocode Execution Improves Algorithmic Reasoning in Language Models
Paper • 2404.02575 • Published • 48
Compression Represents Intelligence Linearly
Paper • 2404.09937 • Published • 27
Democratizing Reasoning Ability: Tailored Learning from Large Language Model
Paper • 2310.13332 • Published • 14
DeepSeek-Prover: Advancing Theorem Proving in LLMs through Large-Scale Synthetic Data
Paper • 2405.14333 • Published • 37
Large Language Models as Planning Domain Generators
Paper • 2405.06650 • Published • 9
ALPINE: Unveiling the Planning Capability of Autoregressive Learning in Language Models
Paper • 2405.09220 • Published • 24
On the Brittle Foundations of ReAct Prompting for Agentic Large Language Models
Paper • 2405.13966 • Published
Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization
Paper • 2405.15071 • Published • 37
Accessing GPT-4 level Mathematical Olympiad Solutions via Monte Carlo Tree Self-refine with LLaMa-3 8B
Paper • 2406.07394 • Published • 25
Mixture-of-Agents Enhances Large Language Model Capabilities
Paper • 2406.04692 • Published • 55
Your Context Is Not an Array: Unveiling Random Access Limitations in Transformers
Paper • 2408.05506 • Published • 8
Mutual Reasoning Makes Smaller LLMs Stronger Problem-Solvers
Paper • 2408.06195 • Published • 63