OpenVLA: An Open-Source Vision-Language-Action Model Paper • 2406.09246 • Published Jun 13, 2024 • 36
Eliciting Compatible Demonstrations for Multi-Human Imitation Learning Paper • 2210.08073 • Published Oct 14, 2022
DROID: A Large-Scale In-The-Wild Robot Manipulation Dataset Paper • 2403.12945 • Published Mar 19, 2024
Prismatic VLMs: Investigating the Design Space of Visually-Conditioned Language Models Paper • 2402.07865 • Published Feb 12, 2024 • 12
OBELICS: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents Paper • 2306.16527 • Published Jun 21, 2023 • 47
Accurately and Efficiently Interpreting Human-Robot Instructions of Varying Granularities Paper • 1704.06616 • Published Apr 21, 2017
A Tale of Two DRAGGNs: A Hybrid Approach for Interpreting Action-Oriented and Goal-Oriented Instructions Paper • 1707.08668 • Published Jul 26, 2017