- Can Large Language Models Understand Context?
  Paper • 2402.00858 • Published • 22
- OLMo: Accelerating the Science of Language Models
  Paper • 2402.00838 • Published • 82
- Self-Rewarding Language Models
  Paper • 2401.10020 • Published • 145
- SemScore: Automated Evaluation of Instruction-Tuned LLMs based on Semantic Textual Similarity
  Paper • 2401.17072 • Published • 25

Collections including paper arxiv:2403.15377

- guoyww/animatediff-motion-lora-zoom-in
  Text-to-Video • Updated • 40.9k • 7
- guoyww/animatediff-motion-adapter-v1-5-2
  Text-to-Video • Updated • 857 • 24
- guoyww/animatediff-motion-adapter-v1-4
  Text-to-Video • Updated • 31 • 5
- InternVideo2: Scaling Video Foundation Models for Multimodal Video Understanding
  Paper • 2403.15377 • Published • 22

- PLLaVA : Parameter-free LLaVA Extension from Images to Videos for Video Dense Captioning
  Paper • 2404.16994 • Published • 35
- VideoMamba: State Space Model for Efficient Video Understanding
  Paper • 2403.06977 • Published • 27
- VideoAgent: Long-form Video Understanding with Large Language Model as Agent
  Paper • 2403.10517 • Published • 32
- Video Mamba Suite: State Space Model as a Versatile Alternative for Video Understanding
  Paper • 2403.09626 • Published • 13

- InternVideo2: Scaling Video Foundation Models for Multimodal Video Understanding
  Paper • 2403.15377 • Published • 22
- OpenGVLab/InternVideo2-Chat-8B
  Video-Text-to-Text • Updated • 391 • 21
- OpenGVLab/InternVideo2_chat_8B_HD
  Video-Text-to-Text • Updated • 809 • 17
- OpenGVLab/InternVideo2_Chat_8B_InternLM2_5
  Video-Text-to-Text • Updated • 438 • 7