LLaVA-MR: Large Language-and-Vision Assistant for Video Moment Retrieval [arXiv]
Peking University, Tencent YouTu Lab, University at Albany, Zhejiang University
⚡ If you have any questions, please contact [email protected].
@article{lu2024llava,
  title={LLaVA-MR: Large Language-and-Vision Assistant for Video Moment Retrieval},
  author={Lu, Weiheng and Li, Jian and Yu, An and Chang, Ming-Ching and Ji, Shengpeng and Xia, Min},
  journal={arXiv preprint arXiv:2411.14505},
  year={2024}
}
Multimodal Large Language Models (MLLMs) are widely used for visual perception, understanding, and reasoning. However, long video processing and precise moment retrieval remain challenging due to LLMs’ limited context size and coarse frame extraction. We propose the Large Language-and-Vision Assistant for Moment Retrieval (LLaVA-MR), which enables accurate moment retrieval and contextual grounding in videos using MLLMs. LLaVA-MR combines Dense Frame and Time Encoding (DFTE) for spatial-temporal feature extraction, Informative Frame Selection (IFS) for capturing brief visual and motion patterns, and Dynamic Token Compression (DTC) to manage LLM context limitations. Evaluations on benchmarks like Charades-STA and QVHighlights demonstrate that LLaVA-MR outperforms 11 state-of-the-art methods, achieving an improvement of 1.82% in [email protected] and 1.29% in [email protected] on the QVHighlights dataset. Our implementation will be open-sourced upon acceptance.
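Since the implementation has not been released yet, the snippet below is only a minimal, hypothetical sketch of what informative-frame selection and token compression of this general kind can look like in PyTorch. The function names (`select_informative_frames`, `compress_tokens`), the frame-difference scoring, and the average-pooling compression are illustrative assumptions for readability, not the paper's DFTE/IFS/DTC modules.

```python
# Hypothetical sketch (not the authors' code): pick frames whose features change
# the most between neighbors, then pool the resulting visual tokens down to a
# fixed budget so they fit inside an LLM's context window.
import torch


def select_informative_frames(frame_feats: torch.Tensor, keep: int) -> torch.Tensor:
    """frame_feats: (T, D) per-frame features.

    Returns indices of `keep` frames whose features differ most from the previous
    frame -- a simple proxy for brief visual/motion events. The paper's actual
    IFS criterion may differ.
    """
    diffs = (frame_feats[1:] - frame_feats[:-1]).norm(dim=-1)   # (T-1,) change scores
    scores = torch.cat([diffs.new_zeros(1), diffs])             # score for frame 0 is 0
    return scores.topk(keep).indices.sort().values              # keep temporal order


def compress_tokens(tokens: torch.Tensor, budget: int) -> torch.Tensor:
    """tokens: (N, D) visual tokens.

    Average-pools groups of consecutive tokens so at most `budget` tokens are
    passed to the LLM (a stand-in for the paper's dynamic token compression).
    """
    n, d = tokens.shape
    if n <= budget:
        return tokens
    group = -(-n // budget)                                     # ceil(n / budget)
    pad = group * budget - n
    padded = torch.cat([tokens, tokens[-1:].expand(pad, d)], dim=0)
    return padded.view(budget, group, d).mean(dim=1)


if __name__ == "__main__":
    feats = torch.randn(128, 768)              # dummy features: 128 frames, 768-d each
    idx = select_informative_frames(feats, keep=32)
    vis_tokens = feats[idx]                    # pretend one token per selected frame
    print(compress_tokens(vis_tokens, budget=16).shape)   # torch.Size([16, 768])
```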