
VLM2Vec & MMEB

Benchmarking multimodal embeddings and adapting state-of-the-art multimodal large language models into embedding models.

VLM2Vec-V2: Advancing Multimodal Embedding for Videos, Images, and Visual Documents

University of Waterloo, Salesforce Research, UC Santa Barbara, Tsinghua University

Abstract

Multimodal embedding models have been crucial in enabling various downstream tasks such as semantic similarity, information retrieval, and clustering over different modalities. However, existing multimodal embedding models such as VLM2Vec, E5-V, and GME are predominantly focused on natural images, with limited support for other visual forms such as videos and visual documents. This restricts their applicability in real-world scenarios, including AI agents, multimodal search and recommendation, and retrieval-augmented generation (RAG). To close this gap, we propose VLM2Vec-V2, a unified framework for learning embeddings across diverse visual forms. First, we introduce MMEB-V2, a comprehensive benchmark that extends MMEB with five new task types: visual document retrieval, video retrieval, temporal grounding, video classification, and video question answering, spanning text, image, video, and visual document inputs. Next, we train VLM2Vec-V2, a general-purpose embedding model that supports text, image, video, and visual document inputs. Extensive experiments show that VLM2Vec-V2 not only achieves strong performance on the newly introduced video and document retrieval tasks, but also improves over prior baselines on the original image benchmarks. Through this evaluation, our study offers insights into the generalizability of various multimodal embedding models and highlights effective strategies for unified embedding learning, laying the groundwork for more scalable and adaptable representation learning in both research and real-world settings.
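To make the unified input format concrete, the minimal sketch below packages an image-retrieval query, a video query (as uniformly sampled frames), and a visual-document query (as a rendered page image) under a shared instruction-plus-visual-inputs structure. The instruction wording, frame budget, and helper names are illustrative assumptions, not the exact preprocessing used by VLM2Vec-V2.

# Minimal sketch: packaging different visual forms as (instruction, text, images) queries.
# The instruction wording and the 8-frame budget are illustrative assumptions.

def sample_frame_indices(num_frames: int, num_samples: int = 8) -> list:
    """Uniformly pick frame indices so a video fits a fixed image budget."""
    if num_frames <= num_samples:
        return list(range(num_frames))
    step = num_frames / num_samples
    return [int(i * step) for i in range(num_samples)]

def build_query(instruction: str, text: str, images: list) -> dict:
    """A single multimodal query: task instruction + optional text + visual inputs."""
    return {"instruction": instruction, "text": text, "images": images}

# Image retrieval: the query is a caption describing the target image.
image_query = build_query("Find an image that matches the given caption.",
                          "a dog catching a frisbee", images=[])

# Video retrieval: the video is reduced to a handful of uniformly sampled frames.
frame_ids = sample_frame_indices(num_frames=300)
video_query = build_query("Find the video described by the given sentence.",
                          "a person assembling a bookshelf",
                          images=[f"video/frame_{i:04d}.jpg" for i in frame_ids])

# Visual document retrieval: each candidate page is rendered as an image.
doc_query = build_query("Retrieve the document page that answers the question.",
                        "What was the company's 2023 revenue?",
                        images=["docs/report_page_07.png"])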

MMEB-V2 Benchmark

An overview of MMEB-V2, which includes 9 meta-tasks and 78 tasks in total. In addition to the original MMEB benchmark, MMEB-V2 introduces five new meta-tasks focused on video and visual documents. Tasks from MMEB are indicated with blue borders, while newly introduced tasks in MMEB-V2 are marked with red borders.


Reference

Please cite our paper if you use our code, data, models, or results:


@article{meng2025vlm2vecv2,
  title={VLM2Vec-V2: Advancing Multimodal Embedding for Videos, Images, and Visual Documents},
  author={Rui Meng and Ziyan Jiang and Ye Liu and Mingyi Su and Xinyi Yang and Yuepeng Fu and Can Qin and Zeyuan Chen and Ran Xu and Caiming Xiong and Yingbo Zhou and Wenhu Chen and Semih Yavuz},
  journal={arXiv preprint arXiv:2507.04590},
  year={2025}
}

VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks

University of Waterloo, Salesforce Research. Contact: ziyanjiang528@gmail.com, ruimeng@salesforce.com, wenhuchen@uwaterloo.ca

Abstract

Embedding models have been crucial in enabling various downstream tasks such as semantic similarity, information retrieval, and clustering. Recently, there has been a surge of interest in developing universal text embedding models that can generalize across tasks (e.g., MTEB). However, progress in learning universal multimodal embedding models has been relatively slow despite their importance. In this work, we aim to explore the potential for building universal embeddings capable of handling a wide range of downstream tasks. Our contributions are twofold: (1) MMEB (Massive Multimodal Embedding Benchmark), which covers 4 meta-tasks (classification, question answering, retrieval, and visual grounding) and 36 datasets, including 20 training and 16 evaluation datasets, and (2) VLM2Vec (Vision-Language Model -> Vector), a contrastive training framework that converts any state-of-the-art vision-language model into an embedding model via training on MMEB. Unlike previous models such as CLIP and BLIP, VLM2Vec can process any combination of images and text to generate a fixed-dimensional vector based on task instructions. We build a series of VLM2Vec models on state-of-the-art VLMs such as Phi-3.5-V and LLaVA-1.6 and evaluate them on MMEB's evaluation split. Our results show that VLM2Vec achieves an absolute average improvement of 10% to 20% over existing multimodal embedding models on both in-distribution and out-of-distribution datasets in MMEB. We show that VLMs are secretly strong embedding models.
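As a rough illustration of how a decoder-style VLM can yield a fixed-dimensional vector, the sketch below pools the hidden state of the last non-padding token of the interleaved instruction/text/image sequence and L2-normalizes it. The dummy shapes and the specific pooling and normalization choices are assumptions for illustration; refer to the paper for VLM2Vec's exact recipe.

import torch
import torch.nn.functional as F

# Minimal sketch of last-token pooling over a VLM's hidden states.
# Dummy tensor shapes and the L2 normalization are illustrative assumptions.

def pool_last_token(hidden_states: torch.Tensor,
                    attention_mask: torch.Tensor) -> torch.Tensor:
    """Return one fixed-dimensional, unit-norm embedding per sequence,
    taken from the hidden state of the last non-padding token."""
    # hidden_states: (batch, seq_len, hidden_dim); attention_mask: (batch, seq_len)
    last_positions = attention_mask.sum(dim=1) - 1
    batch_idx = torch.arange(hidden_states.size(0))
    embeddings = hidden_states[batch_idx, last_positions]   # (batch, hidden_dim)
    return F.normalize(embeddings, dim=-1)

# Dummy tensors standing in for the backbone's output on two sequences.
hidden = torch.randn(2, 16, 4096)
mask = torch.ones(2, 16, dtype=torch.long)
mask[1, 12:] = 0                                   # second sequence has 12 real tokens
query_embeddings = pool_last_token(hidden, mask)   # shape (2, 4096)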


Figure 1: We develop a universal multimodal embedding benchmark, MMEB, along with VLM2Vec, an embedding model adapted from vision-language models (VLMs). VLM2Vec is capable of following instructions and performing various multimodal embedding tasks, accommodating any combination of image and text modalities.

VLM2Vec

We propose the VLM2Vec framework to learn a single multimodal embedding model that can encode a series of images and text for any downstream task. Unlike traditional CLIP or BLIP embeddings, VLM2Vec can handle images of any resolution and text of any length. It can also follow instructions to produce instruction-guided representations, which fit downstream tasks better than task-agnostic multimodal embeddings.
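Training is contrastive: each query embedding is pulled toward its paired target and pushed away from the other targets in the same batch. The snippet below is a minimal InfoNCE-style sketch of that objective; the temperature value is an illustrative assumption rather than the setting used in the paper.

import torch
import torch.nn.functional as F

def infonce_loss(query_emb: torch.Tensor,
                 target_emb: torch.Tensor,
                 temperature: float = 0.02) -> torch.Tensor:
    """Contrastive loss with in-batch negatives: the i-th query should match
    the i-th target and mismatch every other target in the batch."""
    q = F.normalize(query_emb, dim=-1)
    t = F.normalize(target_emb, dim=-1)
    logits = q @ t.T / temperature                 # (batch, batch) similarity matrix
    labels = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, labels)

# Dummy batch of 4 query/target embedding pairs.
loss = infonce_loss(torch.randn(4, 4096), torch.randn(4, 4096))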


MMEB Benchmark

The model was trained with contrastive learning on a massive set of examples compiled from 36 datasets spanning 4 meta-tasks. We name this benchmark MMEB; it has separate training and evaluation splits, and we hold out 15 datasets for out-of-distribution evaluation.
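Each compiled example pairs an instruction-prefixed query with a positive target, where either side may carry text, an image, or both. The record layout below is a plausible sketch of such an example; the field names and instruction wording are illustrative, not the released data schema.

from dataclasses import dataclass
from typing import Optional

@dataclass
class MMEBExample:
    """One contrastive training example: an instruction-guided query and its target.
    Field names are illustrative, not the released schema."""
    instruction: str                      # task description prepended to the query
    query_text: str = ""
    query_image: Optional[str] = None     # path to the query image, if any
    target_text: str = ""
    target_image: Optional[str] = None    # path to the positive target image, if any

# A text-to-image retrieval example: caption query, image target.
example = MMEBExample(
    instruction="Find an image that matches the given caption.",
    query_text="a red bicycle leaning against a brick wall",
    target_image="images/000123.jpg",
)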


MMEB Evaluation

We evaluated a wide range of multimodal embedding models on the MMEB benchmark; results are shown below. VLM2Vec outperforms all baselines by a large margin, and its improvement on out-of-distribution evaluation demonstrates the generalization capability of the VLM2Vec framework.
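For intuition, evaluation on a retrieval-style dataset amounts to ranking a pool of candidate targets for each query by embedding similarity and checking whether the labeled positive ranks first. The sketch below computes such a Precision@1 score with cosine similarity; the candidate-pool setup is illustrative rather than the benchmark's exact protocol.

import torch
import torch.nn.functional as F

def precision_at_1(query_emb: torch.Tensor,
                   candidate_emb: torch.Tensor,
                   gold_index: torch.Tensor) -> float:
    """Fraction of queries whose highest-scoring candidate (by cosine similarity)
    is the labeled positive."""
    q = F.normalize(query_emb, dim=-1)        # (num_queries, dim)
    c = F.normalize(candidate_emb, dim=-1)    # (num_candidates, dim)
    scores = q @ c.T                          # (num_queries, num_candidates)
    return (scores.argmax(dim=-1) == gold_index).float().mean().item()

# Dummy evaluation: 5 queries ranked against a pool of 100 candidates.
acc = precision_at_1(torch.randn(5, 4096), torch.randn(100, 4096),
                     gold_index=torch.randint(0, 100, (5,)))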

[MMEB evaluation results]

Reference

Please cite our paper if you use our code, data, models, or results:


@article{jiang2024vlm2vec,
  title={VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks},
  author={Jiang, Ziyan and Meng, Rui and Yang, Xinyi and Yavuz, Semih and Zhou, Yingbo and Chen, Wenhu},
  journal={arXiv preprint arXiv:2410.05160},
  year={2024}
}