VideoScore

Building Automatic Metrics to Simulate Fine-grained Human Feedback for Video Generation


1,2,† Xuan He*, 1,† Dongfu Jiang*, 1,3 Ge Zhang, 1 Max Ku,
1Achint Soni, 1Sherman Siu, 1Haonan Chen, 1Abhranil Chandra, 1Ziyan Jiang, 1Aaran Arulraj, 4Kai Wang, 1Quy Duc Do, 1Yuansheng Ni, 2Bohan Lyu, 1Yaswanth Narsupalli, 1Rongqi Fan, 1Zhiheng Lyu, 5Bill Yuchen Lin, 1,† Wenhu Chen

1University of Waterloo, 2Tsinghua University, 3StarDust.AI, 4University of Toronto, 5AI2

*Equal Contribution

Abstract

Recent years have witnessed great advances in video generation, yet the development of automatic video metrics lags significantly behind. None of the existing metrics can provide reliable scores for generated videos. The main barrier is the lack of a large-scale human-annotated dataset.

  1. VideoFeedback Dataset. In this paper, we release VideoFeedback, the first large-scale dataset containing human-provided multi-aspect scores over 37.6K synthesized videos from 11 existing video generative models.
  2. VideoScore. We train VideoScore (initialized from Mantis) on VideoFeedback to enable automatic video quality assessment. Experiments show that the Spearman correlation between VideoScore and human raters reaches 77.1 on VideoFeedback-test, beating the prior best metrics by about 50 points. Further results on the held-out EvalCrafter, GenAI-Bench, and VBench benchmarks show that VideoScore has a consistently much higher correlation with human judges than other metrics.
  3. Human Feedback for Video Generative Models. Given these results, we believe VideoScore can serve as a great proxy for human raters to (1) rate different video models to track progress and (2) simulate fine-grained human feedback in Reinforcement Learning from Human Feedback (RLHF) to improve current video generation models.

VideoFeedback Dataset
Multi-Aspect Human-Annotated Video Evaluation Data

VideoFeedback contains a total of 37.6K text-to-video pairs from 11 popular video generative models, with some real-world videos added as data augmentation. The videos are annotated by raters along five evaluation dimensions: Visual Quality (VQ), Temporal Consistency (TC), Dynamic Degree (DD), Text-to-Video Alignment (TVA), and Factual Consistency (FC), each on a 1-4 scale. Below we show a detailed description of our VideoFeedback dataset. Please check out 🤗 VideoFeedback on Hugging Face Datasets for usage; a minimal loading sketch is shown below.
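As a quick start, here is a minimal, hedged sketch of loading the data with the 🤗 `datasets` library; the dataset id `TIGER-Lab/VideoFeedback` and the `annotated` config name are assumptions taken from the Hugging Face dataset card, so check the card for the exact identifiers and splits.

```python
# Minimal sketch, assuming the dataset is published as "TIGER-Lab/VideoFeedback"
# with an "annotated" config for the human-scored synthetic videos
# (check the Hugging Face dataset card for the exact id, configs, and splits).
from datasets import load_dataset

ds = load_dataset("TIGER-Lab/VideoFeedback", "annotated")

print(ds)                  # available splits and their sizes
example = ds["train"][0]
print(example.keys())      # prompt, video reference, and the five 1-4 aspect scores
```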

Statistics

Dimensions of Evaluation

Annotation Examples

1-Bad, 2-Average, 3-Good, 4-Real/Perfect

GIF 1

prompt: completely base your choice of which one to visit today on the dish that most entices your taste buds, 1080P, high quality, comic

VQ  TC  DD  TVA  FC
3   3   1   3    3
GIF 1

prompt: an African American female video editor editing videos

VQ  TC  DD  TVA  FC
1   1   3   3    1
GIF 1

prompt: Cinematic, A light rain is falling. Tea pickers are picking tea in a tea garden, 4K, anime style

VQ  TC  DD  TVA  FC
3   2   3   3    1
GIF 1

prompt: crypto new year Christmas santa money dollars pack

VQ  TC  DD  TVA  FC
1   2   3   3    1
GIF 1

prompt: Woman receiving a rose and blushing with a smile

VQ  TC  DD  TVA  FC
2   2   3   3    2
GIF 1

prompt: panorama gold coast city in future as a dystopian prison

VQ  TC  DD  TVA  FC
2   3   3   2    3
GIF 1

prompt: little bear looks surprised as the moon gets smaller

VQ  TC  DD  TVA  FC
1   2   3   1    2
GIF 1

prompt: alexandra daddario, upperbody focus, slow motion, cinematic

VQ  TC  DD  TVA  FC
2   2   3   3    1
GIF 1

prompt: cinematic portrait of two dogs running away from a medieval man

VQ  TC  DD  TVA  FC
1   2   3   2    1
GIF 1

prompt: a skateboard on the bottom of a surfboard, front view

VQ  TC  DD  TVA  FC
3   3   3   3    2
GIF 1

prompt: yellow van with trailer starts to back up

VQ  TC  DD  TVA  FC
4   4   4   4    4
GIF 1

prompt: five gray wolf pups frolicking and chasing each other around a remote gravel road, surrounded by grass. The pups run and leap, chasing each other, and nipping at each other, playing

VQ  TC  DD  TVA  FC
4   2   4   2    4

VideoScore

VideoScore is fine-tuned on the 37K training set of the VideoFeedback dataset, taking Mantis-8B-Idefics2 as the base model. We try two scoring methods: a generation method, where the model's answer follows a template predefined for video quality evaluation, and a regression method, where the model outputs 5 logits as the evaluation scores for the 5 dimensions. We also ablate the base model, fine-tuning Mantis-8B-Idefics2, Idefics2-8B, and VideoLLaVA-7B; Mantis-8B-Idefics2 turns out to perform best on video quality evaluation.
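To make the regression variant concrete, here is a conceptual PyTorch sketch rather than the released implementation: a linear head on top of the backbone's final hidden state produces five real-valued scores (VQ, TC, DD, TVA, FC) and is trained with an MSE loss against the human 1-4 ratings. The hidden size, pooling choice, and loss are illustrative assumptions.

```python
import torch
import torch.nn as nn

class RegressionScoringHead(nn.Module):
    """Conceptual sketch of the regression scoring variant: the multimodal
    backbone (e.g. Mantis-8B-Idefics2) encodes the video frames and prompt,
    and a linear head maps its final hidden state to five aspect scores
    (VQ, TC, DD, TVA, FC). Sizes here are illustrative, not the real config."""

    def __init__(self, hidden_size: int = 4096, num_aspects: int = 5):
        super().__init__()
        self.score_head = nn.Linear(hidden_size, num_aspects)

    def forward(self, last_hidden_state: torch.Tensor) -> torch.Tensor:
        # Pool the hidden state of the last token as a summary of the
        # (frames + prompt) sequence, then project to 5 aspect scores.
        pooled = last_hidden_state[:, -1, :]      # (batch, hidden_size)
        return self.score_head(pooled)            # (batch, num_aspects)

# Toy usage with random activations standing in for the backbone output.
head = RegressionScoringHead()
fake_hidden = torch.randn(2, 128, 4096)           # (batch, seq_len, hidden_size)
scores = head(fake_hidden)                        # (2, 5): VQ, TC, DD, TVA, FC
targets = torch.tensor([[3., 1., 3., 3., 1.],
                        [2., 2., 3., 3., 2.]])    # human 1-4 ratings
loss = nn.MSELoss()(scores, targets)              # regression training signal
```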

Video Evaluation Benchmarks

VideoFeedback-test

We test VideoScore on the VideoFeedback-test set, which contains 760 videos with human scores along the five dimensions. We take the Spearman correlation between VideoScore and human annotation as the performance indicator. Below we show the results of feature-based metrics such as PIQE, CLIP-sim, and X-CLIP-Score, MLLM-prompting methods such as GPT-4o and Gemini-1.5-Pro, and our VideoScore.
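For reference, the correlation computation itself is standard; below is a minimal sketch on toy numbers (not real data) using `scipy.stats.spearmanr` for a single evaluation dimension.

```python
# Minimal sketch of the evaluation protocol: Spearman correlation between
# metric outputs and human ratings for one evaluation dimension (toy data).
from scipy.stats import spearmanr

human_scores  = [3, 1, 4, 2, 3, 1]               # toy human ratings
metric_scores = [2.7, 1.2, 3.9, 2.1, 2.5, 1.4]   # toy metric outputs

rho, p_value = spearmanr(human_scores, metric_scores)
print(f"Spearman rho = {rho:.3f} (p = {p_value:.3g})")
```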


EvalCrafter Benchmark

We select 3 dimensions (Visual Quality, Temporal Consistency, and Text-to-Video Alignment) from EvalCrafter that match our evaluation aspects and collect 2500+ videos for testing. We take the Spearman correlation between VideoScore and human annotation as the performance indicator.


⚔️ GenAI-Bench and VBench

GenAI-Bench is a multimodal benchmark for MLLMs' capability in preference comparison on tasks such as text-to-video generation and image editing, while VBench is a comprehensive multi-aspect benchmark suite for video generative models.
For GenAI-Bench we collect 2100+ videos for testing. For VBench we select a subset covering 5 aspects (e.g., technical quality and subject consistency) and subsample 100 unique prompts for four T2V models (2000 videos in total) for testing. For MLLM-prompting baselines and VideoScore, we use the average score over our five dimensions to decide the preference and take pairwise accuracy as the performance indicator; a sketch of this protocol is shown below.
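Below is a minimal sketch of this pairwise protocol on toy data; `predict_preference` and `pairwise_accuracy` are helper names introduced here for illustration, not functions from the released codebase.

```python
# Sketch of the pairwise-preference protocol used for GenAI-Bench and VBench:
# average the five per-aspect scores of each video in a pair, prefer the video
# with the higher average, and measure agreement with the human preference.
from statistics import mean

def predict_preference(scores_left, scores_right):
    """Return 'left', 'right', or 'tie' given two 5-dimensional score lists."""
    avg_left, avg_right = mean(scores_left), mean(scores_right)
    if avg_left > avg_right:
        return "left"
    if avg_right > avg_left:
        return "right"
    return "tie"

def pairwise_accuracy(pairs):
    """pairs: list of (scores_left, scores_right, human_label) tuples."""
    correct = sum(predict_preference(l, r) == label for l, r, label in pairs)
    return correct / len(pairs)

# Toy example: one pair in which the human rater preferred the left video.
print(pairwise_accuracy([([3, 3, 2, 3, 3], [2, 2, 3, 2, 2], "left")]))  # 1.0
```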

Results & Leaderboard

Legend: VideoScore series · MLLM prompting methods · Feature-based metrics

Metric             Final Avg Score   VideoFeedback-test   EvalCrafter   GenAI-Bench   VBench
VideoScore (reg)   69.6              75.7                 51.1          78.5          73.0
VideoScore (gen)   55.6              77.1                 27.6          59.0          58.7
Gemini-1.5-Pro     39.7              22.1                 22.9          60.9          52.9
Gemini-1.5-Flash   39.4              20.8                 17.3          67.1          52.3
GPT-4o             38.9              23.1                 28.7          52.0          51.7
CLIP-sim           31.7               8.9                 36.2          34.2          47.4
DINO-sim           30.3               7.5                 32.1          38.5          43.3
SSIM-sim           29.5              13.4                 26.9          34.1          43.5
CLIP-Score         28.6              -7.2                 21.7          45.0          54.9
LLaVA-1.5-7B       27.1               8.5                 10.5          49.9          39.4
LLaVA-1.6-7B       23.3              -3.1                 13.2          44.5          38.7
X-CLIP-Score       23.2              -1.9                 13.3          41.4          40.1
PIQE               19.6             -10.1                 -1.2          34.5          55.1
BRISQUE            19.0             -20.3                  3.9          38.5          53.7
Idefics1           18.3               6.5                  0.3          34.6          31.7
MSE-dyn            10.6              -5.5                -17.0          28.4          36.5
SSIM-dyn            9.2             -12.9                -26.4          31.4          44.5

Rows are sorted by Final Avg Score; the top two rows are our VideoScore models and all other rows are baseline metrics.

Case Studies

VideoFeedback-test

The scale of all scores is {1, 2, 3, 4}, except for VideoScore (reg), which outputs five float logits ranging from 0.50 to 4.50.
For the {1, 2, 3, 4} scale: 1-Bad, 2-Average, 3-Good, 4-Perfect/Real.

GIF

prompt: A robot that throws a stack of paper from a desk

Method             VQ     TC     DD     TVA    FC
Human score        3      1      3      3      1
VideoScore (reg)   2.67   0.81   3.09   2.50   0.80
VideoScore (gen)   3      1      3      3      1
GPT-4o             3      4      2      3      4
Gemini-1.5-Pro     3      1      1      3      3
Gemini-1.5-Flash   3      1      1      3      3
LLaVA-1.6-7B       3      3      3      3      3
LLaVA-1.5-7B       3      3      3      3      2
Idefics1           4      4      3      1      2
PIQE               1      1      1      1      1
DINO-sim           1      1      1      1      1
SSIM-dyn           3      3      3      3      3
CLIP-Score         2      2      2      2      2
GIF

prompt: Illustrate a bustling market scene, with fresh produce displayed on stalls, attracting villagers eager to purchase, cartoon style

Method             VQ     TC     DD     TVA    FC
Human score        1      2      3      2      2
VideoScore (reg)   1.91   1.86   2.84   2.44   1.67
VideoScore (gen)   2      1      3      1      1
GPT-4o             3      3      3      4      4
Gemini-1.5-Pro     2      2      1      3      3
Gemini-1.5-Flash   3      1      1      2      3
LLaVA-1.6-7B       3      3      3      3      3
LLaVA-1.5-7B       3      3      3      2      2
Idefics1           4      4      3      1      2
PIQE               2      2      2      2      2
DINO-sim           4      4      4      4      4
SSIM-dyn           2      2      2      2      2
CLIP-Score         3      3      3      3      3
GIF

prompt: Every day must be Sunday Amusement park inside the school

Method             VQ     TC     DD     TVA    FC
Human score        1      1      3      2      1
VideoScore (reg)   1.04   1.42   2.95   1.97   1.09
VideoScore (gen)   1      1      3      2      1
GPT-4o             3      4      2      3      3
Gemini-1.5-Pro     2      1      2      2      1
Gemini-1.5-Flash   2      1      1      2      1
LLaVA-1.6-7B       3      3      3      2      2
LLaVA-1.5-7B       3      3      3      2      2
Idefics1           4      4      3      1      2
PIQE               1      1      1      1      1
DINO-sim           3      3      3      3      3
SSIM-dyn           4      4      4      4      4
CLIP-Score         2      2      2      2      2

⚔️ GenAI-Bench

Each item contains two videos generated from the same prompt and a human preference annotation. For VideoScore and MLLM-prompting methods, we use the average score over all 5 dimensions to predict the preference, while for feature-based metrics, we use their discretized output to predict the preference directly.

GIF

Left Video
prompt: a cute dog is playing a ball

GIF

Right Video
prompt: a cute dog is playing a ball

GIF
GIF

Left Video
prompt: An astronaut flying in space, oil painting

GIF

Right Video
prompt: An astronaut flying in space, oil painting

GIF

BibTeX

@article{he2024videoscore,
  title = {VideoScore: Building Automatic Metrics to Simulate Fine-grained Human Feedback for Video Generation},
  author = {He, Xuan and Jiang, Dongfu and Zhang, Ge and Ku, Max and Soni, Achint and Siu, Sherman and Chen, Haonan and Chandra, Abhranil and Jiang, Ziyan and Arulraj, Aaran and Wang, Kai and Do, Quy Duc and Ni, Yuansheng and Lyu, Bohan and Narsupalli, Yaswanth and Fan, Rongqi and Lyu, Zhiheng and Lin, Bill Yuchen and Chen, Wenhu},
  journal = {ArXiv},
  year = {2024},
  volume={abs/2406.15252},
  url = {https://arxiv.org/abs/2406.15252},
}