---
title: OpenS2V Eval
emoji: π
colorFrom: gray
colorTo: blue
sdk: gradio
sdk_version: 5.49.1
app_file: app.py
pinned: false
license: apache-2.0
short_description: A Detailed Benchmark for Subject-to-Video Generation
thumbnail: >-
  https://cdn-uploads.huggingface.co/production/uploads/63468720dd6d90d82ccf3450/N9kKR052363-MYkJkmD2V.png
---
<div align=center>
<img src="https://github.com/PKU-YuanGroup/OpenS2V-Nexus/blob/main/__assets__/OpenS2V-Nexus_logo.png?raw=true" width="300px">
</div>
<h2 align="center"> <a href="https://pku-yuangroup.github.io/OpenS2V-Nexus/">OpenS2V-Nexus: A Detailed Benchmark and Million-Scale Dataset for Subject-to-Video Generation</a></h2>
<h5 align="center"> If you like our project, please give us a star ⭐ on GitHub for the latest updates. </h5>
## ✨ Summary

**OpenS2V-Eval** introduces 180 prompts spanning seven major categories of subject-to-video (S2V) generation, incorporating both real and synthetic test data. Furthermore, to accurately align S2V benchmarks with human preferences, we propose three automatic metrics: **NexusScore**, **NaturalScore**, and **GmeScore**, which separately quantify subject consistency, naturalness, and text relevance in generated videos. Building on these, we conduct a comprehensive evaluation of 18 representative S2V models, highlighting their strengths and weaknesses across different content types.
## 📣 Evaluate Your Own Models

To evaluate your own model with OpenS2V-Eval as described in the [OpenS2V-Nexus paper](https://huggingface.co/papers/2505.20292), please refer to the [evaluation guide](https://github.com/PKU-YuanGroup/OpenS2V-Nexus/tree/main/eval).
## ⚙️ Get Videos Generated by Different S2V Models

For videos generated by the evaluated models, please refer to the [Results folder](https://huggingface.co/datasets/BestWishYsh/OpenS2V-Eval/tree/main/Results) of the benchmark dataset.
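The Results folder linked above can also be fetched programmatically. Below is a minimal sketch using `huggingface_hub`; the per-model subfolder layout (`Results/<model_name>/...`) and the helper names are assumptions for illustration, not confirmed by this card.

```python
REPO_ID = "BestWishYsh/OpenS2V-Eval"


def results_pattern(model_name: str) -> str:
    """Glob pattern selecting one model's videos.

    The per-model subfolder layout is an assumption about the dataset repo.
    """
    return f"Results/{model_name}/*"


def download_results(model_name: str, local_dir: str = "OpenS2V-Eval-Results") -> str:
    """Download only one model's generated videos instead of the whole repo."""
    # Lazy import: keeps this snippet importable without huggingface_hub installed.
    from huggingface_hub import snapshot_download

    return snapshot_download(
        repo_id=REPO_ID,
        repo_type="dataset",
        allow_patterns=[results_pattern(model_name)],
        local_dir=local_dir,
    )
```

For example, `download_results("SomeS2VModel")` (hypothetical model name) would mirror only that model's part of `Results/` into a local `OpenS2V-Eval-Results/` directory.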
## 💡 Description

- **Repository:** [Code](https://github.com/PKU-YuanGroup/OpenS2V-Nexus), [Page](https://pku-yuangroup.github.io/OpenS2V-Nexus/), [Dataset](https://huggingface.co/datasets/BestWishYsh/OpenS2V-5M), [Benchmark](https://huggingface.co/datasets/BestWishYsh/OpenS2V-Eval)
- **Paper:** [https://huggingface.co/papers/2505.20292](https://huggingface.co/papers/2505.20292)
- **Point of Contact:** [Shenghai Yuan](mailto:shyuan-cs@hotmail.com)
## ✏️ Citation

If you find our paper and code useful in your research, please consider giving us a star and a citation.
```BibTeX
@article{yuan2025opens2v,
  title={OpenS2V-Nexus: A Detailed Benchmark and Million-Scale Dataset for Subject-to-Video Generation},
  author={Yuan, Shenghai and He, Xianyi and Deng, Yufan and Ye, Yang and Huang, Jinfa and Lin, Bin and Luo, Jiebo and Yuan, Li},
  journal={arXiv preprint arXiv:2505.20292},
  year={2025}
}
```