---
language:
- zh
license: apache-2.0
size_categories:
- 1K<n<10K
task_categories:
- text-retrieval
tags:
- text
- retrieval
configs:
- config_name: passages
  data_files:
  - split: test
    path: passages/test*
- config_name: queries
  data_files:
  - split: test
    path: queries/test*
---

**CapRetrieval** is a dataset introduced in the EMNLP 2025 Findings paper: [Dense Retrievers Can Fail on Simple Queries: Revealing The Granularity Dilemma of Embeddings](https://arxiv.org/abs/2506.08592).
|
|
**CapRetrieval** is in Chinese; a corresponding English version is available at [CapRetrievalEn](https://huggingface.co/datasets/lxucs/CapRetrievalEn), sharing the same queries, passages, and labels.
|
|
### Introduction
|
|
CapRetrieval evaluates fine-grained embedding matching (dense passage retrieval), tailored toward a practical image-search scenario:
- Candidate passages are image captions, and queries are short phrases describing entities or events reflected in the captions.
- Overall, the dataset comprises seemingly simple queries and captions; however, text encoders are shown to have limitations in resolving these cases.
- The evaluation results call attention to embedding training strategies at different levels of **granularity**.
|
|
### Format
|
|
CapRetrieval follows the same retrieval task format as MTEB, with a graded relevance label in {0, 1, 2} for each query-passage pair.
Note that unlike prior datasets, we annotate labels for every query-passage pair (1.3 million pairs in total), minimizing false negatives for more accurate evaluation.
|
|
A small number of queries do not have any relevant captions; they are excluded from the computation of retrieval metrics (e.g. nDCG), but can be useful for other analyses, e.g. in a classification setting.
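As a sketch of this handling (the function and field names here are illustrative, not the dataset's actual schema), queries with no positive (label > 0) pair would be set aside before computing retrieval metrics:

```python
def split_queries(qrels):
    """Partition query IDs by whether they have any relevant (label > 0) passage.

    qrels: {query_id: {passage_id: label}} with graded labels in {0, 1, 2}.
    Returns (scored_qids, unscored_qids); only scored_qids enter nDCG etc.
    """
    scored, unscored = [], []
    for qid, labels in qrels.items():
        (scored if any(l > 0 for l in labels.values()) else unscored).append(qid)
    return scored, unscored
```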
|
|
### Evaluation
|
|
Please see the evaluation script and results at https://github.com/lxucs/CapRetrieval.
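For reference, a minimal sketch of graded nDCG@k using the common exponential-gain formulation (the official script in the repository above is authoritative and may differ in details):

```python
import math

def ndcg_at_k(ranked_labels, all_labels, k=10):
    """nDCG@k with exponential gain (2^rel - 1), for graded labels in {0, 1, 2}.

    ranked_labels: labels of the retrieved passages, in ranked order.
    all_labels: labels of all annotated passages for the query,
                used to build the ideal ranking.
    """
    def dcg(labels):
        return sum((2 ** rel - 1) / math.log2(rank + 2)
                   for rank, rel in enumerate(labels[:k]))

    ideal = dcg(sorted(all_labels, reverse=True))
    return dcg(ranked_labels) / ideal if ideal > 0 else 0.0
```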
|
|
| Type     | Model                   | nDCG@10   |
|----------|-------------------------|-----------|
| **BM25** | Basic BM25              | 66.54     |
| **0.1B** | bge-base-zh-v1.5        | 78.86     |
|          | gte-multilingual-base   | 79.67     |
|          | multilingual-e5-base    | 76.33     |
| **0.3B** | bge-large-zh-v1.5       | 79.15     |
|          | multilingual-e5-large   | 81.01     |
|          | Conan-embedding-v1      | 77.04     |
| **0.6B** | Qwen3-Embedding-0.6B    | 81.04     |
| **>1B**  | gte-Qwen2-1.5B-instruct | 77.35     |
|          | gte-Qwen2-7B-instruct   | **86.55** |
|          | e5-mistral-7b-instruct  | 76.40     |
|          | Qwen3-Embedding-8B      | 84.61     |
|          |                         |           |
| Trained  | Out-of-Domain           | 87.23     |
|          | In-Domain               | 91.83     |
|
|
|
|
The trained models (based on `bge-base-zh-v1.5`) are trained on queries generated by our data generation strategies described in the paper. The in-domain model can be downloaded from [Google Drive](https://drive.google.com/drive/folders/1l2pvELMQPKjhAasNGaY7d14jMK0iCRhj).
|
|
|
|
### Citation
|
|
```bibtex
@inproceedings{xu-etal-2025-dense,
    title = "Dense Retrievers Can Fail on Simple Queries: Revealing The Granularity Dilemma of Embeddings",
    author = "Xu, Liyan and Su, Zhenlin and Yu, Mo and Li, Jiangnan and Meng, Fandong and Zhou, Jie",
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2025",
    month = nov,
    year = "2025",
    address = "Suzhou, China",
    publisher = "Association for Computational Linguistics"
}
```