---

license: cc-by-4.0
task_categories:
- visual-question-answering
language:
- en
tags:
- multimodal large language models
- face perception
---


# FaceBench Dataset
- **Paper:** https://ieeexplore.ieee.org/document/11092731
- **Repository:** https://github.com/CVI-SZU/FaceBench
- **Face-LLaVA:** https://huggingface.co/wxqlab/face-llava-v1.5-13b

## Dataset Summary

We release the FaceBench dataset, which consists of 49,919 visual question-answering (VQA) pairs for evaluation and 23,841 pairs for fine-tuning.
FaceBench is built on a hierarchical facial attribute structure spanning five views, each with up to three levels of attributes, covering more than 210 attributes and 700 attribute values.
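
The view/attribute hierarchy can be sketched as a nested mapping. Below is a minimal toy illustration: only the "Appearance" view and the "beard" attribute come from the dataset example in this card; the deeper attribute name is a hypothetical placeholder, not an actual FaceBench attribute.

```python
# Toy sketch of FaceBench's hierarchy: view -> level-1 attribute -> deeper
# levels. "Appearance" and "beard" come from the dataset example; the
# level-2 name is a hypothetical placeholder.
hierarchy = {
    "Appearance": {
        "beard": {                # level-1 attribute (from the example record)
            "beard style": {},    # hypothetical level-2 attribute
        },
    },
}

def count_attributes(node):
    """Recursively count attribute nodes below a view."""
    return sum(1 + count_attributes(child) for child in node.values())

total = sum(count_attributes(attrs) for attrs in hierarchy.values())
print(total)  # 2 attributes in this toy hierarchy
```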

## Dataset Example
```json
{
    "question_id": "beard_q0", 
    "question_type": "TFQ", 
    "image_id": "test-CelebA-HQ-1279.jpg", 
    "text": "Does the person in the image have a beard?", 
    "instruction": "Please directly select the appropriate option from the given choices based on the image.", "options": ["Yes", "No", "Information not visible"], 
    "conditions": {"option Y": ["beard_q1", "beard_q2", "beard_q3", "beard_q4"], "option N": []}, 
    "gt_answer": "Yes", 
    "metadata": {"image_source": "CelebA-HQ", "view": "Appearance", "attribute_level": "level 1"}
}
```
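
A record like the one above can be consumed with the standard `json` module. Note one assumption in this sketch: based on the field names, we read `"conditions"` as mapping each answer option (`"option Y"` / `"option N"`) to the follow-up question IDs to ask when that option is chosen; the card does not spell this out.

```python
import json

# Record copied from the dataset example above (image fields omitted for brevity).
record = json.loads("""
{
    "question_id": "beard_q0",
    "question_type": "TFQ",
    "text": "Does the person in the image have a beard?",
    "gt_answer": "Yes",
    "conditions": {"option Y": ["beard_q1", "beard_q2", "beard_q3", "beard_q4"],
                   "option N": []}
}
""")

# Assumption: "option Y"/"option N" list follow-up questions conditioned on
# a Yes/No answer, so a Yes here unlocks the level-2 beard questions.
key = "option Y" if record["gt_answer"] == "Yes" else "option N"
follow_ups = record["conditions"][key]
print(follow_ups)  # ['beard_q1', 'beard_q2', 'beard_q3', 'beard_q4']
```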

## Citation
```
@inproceedings{wang2025facebench,
  title={FaceBench: A Multi-View Multi-Level Facial Attribute VQA Dataset for Benchmarking Face Perception MLLMs},
  author={Wang, Xiaoqin and Ma, Xusen and Hou, Xianxu and Ding, Meidan and Li, Yudong and Chen, Junliang and Chen, Wenting and Peng, Xiaoyang and Shen, Linlin},
  booktitle={Proceedings of the Computer Vision and Pattern Recognition Conference},
  pages={9154--9164},
  year={2025}
}
```