---
license: cc-by-4.0
task_categories:
  - visual-question-answering
language:
  - en
tags:
  - multimodal large language models
  - face perception
---

# FaceBench Dataset

## Dataset Summary

We release the FaceBench dataset, which consists of 49,919 visual question-answering (VQA) pairs for evaluation and 23,841 pairs for fine-tuning. FaceBench is built upon a hierarchical facial attribute structure, which encompasses five views with up to three levels of attributes, totaling over 210 attributes and 700 attribute values.
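The VQA pairs can be explored with the `datasets` library. This is a minimal sketch, assuming the data files are stored in a format `load_dataset` can auto-detect (e.g. JSON) and that the split name below exists; adjust it to the actual file layout of this repository.

```python
from datasets import load_dataset

# Assumption: the repo's data files are auto-detectable by `datasets`
# (e.g. JSON/JSONL); the split name "test" is hypothetical.
ds = load_dataset("wxqlab/FaceBench")

sample = ds["test"][0]
print(sample["text"])       # the question
print(sample["options"])    # answer choices
print(sample["gt_answer"])  # ground-truth answer
```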

## Dataset Example

```json
{
    "question_id": "beard_q0",
    "question_type": "TFQ",
    "image_id": "test-CelebA-HQ-1279.jpg",
    "text": "Does the person in the image have a beard?",
    "instruction": "Please directly select the appropriate option from the given choices based on the image.",
    "options": ["Yes", "No", "Information not visible"],
    "conditions": {"option Y": ["beard_q1", "beard_q2", "beard_q3", "beard_q4"], "option N": []},
    "gt_answer": "Yes",
    "metadata": {"image_source": "CelebA-HQ", "view": "Appearance", "attribute_level": "level 1"}
}
```
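The `conditions` field maps answer options to follow-up question IDs, so evaluation descends the attribute hierarchy only when the parent attribute is present. The snippet below is a minimal sketch of that branching logic; the `follow_up_ids` helper is hypothetical, and the assumption that condition keys use the option's initial (`"option Y"` / `"option N"`) is generalized from the single record shown above.

```python
# The example record from above, trimmed to the fields used here.
record = {
    "question_id": "beard_q0",
    "options": ["Yes", "No", "Information not visible"],
    "conditions": {
        "option Y": ["beard_q1", "beard_q2", "beard_q3", "beard_q4"],
        "option N": [],
    },
    "gt_answer": "Yes",
}

def follow_up_ids(record, answer):
    """Return the follow-up question IDs triggered by an answer.

    Assumes condition keys are keyed on the option's first letter,
    as in the example record ("Yes" -> "option Y").
    """
    return record.get("conditions", {}).get(f"option {answer[0].upper()}", [])

print(follow_up_ids(record, record["gt_answer"]))
# -> ['beard_q1', 'beard_q2', 'beard_q3', 'beard_q4']
```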

## Citation

```bibtex
@inproceedings{wang2025facebench,
  title={FaceBench: A Multi-View Multi-Level Facial Attribute VQA Dataset for Benchmarking Face Perception MLLMs},
  author={Wang, Xiaoqin and Ma, Xusen and Hou, Xianxu and Ding, Meidan and Li, Yudong and Chen, Junliang and Chen, Wenting and Peng, Xiaoyang and Shen, Linlin},
  booktitle={Proceedings of the Computer Vision and Pattern Recognition Conference},
  pages={9154--9164},
  year={2025}
}
```