X-AIGD

X-AIGD is a fine-grained benchmark designed for eXplainable AI-Generated image Detection. It provides pixel-level human annotations of perceptual artifacts in AI-generated images, spanning low-level distortions, high-level semantics, and cognitive-level counterfactuals. The benchmark aims to advance robust and explainable methods for AI-generated image detection.

For more details, please refer to our paper: Unveiling Perceptual Artifacts: A Fine-Grained Benchmark for Interpretable AI-Generated Image Detection.

🎨 Artifact Taxonomy

We define a comprehensive artifact taxonomy comprising 3 levels and 7 specific categories to capture the diverse range of perceptual artifacts in AI-generated images; a machine-readable version of the mapping is sketched after the list.

  • Low-level Distortions: low-level-edge_shape, low-level-texture, low-level-color, low-level-symbol.
  • High-level Semantics: high-level-semantics.
  • Cognitive-level Counterfactuals: cognitive-level-commonsense, cognitive-level-physics.
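
For programmatic filtering, the taxonomy above can be written as a small Python mapping. This is a minimal sketch: the level keys are informal names chosen here for illustration, while the category strings are exactly the label values listed above.

# Taxonomy levels -> artifact category strings (as stored in the `labels` field)
ARTIFACT_TAXONOMY = {
    "low-level": [
        "low-level-edge_shape",
        "low-level-texture",
        "low-level-color",
        "low-level-symbol",
    ],
    "high-level": ["high-level-semantics"],
    "cognitive-level": ["cognitive-level-commonsense", "cognitive-level-physics"],
}

# Reverse lookup: artifact category -> taxonomy level
LABEL_TO_LEVEL = {
    label: level
    for level, labels in ARTIFACT_TAXONOMY.items()
    for label in labels
}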

🚀 Dataset Contents

This repository currently hosts the pixel-level annotated subset of X-AIGD, which includes over 18,000 artifact instances across 3,000+ labeled samples, along with large-scale unlabeled splits.

Note on Dataset Status:

  • labeled_train, labeled_test, unlabeled_train, and unlabeled_test splits are currently available.
  • Real images are planned for upcoming release.

Data Fields

  • image: The AI-generated image (PNG or JPEG format).
  • generator: Name of the text-to-image generator.
  • uid: Unique identifier for the image.
  • labels: List of human-annotated artifacts, each containing:
    • label: Category of the artifact (e.g., low-level-edge_shape, high-level-semantics).
    • points: Polygon coordinates [[x1, y1], [x2, y2], ...] localizing the artifact (see the rasterization sketch after this list).
  • original_prompt, positive_prompt, negative_prompt: Text prompts used for generation.
  • num_inference_steps, guidance_scale, seed, scheduler: Generation parameters.
  • width, height: Image resolution.
  • image_format, jpeg_quality, chroma_subsampling: Image compression details.
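
To illustrate the labels format, the sketch below rasterizes the annotated polygons of one sample into a binary mask with Pillow and NumPy. It assumes the points values are absolute pixel coordinates at the image's native resolution; if they turn out to be normalized to [0, 1], scale them by width and height first.

import numpy as np
from PIL import Image, ImageDraw

def labels_to_mask(sample):
    # Rasterize all annotated polygons of one sample into a single binary mask
    # (assumes points are absolute pixel coordinates)
    width, height = sample["width"], sample["height"]
    mask = Image.new("L", (width, height), 0)
    draw = ImageDraw.Draw(mask)
    for artifact in sample["labels"]:
        polygon = [(x, y) for x, y in artifact["points"]]
        if len(polygon) < 3:
            continue  # skip degenerate annotations
        draw.polygon(polygon, outline=1, fill=1)
    return np.array(mask, dtype=bool)

The resulting boolean mask can serve, for example, as a supervision target for artifact localization.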

UID Correspondence

Each AI-generated (fake) image is produced from the caption of a real image and inherits its uid from the corresponding real-image metadata entry. Because the same uid is reused across generators, images that share the same semantic source can be paired and compared directly across the fake splits.
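
A minimal sketch of grouping samples by uid, using the same load_dataset call shown in the Usage Example below. It assumes images from several generators appear in the split you load; if generators are distributed across separate splits or configs, load and concatenate them first.

from collections import defaultdict
from datasets import load_dataset

ds = load_dataset("Coxy7/X-AIGD", split="labeled_test")

# Group row indices by uid; each group holds images (possibly from different
# generators) conditioned on the same real-image caption.
by_uid = defaultdict(list)
for idx, (uid, gen) in enumerate(zip(ds["uid"], ds["generator"])):
    by_uid[uid].append((gen, idx))

# Inspect one uid group
uid, group = next(iter(by_uid.items()))
print(uid, [gen for gen, _ in group])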

📖 Usage Example

from datasets import load_dataset

# Load the labeled test split (AI-generated images with artifact annotations)
ds = load_dataset("Coxy7/X-AIGD", split="labeled_test")

# Access an example
sample = ds[0]
print(f"Generator: {sample['generator']}")
print(f"UID: {sample['uid']}")

# Access artifact labels and polygon localization
for artifact in sample["labels"]:
    print(f"Artifact category: {artifact['label']}")
    print(f"Polygon points: {artifact['points']}")

# The image is a PIL object
# sample["image"].show()
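
Building on the snippet above, a quick way to tally how often each artifact category appears in the split (iterating over the labels column only keeps the images from being decoded):

from collections import Counter

counts = Counter(
    artifact["label"]
    for labels in ds["labels"]
    for artifact in labels
)
for label, n in counts.most_common():
    print(f"{label}: {n}")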

πŸ“ Citation

If you find our work useful in your research, please consider citing:

@article{xiao2026unveiling,
  title={Unveiling Perceptual Artifacts: A Fine-Grained Benchmark for Interpretable AI-Generated Image Detection},
  author={Xiao, Yao and Chen, Weiyan and Chen, Jiahao and Cao, Zijie and Deng, Weijian and Yang, Binbin and Dong, Ziyi and Ji, Xiangyang and Ke, Wei and Wei, Pengxu and Lin, Liang},
  journal={arXiv preprint arXiv:2601.19430},
  year={2026}
}

📄 License

The dataset is released under the CC BY 4.0 license.
