EgoXR-GUI: Benchmarking GUI Grounding in Physical–Digital Extended Reality

EgoXR-GUI is the first GUI grounding benchmark specific to extended reality (XR). Unlike traditional desktop or mobile GUI benchmarks, EgoXR-GUI evaluates whether multimodal large language models (MLLMs) can reason effectively about virtual interfaces embedded within hybrid physical–digital environments.

Overview

  • Dataset Size: 1,070 carefully curated examples. (The benchmark was distilled from a larger pool of internal annotations; the public release contains exactly 1,070 validated, high-quality grounding instructions spanning diverse spatial scenarios.)
  • Platform: Apple Vision Pro and other 3D/XR environments.
  • Task Types:
    1. Direct Grounding: Identifying a UI element that the instruction names directly.
    2. Spatial Grounding: Reasoning about UI elements based on 3D spatial properties.
    3. Semantic Grounding: Reasoning based on the text or icon semantics of the UI elements.
  • Languages Supported: English (instruction_en) and Chinese (instruction_cn).

Data Fields

Each example contains the following fields:

  • task_id & annotation_id: Unique identifiers for tracking the specific visual task.
  • sample_id: An external sample identifier linking back to the original data source.
  • image: The egocentric view captured from the XR headset/environment.
  • instruction_en: The grounding prompt in English.
  • instruction_cn: The grounding prompt in Chinese.
  • gaze_point: The tracked eye gaze coordinate [x, y] representing the user's attention.
  • choices: A structured dictionary of context tags:
    • is_same_window
    • ui_type
    • platform
    • scenario
    • place
    • activity
    • task type
  • target_bbox: The ground-truth target geometry, containing x, y, width, height, spatial rotation, and string labels.
  • objects: The bounding box in the Hugging Face standardized format, used for Dataset Viewer visualization.
  • is_ok: A boolean quality-control flag.
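The fields above can be combined for a simple grounding evaluation. As a minimal sketch (the record values and the helper name `point_in_bbox` below are illustrative assumptions, not part of any official tooling for this dataset), a model prediction can be scored by checking whether its predicted click point falls inside `target_bbox`:

```python
def point_in_bbox(point, bbox):
    """Return True if an (x, y) point lies inside an axis-aligned bbox.

    `bbox` follows the card's target_bbox fields: x, y, width, height.
    Spatial rotation is ignored in this sketch.
    """
    px, py = point
    return (bbox["x"] <= px <= bbox["x"] + bbox["width"]
            and bbox["y"] <= py <= bbox["y"] + bbox["height"])

# Hypothetical example record (values are illustrative, not taken from the dataset).
example = {
    "gaze_point": [1920.0, 1080.0],
    "target_bbox": {"x": 1800, "y": 1000, "width": 300, "height": 200},
}

# A prediction counts as correctly grounded if its click point hits the target box.
predicted_click = (1900.0, 1100.0)
hit = point_in_bbox(predicted_click, example["target_bbox"])
```

The same helper can also relate `gaze_point` to the target, e.g. to measure how often the user's gaze already rests on the element being grounded.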