---
license: mit
task_categories:
- visual-question-answering
- text-to-image
language:
- en
tags:
- vision-language
- vqa
- multimodal
- question-answering
size_categories:
- n<1K
---

# SimpleVQA Dataset

SimpleVQA is a small vision-language question-answering dataset designed for testing and reproducing vision-language model training pipelines. It contains 128 samples, each pairing an image with a question-answer exchange in conversational format.

## Dataset Description

- **Repository**: [JosephFace/simpleVQA](https://huggingface.co/datasets/JosephFace/simpleVQA)
- **Paper**: N/A
- **Point of Contact**: N/A

### Dataset Summary

SimpleVQA is a lightweight dataset containing 128 vision-language question-answering samples. Each sample includes:
- An image (512x512 RGB)
- A conversation with user questions and assistant answers
- Image paths for reference

This dataset is suitable for:
- Testing vision-language model training pipelines
- Reproducing experimental results
- Educational purposes and quick prototyping

### Supported Tasks

- **Visual Question Answering (VQA)**: Answer questions about image content
- **Image Description**: Generate descriptions of image content
- **Multimodal Conversation**: Engage in conversations about images

### Languages

The dataset is primarily in English.

## Dataset Structure

### Data Fields

Each sample contains the following fields:

- **messages**: List of conversation turns
  - `role`: "user" or "assistant"
  - `content`: Text content of the message
- **image**: PIL Image object (RGB format, 512x512)
- **image_path**: Original image file path

### Data Splits

- **train**: 128 samples

### Example

```python
from datasets import load_dataset

dataset = load_dataset("JosephFace/simpleVQA")

# Access a sample
sample = dataset["train"][0]
print(sample["messages"])
# [
#   {"role": "user", "content": "What is shown in this image?"},
#   {"role": "assistant", "content": "This is sample image 0 from the SimpleVQA dataset."}
# ]

print(sample["image"])  # PIL Image object
print(sample["image_path"])  # "images/image_00000.jpg"
```

## Usage

### Load from HuggingFace Hub

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("JosephFace/simpleVQA")

# Or load specific split
train_dataset = load_dataset("JosephFace/simpleVQA", split="train")
```

### Load Locally

If you have the dataset files locally:

```python
from datasets import load_from_disk

# Load from Arrow format
dataset = load_from_disk("path/to/hf_dataset")

# Or load from JSONL
from datasets import load_dataset
dataset = load_dataset("json", data_files="simpleVQA_128.jsonl", split="train")
```
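Note that the JSONL copy presumably stores `image_path` strings rather than decoded pixels; if so, the path column can be cast to the `datasets` `Image` feature so files decode lazily on access (a sketch, assuming the JSONL sits alongside the `images/` directory):

```python
from datasets import load_dataset, Image

dataset = load_dataset("json", data_files="simpleVQA_128.jsonl", split="train")
# Decode the stored file paths into PIL images on access.
dataset = dataset.cast_column("image_path", Image())
```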

### Use with Training Pipeline

```python
from datasets import load_dataset
from veomni.data.dataset import MappingDataset

# Load dataset
hf_dataset = load_dataset("JosephFace/simpleVQA", split="train")

# Use with VeOmni training pipeline
dataset = MappingDataset(data=hf_dataset, transform=your_transform_function)
```
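Here, `your_transform_function` is a placeholder. A minimal sketch of what it might look like, assuming a HuggingFace-style `AutoProcessor` (the checkpoint and the `example_transform` name are illustrative, not part of VeOmni):

```python
from transformers import AutoProcessor

# Illustrative checkpoint; substitute the processor matching your model.
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-2B-Instruct")

def example_transform(sample):
    # Each sample holds one user question and one assistant answer (see Data Fields).
    question = sample["messages"][0]["content"]
    answer = sample["messages"][1]["content"]
    inputs = processor(text=question, images=sample["image"], return_tensors="pt")
    # Tokenize the answer as supervision targets.
    inputs["labels"] = processor.tokenizer(answer, return_tensors="pt")["input_ids"]
    return inputs
```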

## Dataset Statistics

- **Total samples**: 128
- **Image format**: JPEG, 512x512 RGB
- **Average conversation turns**: 2 (1 user question + 1 assistant answer)
- **Total images**: 128
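
These figures can be checked directly from the loaded dataset:

```python
from datasets import load_dataset

ds = load_dataset("JosephFace/simpleVQA", split="train")
print(len(ds))                 # 128 samples
print(ds[0]["image"].size)     # (512, 512)
print(len(ds[0]["messages"]))  # 2 turns: user question + assistant answer
```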

## Limitations

- Small dataset size (128 samples); suitable for testing only
- Synthetic/placeholder images, not real-world data
- Limited question diversity
- Primarily English-language content

## Citation

```bibtex
@dataset{josephface_simplevqa,
  title={SimpleVQA: A Simple Vision-Language Question-Answering Dataset},
  author={JosephFace},
  year={2025},
  url={https://huggingface.co/datasets/JosephFace/simpleVQA}
}
```

## License

This dataset is released under the MIT License.