---
dataset_info:
- config_name: hi
  features:
  - name: Article Title
    dtype: string
  - name: Entity Name
    dtype: string
  - name: Wikidata ID
    dtype: string
  - name: English Wikipedia Title
    dtype: string
  - name: Image Name
    dtype: image
  splits:
  - name: train
    num_bytes: 51118097.546
    num_examples: 1414
  download_size: 29882467
  dataset_size: 51118097.546
- config_name: id
  features:
  - name: Article Title
    dtype: string
  - name: Entity Name
    dtype: string
  - name: Wikidata ID
    dtype: string
  - name: English Wikipedia Title
    dtype: string
  - name: Image Name
    dtype: image
  splits:
  - name: train
    num_bytes: 52546850.192
    num_examples: 1428
  download_size: 32136412
  dataset_size: 52546850.192
- config_name: ja
  features:
  - name: Article Title
    dtype: string
  - name: Entity Name
    dtype: string
  - name: Wikidata ID
    dtype: string
  - name: English Wikipedia Title
    dtype: string
  - name: Image Name
    dtype: image
  splits:
  - name: train
    num_bytes: 62643647.72
    num_examples: 1720
  download_size: 35163853
  dataset_size: 62643647.72
- config_name: ta
  features:
  - name: Article Title
    dtype: string
  - name: Entity Name
    dtype: string
  - name: Wikidata ID
    dtype: string
  - name: English Wikipedia Title
    dtype: string
  - name: Image Name
    dtype: image
  splits:
  - name: train
    num_bytes: 44337774.542
    num_examples: 1254
  download_size: 30111872
  dataset_size: 44337774.542
- config_name: vi
  features:
  - name: Article Title
    dtype: string
  - name: Entity Name
    dtype: string
  - name: Wikidata ID
    dtype: string
  - name: English Wikipedia Title
    dtype: string
  - name: Image Name
    dtype: image
  splits:
  - name: train
    num_bytes: 46272154.251
    num_examples: 1343
  download_size: 27669139
  dataset_size: 46272154.251
configs:
- config_name: hi
  data_files:
  - split: train
    path: hi/train-*
- config_name: id
  data_files:
  - split: train
    path: id/train-*
- config_name: ja
  data_files:
  - split: train
    path: ja/train-*
- config_name: ta
  data_files:
  - split: train
    path: ta/train-*
- config_name: vi
  data_files:
  - split: train
    path: vi/train-*
---

# MERLIN Dataset Card

## Dataset Description

### Overview
MERLIN (Multilingual Multimodal Entity Recognition and Linking) is a test dataset for evaluating multilingual entity linking systems with multimodal inputs. It consists of **BBC news article titles** in five languages, each paired with an associated image and entity annotations. The dataset contains **7,287 entity mentions** linked to **2,480 unique Wikidata entities**, covering a wide range of categories (persons, locations, organizations, events, etc.).

### Supported Tasks
- **Multimodal Entity Linking** – disambiguating entity mentions using both text and images.  
- **Cross-lingual Entity Linking** – linking mentions in one language to Wikidata entities regardless of language.  
- **Named Entity Recognition** – identifying entity mentions in non-English news titles.  

### Languages
- Hindi  
- Japanese  
- Indonesian  
- Vietnamese  
- Tamil

### Data Instances
Each instance in the dataset contains:

```json
{
  "Article_Title": "बिहार: केंद्रीय मंत्री अश्विनी चौबे के बेटे अर्जित 'गिरफ्तार'",
  "Entity_Name": "अश्विनी चौबे",
  "Wikidata_ID": "Q16728021",
  "English_Wikipedia_Title": "Ashwini Kumar Choubey",
  "Image_Name": "<GCS_URL>"
}
```

### Data Fields
- **Article Title**: News article title in its original language (string)  
- **Entity Name**: Entity mention in the same language (string)  
- **Wikidata ID**: Wikidata identifier for the entity (string)  
- **English Wikipedia Title**: English Wikipedia page title (string)  
- **Image Name**: Associated image (stored as an `image` feature)  

### Data Splits
- The dataset is intended **only for evaluation**, with **5,000 article titles** (1,000 per language). On the Hugging Face Hub, each language config exposes its data under a single split named `train`; a minimal loading sketch follows.
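
For reference, here is a minimal loading sketch. The repo id below is a **placeholder** (an assumption for illustration); substitute this dataset's actual Hub path.

```python
from datasets import load_dataset

# "<hub-repo-id>" is a placeholder for this dataset's Hugging Face Hub path.
# Each language is a separate config (hi, id, ja, ta, vi); the data is hosted
# under a single split named "train" even though it is evaluation-only.
ds = load_dataset("<hub-repo-id>", "hi", split="train")

row = ds[0]
print(row["Article Title"], row["Wikidata ID"])  # feature names contain spaces
```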

---

## Dataset Creation

### Source Data
- Derived from the **M3LS dataset** (Verma et al., 2023), which was curated from **BBC News articles** spanning over a decade.  
- Articles include categories like politics, sports, economy, science, and technology.  
- Each article includes a **headline and an associated image**.

### Annotations
- **Tool used**: INCEpTION annotation platform.  
- **Knowledge base**: Wikidata.  
- **Process**:  
  - Annotators highlighted entity mentions in article titles and linked them to Wikidata entries.  
  - Each title was annotated by **three annotators**, with **majority voting** used for final selection (see the sketch below).  
  - Annotators were recruited via **Prolific** with prescreening (required F1 ≥ 60% on English pilot tasks).  
- **Agreement**: Average inter-annotator Cohen’s Kappa ≈ **0.83** (almost perfect agreement).  
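
To make the aggregation concrete, here is a minimal majority-voting sketch. The helper and its inputs are hypothetical illustrations, not part of the released tooling:

```python
from collections import Counter

def majority_vote(labels):
    """Return the Wikidata QID chosen by at least 2 of 3 annotators, else None.

    `labels` holds the QIDs (or None for "no link") that the three
    annotators assigned to the same mention. (Hypothetical helper.)
    """
    label, count = Counter(labels).most_common(1)[0]
    return label if count >= 2 else None

# Two of three annotators agree, so the majority label wins.
print(majority_vote(["Q16728021", "Q16728021", "Q42"]))  # -> Q16728021
```

The reported agreement statistic can be reproduced pairwise (e.g., with `sklearn.metrics.cohen_kappa_score`) and averaged over annotator pairs.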

---

## Dataset Structure

### Data Statistics
- **Total article titles**: 5,000  
- **Total mentions**: 7,287  
- **Unique entities**: 2,480  
- **Languages covered**: Hindi, Japanese, Indonesian, Tamil, Vietnamese  
- **Avg. words per title**: ~11.1  
- **Unlinked mentions**: 1,243 (excluded from benchmark tasks)  
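
Several of these statistics can be recomputed directly from the Hub data; a sketch (again with a placeholder repo id):

```python
from datasets import load_dataset

CONFIGS = ["hi", "id", "ja", "ta", "vi"]
rows, unique_qids = 0, set()
for cfg in CONFIGS:
    ds = load_dataset("<hub-repo-id>", cfg, split="train")  # placeholder repo id
    unique_qids.update(q for q in ds["Wikidata ID"] if q)   # skip empty/unlinked entries
    rows += len(ds)

print(f"{rows} mention rows, {len(unique_qids)} unique Wikidata entities")
```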

---

## Curation Rationale
MERLIN was created to provide the **first multilingual multimodal entity linking benchmark**, addressing the gap where existing datasets are either monolingual or text-only. It enables studying how images can resolve ambiguity in entity mentions, especially in **low-resource languages**.

---

## Considerations for Using the Data

### Social Impact
- Supports **fairer multilingual NLP research** by including low-resource languages (Tamil, Vietnamese).  
- Encourages the development of models that reason over both text and images.  

### Discussion of Biases
- All data is from **BBC News**, limiting genre diversity.  
- Annotators’ **background knowledge** and **language proficiency** may introduce subtle biases.  
- **Wikidata coverage bias**: entities absent from Wikidata were excluded (≈17% of mentions unlinked).  

### Other Known Limitations
- Domain restriction (news only).  
- Focused on entity mentions in headlines, not longer text.  
- Baseline methods link to **Wikipedia titles** rather than directly to **Wikidata QIDs**; a title-to-QID mapping sketch follows below.  
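
Since baselines predict English Wikipedia titles, scoring against the gold Wikidata IDs requires a title-to-QID mapping. One way is via the public MediaWiki API; this mapping step is an illustrative assumption, not part of the dataset release:

```python
import requests

def title_to_qid(title):
    """Resolve an English Wikipedia title to its Wikidata QID via the MediaWiki API."""
    resp = requests.get(
        "https://en.wikipedia.org/w/api.php",
        params={
            "action": "query",
            "prop": "pageprops",
            "ppprop": "wikibase_item",
            "redirects": 1,
            "titles": title,
            "format": "json",
        },
        timeout=10,
    )
    # The API returns one page entry; its pageprops carry the Wikidata item id.
    for page in resp.json()["query"]["pages"].values():
        return page.get("pageprops", {}).get("wikibase_item")

print(title_to_qid("Ashwini Kumar Choubey"))  # -> Q16728021 (per the example above)
```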

---

## Additional Information

### Dataset Curators
- Carnegie Mellon University (CMU)  
- Defence Science and Technology Agency, Singapore  

### Licensing Information
- The dataset is released for **research purposes only**, under the license specified in the [GitHub repository](https://github.com/rsathya4802/merlin).  

### Citation Information
If you use MERLIN, cite:  
**Ramamoorthy, S., Shah, V., Khanuja, S., Sheikh, Z., Jie, S., Chia, A., Chua, S., & Neubig, G. (2025). MERLIN: A Testbed for Multilingual Multimodal Entity Recognition and Linking. Transactions of the Association for Computational Linguistics.**  

### Contributions
Community contributions can be made via the [MERLIN GitHub repo](https://github.com/rsathya4802/merlin).  

---

## Related Work
This dataset can be benchmarked with:  
- **mGENRE** (Multilingual Generative Entity Retrieval) [Repo](https://huggingface.co/facebook/mgenre-wiki) (usage sketch below)
- **GEMEL** (Generative Multimodal Entity Linking) [Repo](https://github.com/HITsz-TMG/GEMEL)
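
As an illustration, here is a minimal mGENRE sketch adapted from the `facebook/mgenre-wiki` model card: the mention is marked with [START]/[END], and candidates come back as "Title >> language code" strings. The Hindi input reuses the dataset instance shown above:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("facebook/mgenre-wiki")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/mgenre-wiki")

# Mark the mention to disambiguate with [START] ... [END].
sentences = ["बिहार: केंद्रीय मंत्री [START] अश्विनी चौबे [END] के बेटे अर्जित 'गिरफ्तार'"]

outputs = model.generate(
    **tokenizer(sentences, return_tensors="pt"),
    num_beams=5,
    num_return_sequences=5,
)
# Candidates look like "Ashwini Kumar Choubey >> en".
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```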