---
license: cc-by-4.0
task_categories:
- text-classification
- token-classification
language:
- sa
- bo
pretty_name: DharmaBench
---
# Dataset Card for DharmaBench

<!-- Provide a quick summary of the dataset. -->

## Dataset Details

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->

**DharmaBench** is a multi-task benchmark suite for evaluating large language models (LLMs) on **classification** and **detection** tasks in historical Buddhist texts written in **Sanskrit** and **Classical Tibetan**.  
It contains **13 tasks** (6 Sanskrit, 7 Tibetan), with **4 tasks shared across both languages**, designed to measure linguistic, cultural, and structural understanding in low-resource, ancient-language contexts.

The benchmark includes tasks such as metaphor and simile detection, quotation detection, verse/prose classification, metre classification, and root-text/commentary alignment. These reflect key challenges faced by philologists, historians of philosophy and religion, and digital humanities researchers studying Buddhist textual traditions.
For the exact definition and description of the tasks, please see the repository or the paper.

- **Curated by:** Intellexus Project (Kai Golan Hashiloni et al.)
- **Funded by:** This study is supported in part by the European Research Council (Intellexus, Project No.101118558).
- **Shared by:** Intellexus Project
- **Language(s):** Sanskrit (`sa`), Classical Tibetan (`bo`)
- **License:** CC BY 4.0

### Dataset Sources 

<!-- Provide the basic links for the dataset. -->

- **Repository:** [https://github.com/Intellexus-DSI/DharmaBench](https://github.com/Intellexus-DSI/DharmaBench)
- **Paper:** [DharmaBench: Evaluating Language Models on Buddhist Texts in Sanskrit and Tibetan](https://aclanthology.org/2025.ijcnlp-long.114/)
- **Demo:** Not available


## Uses

<!-- Address questions around how the dataset is intended to be used. -->

### Direct Use

<!-- This section describes suitable use cases for the dataset. -->

DharmaBench can be used to:
- Evaluate multilingual or low-resource LLMs on culturally and linguistically rich ancient-language data.  
- Benchmark Sanskrit and Classical Tibetan performance across a variety of classification and detection tasks.  
- Support philologists and digital humanists in semi-automating annotation, quotation tracing, or commentary alignment.

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->

- None

## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

- Each task is located under either `Sanskrit/` or `Tibetan/`, with files such as `train.json` and `test.json`, based on availability.
- Each task has a slightly different schema and set of columns.
- All data is standardized and formatted for text- and token-level tasks.
  
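Since each task ships as plain JSON files under its language directory, a minimal loader might look like the sketch below. The directory layout (`<language>/<task>/<split>.json`) and the shape of the records are assumptions based on the description above, not guarantees about the dataset; check the repository for each task's actual schema.

```python
import json
from pathlib import Path


def load_task(task_dir: str, split: str = "test"):
    """Load one DharmaBench task split.

    Assumes (hypothetically) that each task directory contains
    files named <split>.json, e.g. Sanskrit/<task>/test.json.
    Returns the parsed JSON content as-is, since each task
    has a slightly different schema.
    """
    path = Path(task_dir) / f"{split}.json"
    with path.open(encoding="utf-8") as f:
        return json.load(f)
```

For example, `load_task("Sanskrit/metre_classification")` would return the test split of that task, assuming the hypothetical layout holds.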
## Dataset Creation

### Curation Rationale

<!-- Motivation for the creation of this dataset. -->

The dataset was created to enable **systematic benchmarking of LLMs on Sanskrit and Classical Tibetan**, languages central to Buddhist textual transmission yet underrepresented in NLP.
It supports evaluation of linguistic understanding, structural analysis, and cultural reasoning.

### Source Data

<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->

#### Data Collection and Processing

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->

Texts were sourced from public-domain Buddhist corpora, including digitized canonical and commentarial materials.
Data were cleaned, normalized, and manually aligned where necessary.
Problematic or ambiguous samples were discussed collaboratively and excluded when consensus could not be reached.

#### Who are the source data producers?

<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->

Original texts were produced by Buddhist scholars between the 1st millennium BCE and 19th century CE.
Open-source initiatives and Buddhist textual archives prepared digital transcriptions.

### Annotations [optional]

<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->

#### Annotation process

<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->

Domain experts in Sanskrit and Classical Tibetan studies carried out annotations.
Ambiguities and inconsistencies were discussed collaboratively, and annotation guidelines were iteratively refined. 
Disagreements were resolved through group discussion or by excluding samples when consensus was not possible.

#### Who are the annotators?

<!-- This section describes the people or systems who created the annotations. -->

Annotators were scholars and research assistants from the Intellexus Project, with backgrounds in Buddhist studies, linguistics, and computational linguistics.

#### Personal and Sensitive Information

<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->

No personal or sensitive information is contained in the dataset.
All texts are historical and in the public domain.

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

The dataset represents canonical and scholastic Buddhist materials and may not generalize to colloquial or modern-language use.
Biases inherent in the source texts (e.g., religious, philosophical, or gender-related perspectives) are preserved to maintain their historical authenticity.

Tasks with very short textual inputs can sometimes be resolved through formal cues (e.g., punctuation, structure) rather than deep understanding.
### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users should be aware of the dataset's risks, biases, and limitations, interpret model performance cautiously, and avoid overgeneralizing results.
DharmaBench is best used for comparative evaluation and fine-tuning in controlled research settings.

## Citation 

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

If you use DharmaBench in your research, please cite:

**BibTeX:**

```bibtex
@inproceedings{hashiloni-etal-2025-dharmabench,
    title = "{D}harma{B}ench: Evaluating Language Models on Buddhist Texts in {S}anskrit and {T}ibetan",
    author = "Hashiloni, Kai Golan  and
      Cohen, Shay  and
      Shina, Asaf  and
      Yang, Jingyi  and
      Zwebner, Orr Meir  and
      Bajetta, Nicola  and
      Bilitski, Guy  and
      Sund{\'e}n, Rebecca  and
      Maduel, Guy  and
      Conlon, Ryan  and
      Barzilai, Ari  and
      Mass, Daniel  and
      Jia, Shanshan  and
      Naaman, Aviv  and
      Choden, Sonam  and
      Jamtsho, Sonam  and
      Qu, Yadi  and
      Isaacson, Harunaga  and
      Wangchuk, Dorji  and
      Fine, Shai  and
      Almogi, Orna  and
      Bar, Kfir",
    editor = "Inui, Kentaro  and
      Sakti, Sakriani  and
      Wang, Haofen  and
      Wong, Derek F.  and
      Bhattacharyya, Pushpak  and
      Banerjee, Biplab  and
      Ekbal, Asif  and
      Chakraborty, Tanmoy  and
      Singh, Dhirendra Pratap",
    booktitle = "Proceedings of the 14th International Joint Conference on Natural Language Processing and the 4th Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics",
    month = dec,
    year = "2025",
    address = "Mumbai, India",
    publisher = "The Asian Federation of Natural Language Processing and The Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.ijcnlp-long.114/",
    pages = "2088--2110",
    ISBN = "979-8-89176-298-5",
    abstract = "We assess the capabilities of large language models on tasks involving Buddhist texts written in Sanskrit and Classical Tibetan{---}two typologically distinct, low-resource historical languages. To this end, we introduce DharmaBench, a benchmark suite comprising 13 classification and detection tasks grounded in Buddhist textual traditions: six in Sanskrit and seven in Tibetan, with four shared across both. The tasks are curated from scratch, tailored to the linguistic and cultural characteristics of each language. We evaluate a range of models, from proprietary systems like GPT-4o to smaller, domain-specific open-weight models, analyzing their performance across tasks and languages. All datasets and code are publicly released, under the CC-BY-4 License and the Apache-2.0 License respectively, to support research on historical language processing and the development of culturally inclusive NLP systems."
}
```

**APA:**

Hashiloni, K. G., Cohen, S., Shina, A., Yang, J., Zwebner, O. M., Bajetta, N., ... Bar, K. (2025). DharmaBench: Evaluating language models on Buddhist texts in Sanskrit and Tibetan. In *Proceedings of the 14th International Joint Conference on Natural Language Processing and the 4th Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics* (pp. 2088–2110).


## Dataset Card Authors 

Kai Golan Hashiloni (Intellexus Project)
With contributions from the Intellexus Sanskrit and Tibetan research teams.

## Dataset Card Contact
For questions or contributions: [kai.golanhashiloni@post.runi.ac.il](mailto:kai.golanhashiloni@post.runi.ac.il?subject=DharmaBench)