An Analysis of Active Learning Algorithms using Real-World Crowd-sourced Text Annotations
Paper and repository
This repository accompanies the paper "An Analysis of Active Learning Algorithms using Real-World Crowd-sourced Text Annotations" (accepted to WCCI 2026 - IJCNN) and contains the cleaned datasets, the collected annotator files, and scripts to reproduce the experimental splits used in the study.
Datasets (summary)
We used three benchmark text-classification datasets:
- AG News (4 classes): world, sports, business, science/technology.
- Consumer Complaints (6 classes): debt collection, prepaid card/debit card, mortgage, checking/savings account, student loan, vehicle loan/lease. (Source: https://catalog.data.gov/dataset/consumer-complaint-database)
- Wikipedia Movie Plots (4 classes, movie genres): drama, comedy, horror, action. (Source: https://www.kaggle.com/datasets/jrobischon/wikipedia-movie-plots)
For each dataset we randomly sampled 3,000 examples and collected annotations from multiple distinct workers via Upwork/Amazon Mechanical Turk (10 annotators per sampled set; nine annotator files are included for the Wikipedia set in this repo). Annotators could abstain by entering label 0. The protocol received ethical review and participant consent; no worker identities were recorded.
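As a rough illustration of the abstain convention, the sketch below aggregates the per-annotator labels for one dataset by majority vote, skipping abstentions (label 0). The "Index"/"Labels" column names and the .xlsx glob pattern are assumptions taken from the descriptions in this README, not from the released scripts.

# Hedged sketch: majority vote over annotator files, ignoring abstentions (label 0).
# Column names ("Index", "Labels") and the file pattern are assumptions from this README.
from collections import Counter
from pathlib import Path

import pandas as pd

def majority_vote(annotation_dir):
    """Return {Index: majority label}, counting only non-abstain votes."""
    votes = {}
    for path in sorted(Path(annotation_dir).glob("*.xlsx")):
        df = pd.read_excel(path)
        for idx, label in zip(df["Index"], df["Labels"]):
            if label != 0:  # label 0 means the annotator abstained
                votes.setdefault(idx, []).append(label)
    return {idx: Counter(v).most_common(1)[0][0] for idx, v in votes.items()}

# Illustrative usage:
# labels = majority_vote("Data_AGNewsGroups/Annotations")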
Folder structure & key files
IJCNN_2026/
- Contains the Supplemental file for the IJCNN 2026 main paper
Data_AGNewsGroups/
- Cleaned_AG_News_Dataset_3_columns_ALL.xlsx – Main cleaned dataset (Index, Description, Class Index).
- Annotations/ – Human annotation files (AG_Upwork_*.xlsx).
Data_ConsumerComplaints/
- Cleaned_Dataset_All.xlsx – Main cleaned dataset (Index, Consumer complaint narrative, Product).
- Annotations/ – Human annotation files (CC_Upwork_*.xlsx).
Data_WikipediaMoviePlots/
- Cleaned_Dataset_All.csv – Main cleaned dataset (Index, Plot, Genre).
- Annotations/ – Human annotation files (Wiki_Upwork_*.csv); nine annotator files provided.
All dataset files in this repo use an "Index" column (unique id), a text column standardized as "Description", and a label column standardized as "Labels".
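A minimal joining sketch, assuming the standardized column names above: it loads one cleaned file and one annotator file and merges them on the shared "Index" column. The annotator file name below is illustrative, not a guaranteed path.

# Hedged sketch: join a cleaned dataset with a single annotator file on "Index".
# Assumes the standardized "Index"/"Description"/"Labels" columns described above;
# the annotator file name is illustrative.
import pandas as pd

clean = pd.read_excel("Data_AGNewsGroups/Cleaned_AG_News_Dataset_3_columns_ALL.xlsx")
annot = pd.read_excel("Data_AGNewsGroups/Annotations/AG_Upwork_1.xlsx")

merged = clean.merge(annot[["Index", "Labels"]], on="Index", how="left",
                     suffixes=("", "_annotator"))
print(merged.head())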
Directory structure
A concise view of the repository layout:
.
├── README.md
├── IJCNN Paper/
│   └── IJCNN_2026_Supplemental.pdf
├── Data_AGNewsGroups/
│   ├── Cleaned_AG_News_Dataset_3_columns_ALL.xlsx
│   └── Annotations/
├── Data_ConsumerComplaints/
│   ├── Cleaned_Dataset_All.xlsx
│   └── Annotations/
└── Data_WikipediaMoviePlots/
    ├── Cleaned_Dataset_All.csv
    └── Annotations/
Use the provided cleaned files and annotation folders to reproduce the splits and experiments described in the paper. Scripts in the codebase expect the file names and columns described above.
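If loading the Wikipedia plots CSV raises a UnicodeDecodeError under pandas' default UTF-8 codec, a permissive fallback such as the one below can be tried; the file's actual encoding is not documented here, so the Latin-1 choice is only an assumption.

# Hedged sketch: load the Wikipedia plots CSV with an encoding fallback.
# The Latin-1 fallback is an assumption; the true encoding is not documented.
import pandas as pd

path = "Data_WikipediaMoviePlots/Cleaned_Dataset_All.csv"
try:
    wiki = pd.read_csv(path)  # pandas defaults to UTF-8
except UnicodeDecodeError:
    wiki = pd.read_csv(path, encoding="latin-1")  # single-byte fallback, always decodes
print(wiki.columns.tolist())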
Citation & contact
If you use this repository or the collected annotations, please cite the accompanying paper and acknowledge the dataset sources.
Paper citation (BibTeX)
The paper has been accepted to WCCI 2026 - IJCNN (to appear). Use the following BibTeX entry:
@InProceedings{al_rcta2026,
  author    = {Varun Totakura and Ankita Singh and Yushun Dong and Shayok Chakraborty},
  title     = {{An Analysis of Active Learning Algorithms using Real-World Crowd-sourced Text Annotations}},
  booktitle = {Proceedings of the IEEE World Congress on Computational Intelligence (WCCI) -- International Joint Conference on Neural Networks (IJCNN)},
  month     = {June},
  year      = {2026},
  note      = {Accepted for publication}
}
Creative Commons Attribution 4.0 International (CC BY 4.0)
Copyright (c) 2026 Varun Totakura
This work is licensed under the Creative Commons Attribution 4.0 International License.
You are free to:
- Share – copy and redistribute the material in any medium or format
- Adapt – remix, transform, and build upon the material for any purpose, even commercially
Under the following terms:
- Attribution – You must give appropriate credit, provide a link to the license, and indicate if changes were made. You may do so in any reasonable manner, but not in any way that suggests the licensor endorses you or your use.
- No additional restrictions – You may not apply legal terms or technological measures that legally restrict others from doing anything the license permits.
Full license text: https://creativecommons.org/licenses/by/4.0/