# Topical-Chat
We introduce Topical-Chat, a knowledge-grounded human-human conversation dataset where the underlying knowledge spans 8 broad topics and conversation partners don’t have explicitly defined roles.
Topical-Chat broadly consists of two types of files:
- Conversations: JSON files containing conversations between pairs of Amazon Mechanical Turk workers.
- Reading Sets: JSON files containing knowledge sections rendered as reading content to the Turkers having conversations.
For detailed information about the dataset, modeling and benchmarking experiments, and evaluation results, please refer to our paper.
## Dataset
Statistics:
| Stat | Train | Valid Freq | Valid Rare | Test Freq | Test Rare | All |
|---|---|---|---|---|---|---|
| # of conversations | 8628 | 539 | 539 | 539 | 539 | 10784 |
| # of utterances | 188378 | 11681 | 11692 | 11760 | 11770 | 235281 |
| average # of turns per conversation | 21.8 | 21.6 | 21.7 | 21.8 | 21.8 | 21.8 |
| average length of utterance | 19.5 | 19.8 | 19.8 | 19.5 | 19.5 | 19.6 |
Split:
The data is split into 5 distinct groups: train, valid frequent, valid rare, test frequent and test rare. The frequent set contains entities frequently seen in the training set. The rare set contains entities that were infrequently seen in the training set.
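Since each conversations file is a single JSON object keyed by conversation ID, the splits can be loaded with the standard library. A minimal sketch, assuming the per-split file names follow the official GitHub release (train.json, valid_freq.json, valid_rare.json, test_freq.json, test_rare.json); adjust paths to your local copy:

```python
import json

# Split names follow the official GitHub release; the conversations/
# directory layout is an assumption -- adjust to your local copy.
SPLITS = ["train", "valid_freq", "valid_rare", "test_freq", "test_rare"]

conversations = {}
for split in SPLITS:
    with open(f"conversations/{split}.json", encoding="utf-8") as f:
        # Each file is a single JSON object mapping conversation_id
        # to a conversation record.
        conversations[split] = json.load(f)

for split, convs in conversations.items():
    print(f"{split}: {len(convs)} conversations")
```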
Configuration Type:
For each conversation to be collected, we applied a random knowledge configuration from a pre-defined list of configurations, to construct a pair of reading sets to be rendered to the partnered Turkers. Configurations were defined to impose varying degrees of knowledge symmetry or asymmetry between partner Turkers, leading to the collection of a wide variety of conversations.
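For example, a quick sketch of how the configurations are distributed over the training conversations (reusing the conversations dict loaded in the previous snippet):

```python
from collections import Counter

# Count conversations per knowledge configuration (A, B, C or D)
# in the training split loaded above.
config_counts = Counter(conv["config"] for conv in conversations["train"].values())
print(config_counts)
```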
Conversations:
Each JSON file in conversations/ has the following format:
```
{
  <conversation_id>: {
    "article_url": <article url>,
    "config": <config>, # one of A, B, C, D
    "content": [ # ordered list of conversation turns
      {
        "agent": "agent_1", # or "agent_2"
        "message": <message text>,
        "sentiment": <text>,
        "knowledge_source": ["AS1", "Personal Knowledge", ...],
        "turn_rating": "Poor"
      },
      ...
    ],
    "conversation_rating": {
      "agent_1": "Good",
      "agent_2": "Excellent"
    }
  },
  ...
}
```
- conversation_id: A unique identifier for a conversation in Topical-Chat
- article_url: URL pointing to the Washington Post article associated with a conversation
- config: The knowledge configuration applied to obtain a pair of reading sets for a conversation
- content: An ordered list of conversation turns
  - agent: An identifier for the Turker who generated the message
  - message: The message generated by the agent
  - sentiment: Self-annotation of the sentiment of the message
  - knowledge_source: Self-annotation of the section within the agent's reading set used to generate this message
  - turn_rating: Partner-annotation of the quality of the message
- conversation_rating: Self-annotation of the quality of the conversation
  - agent_1: Rating of the conversation by Turker 1
  - agent_2: Rating of the conversation by Turker 2
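To make the schema concrete, here is a small sketch that walks one conversation's turns using the fields documented above (the file path is assumed, as before):

```python
import json

with open("conversations/train.json", encoding="utf-8") as f:  # assumed path
    train = json.load(f)

# Pick an arbitrary conversation and print its turns in order.
conv_id, conv = next(iter(train.items()))
print(conv_id, conv["config"], conv["article_url"])
for turn in conv["content"]:
    print(f'{turn["agent"]} [{turn["sentiment"]}]: {turn["message"]}')
    print("  knowledge_source:", turn["knowledge_source"])
print("conversation_rating:", conv["conversation_rating"])
```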
Reading Sets:
Each JSON file in reading_sets/ has the following fields:
- conversation_id: A unique identifier for a conversation in Topical-Chat
- config: The knowledge configuration applied to obtain a pair of reading sets for a conversation
- agent_{1/2}: Contains the factual sections in this agent's reading set
  - FS{1/2/3}: Identifier for a factual section
    - entity: A real-world entity
    - shortened_wiki_lead_section: A shortened version of the Wikipedia lead section of the entity
    - summarized_wiki_lead_section: A (TextRank) summarized version of the Wikipedia lead section of the entity
    - fun_facts: Crowdsourced and manually curated fun facts about the entity from Reddit's r/todayilearned subreddit
- article: A Washington Post article common to both partners' reading sets
  - url: URL pointing to the Washington Post article associated with a conversation
  - headline: The headline of the Washington Post article
  - AS{1/2/3/4}: A chunk of the body of the Washington Post article
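And a corresponding sketch for the reading sets, pairing a conversation with the knowledge rendered to each agent. The reading_sets/post-build/ path mirrors the official release layout and is an assumption:

```python
import json

# Assumed path, mirroring the official release layout.
with open("reading_sets/post-build/train.json", encoding="utf-8") as f:
    reading_sets = json.load(f)

conv_id = next(iter(reading_sets))
rs = reading_sets[conv_id]

# Factual sections (FS1-FS3) rendered to agent_1 for this conversation.
for fs_id, section in rs["agent_1"].items():
    print(fs_id, "->", section["entity"])
    # Each section carries either a shortened or a summarized lead section.
    wiki = section.get("shortened_wiki_lead_section") or section.get(
        "summarized_wiki_lead_section"
    )
    print(wiki)
    for fact in section.get("fun_facts", []):
        print(" fun fact:", fact)

# The Washington Post article shared by both partners.
print(rs["article"]["url"])
print(rs["article"]["headline"])
```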
## Citation
If you use Topical-Chat in your work, please cite with the following:
```
@inproceedings{gopalakrishnan2019topical,
  author={Karthik Gopalakrishnan and Behnam Hedayatnia and Qinlang Chen and Anna Gottardi and Sanjeev Kwatra and Anu Venkatesh and Raefer Gabriel and Dilek Hakkani-Tür},
  title={{Topical-Chat: Towards Knowledge-Grounded Open-Domain Conversations}},
  year=2019,
  booktitle={Proc. Interspeech 2019},
  pages={1891--1895},
  doi={10.21437/Interspeech.2019-3079},
  url={http://dx.doi.org/10.21437/Interspeech.2019-3079}
}
```
Gopalakrishnan, Karthik, et al. "Topical-Chat: Towards Knowledge-Grounded Open-Domain Conversations." Proc. Interspeech 2019, pp. 1891-1895.
## Acknowledgements
We thank Anju Khatri, Anjali Chadha and Mohammad Shami for their help with the public release of the dataset. We thank Jeff Nunn and Yi Pan for their early contributions to the dataset collection.