HypothesisDrivenExperimentPlanningBench
This dataset contains 200 synthetic ground-truth samples for benchmarking experimental design in kinetic mechanism identification.
Each record corresponds to one sampled benchmark instance and includes:
- model_name: mechanism family identifier (M1 to M20)
- k_vector: sampled kinetic constants
- species_list: species names used in the mechanism
- species_map: mapping from species name to index
- reaction_parameter_keys: parameter names associated with the kinetic model
- config: generation settings
- validation_summary: summary of the benchmark validation procedure
- sample_id, sample_index, source_file: bookkeeping fields for traceability
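The sketch below illustrates how these fields might be accessed once a record has been parsed; the helper name is illustrative and not part of the dataset.

```python
# Illustrative sketch: summarizing the documented fields of one parsed record.
# `record` is assumed to be a dict obtained from one line of the JSONL file (see File below).
def summarize_record(record: dict) -> str:
    """One-line summary of a benchmark instance; field names follow the list above."""
    name = record["model_name"]              # mechanism family identifier
    n_rates = len(record["k_vector"])        # sampled kinetic constants
    n_species = len(record["species_list"])  # species used in the mechanism
    return f"{name}: {n_rates} rate constants, {n_species} species"
```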
File
HypothesisDrivenExperimentPlanningBench.jsonl: one JSON record per line
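A minimal loading sketch, assuming the file is fetched from the hosted dataset listed under Source; huggingface_hub is used here only as one convenient way to download it, and the standard json module suffices because each line is a standalone record.

```python
import json
from huggingface_hub import hf_hub_download

# Fetch the JSONL file from the dataset repository (see Source below).
path = hf_hub_download(
    repo_id="xq8wvm/HypothesisDrivenExperimentPlanningBench",
    filename="HypothesisDrivenExperimentPlanningBench.jsonl",
    repo_type="dataset",
)

# One JSON record per line.
records = []
with open(path, "r", encoding="utf-8") as f:
    for line in f:
        records.append(json.loads(line))

print(f"Loaded {len(records)} benchmark instances")  # expected: 200
```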
Notes
- This is a synthetic benchmark dataset generated from predefined kinetic mechanism templates and simulator-based validation.
- It is intended for benchmarking and methodology research, not as a direct substitute for real laboratory data.
Source
- Hosted dataset: xq8wvm/HypothesisDrivenExperimentPlanningBench
License
This dataset is released under the Creative Commons Attribution 4.0 International license (CC-BY-4.0).