Dataset Preview
The full dataset viewer is not available for this dataset: the automatic Parquet conversion fails with ArrowNotImplementedError: "Cannot write struct type 'attributes' with no child field to Parquet. Consider adding a dummy child field." (The attributes column is always the empty struct {}, which Parquet cannot represent as a struct with no child fields.) Only a preview of the rows is shown below.

The preview rows correspond to the Zarr v3 metadata documents (zarr.json) of the nodes in the store:

shape            | data_type | chunk shape    | node_type
[300001, 16]     | uint8     | [300001, 16]   | array
[300001, 16]     | uint8     | [300001, 16]   | array
[300001, 16]     | uint8     | [300001, 16]   | array
[300001, 1]      | uint8     | [300001, 1]    | array
[300001, 1]      | uint8     | [300001, 1]    | array
-                | -         | -              | group
[300001, 250000] | int8      | [300001, 25]   | array
-                | -         | -              | group

All nodes have zarr_format 3 and empty attributes ({}). All arrays additionally share fill_value 0, a "regular" chunk grid, the "default" chunk key encoding with separator "/", the codec chain [{ "name": "bytes", "configuration": null }, { "name": "zstd", "configuration": { "level": 0, "checksum": true } }], and an empty storage_transformers list. The five uint8 arrays correspond to the metadata arrays documented below, the int8 array is /traces, and the two groups are the store root and /metadata.

ascad-v1-vk

This repository hosts an optimized version of the ASCAD v1 Variable Key dataset. It was produced by a script that downloads and extracts the original data, then uploads the converted result to the Hugging Face Hub.

Dataset Structure

This dataset is stored in Zarr format (v3), whose chunked, compressed layout is well suited to cloud storage and partial reads.
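Concretely, a Zarr v3 store is a directory tree of zarr.json metadata documents plus chunk files. As a quick sketch (assuming the repository layout described below), you can inspect the tree with HfFileSystem:

from huggingface_hub import HfFileSystem

fs = HfFileSystem()

# Expect a zarr.json at the root plus the traces/ and metadata/ sub-trees.
print(fs.ls("datasets/DLSCA/ascad-v1-vk", detail=False))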

Traces (/traces)

  • Shape: [300001, 250000] (Traces x Time Samples)
  • Data Type: int8
  • Chunk Shape: [300001, 25]

Metadata (/metadata)

  • key: shape [300001, 16], dtype uint8
  • mask: shape [300001, 16], dtype uint8
  • plaintext: shape [300001, 16], dtype uint8
  • rin: shape [300001, 1], dtype uint8
  • rout: shape [300001, 1], dtype uint8
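To sanity-check this layout programmatically, the sketch below walks the metadata group and prints each array's shape and dtype; it uses the same access pattern as the Usage section further down:

import zarr
from huggingface_hub import HfFileSystem

fs = HfFileSystem()
root = zarr.open_group(fs.get_mapper("datasets/DLSCA/ascad-v1-vk"), mode="r")

# Expected: key/mask/plaintext -> (300001, 16); rin/rout -> (300001, 1); all uint8.
for name in ("key", "mask", "plaintext", "rin", "rout"):
    arr = root["metadata"][name]
    print(f"{name}: shape={arr.shape}, dtype={arr.dtype}")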

Parameters Used for Generation

  • HF_ORG: DLSCA
  • CHUNK_SIZE_Y: 300001
  • CHUNK_SIZE_X: 25
  • TOTAL_CHUNKS_ON_Y: 1
  • TOTAL_CHUNKS_ON_X: 10000
  • NUM_JOBS: 10
  • CAN_RUN_LOCALLY: True
  • CAN_RUN_ON_CLOUD: False
  • COMPRESSED: True
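The chunk counts follow directly from these parameters: 250000 time samples divided into chunks of CHUNK_SIZE_X = 25 gives TOTAL_CHUNKS_ON_X = 10000, while a single chunk of CHUNK_SIZE_Y = 300001 covers the whole trace axis. A minimal sketch of that arithmetic:

import math

shape = (300001, 250000)   # traces x time samples
chunk = (300001, 25)       # CHUNK_SIZE_Y x CHUNK_SIZE_X

total_chunks_on_y = math.ceil(shape[0] / chunk[0])   # 1
total_chunks_on_x = math.ceil(shape[1] / chunk[1])   # 10000
print(total_chunks_on_y, total_chunks_on_x)

# Each chunk holds 300001 * 25 int8 values (~7.5 MB uncompressed)
# before zstd (level 0, with checksum) compression.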

Usage

You can load this dataset directly using Zarr and the Hugging Face file system. Note that the arrays use Zarr format 3, so zarr-python >= 3 is required:

import zarr
from huggingface_hub import HfFileSystem

fs = HfFileSystem()

# Map only once to the dataset root
root = zarr.open_group(fs.get_mapper("datasets/DLSCA/ascad-v1-vk"), mode="r")

# Access traces directly
traces = root["traces"]
print("Traces shape:", traces.shape)

# Access plaintext metadata directly
plaintext = root["metadata"]["plaintext"]
print("Plaintext shape:", plaintext.shape)