# ascad-v2-1

This dataset is the optimized ASCAD v2 dataset (traces 1-100k), downloaded, extracted, and uploaded to the Hugging Face Hub by a generation script.
## Dataset Structure

This dataset is stored in Zarr format, optimized for chunked and compressed cloud storage.

### Traces (`/traces`)

- Shape: `[100000, 1000000]` (traces × time samples)
- Data Type: `int8`
- Chunk Shape: `[50000, 200]`
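Since reads are served chunk-by-chunk, it can help to know which chunk a given coordinate falls in. A minimal sketch (the helper name is illustrative, not part of the dataset):

```python
# Map a (trace, sample) coordinate to its chunk-grid index under the
# [50000, 200] chunk shape described above. Illustrative helper only.
CHUNK_SHAPE = (50_000, 200)

def chunk_index(trace: int, sample: int) -> tuple[int, int]:
    # Integer division gives the grid position of the containing chunk.
    return trace // CHUNK_SHAPE[0], sample // CHUNK_SHAPE[1]

print(chunk_index(0, 0))             # (0, 0) — first chunk
print(chunk_index(99_999, 999_999))  # (1, 4999) — last chunk
```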
### Metadata (`/metadata`)

- `ciphertext`: shape `[100000, 16]`, dtype `uint8`
- `key`: shape `[100000, 16]`, dtype `uint8`
- `mask`: shape `[100000, 16]`, dtype `uint8`
- `mask_`: shape `[100000, 16]`, dtype `uint8`
- `plaintext`: shape `[100000, 16]`, dtype `uint8`
- `rin`: shape `[100000, 1]`, dtype `uint8`
- `rin_`: shape `[100000, 1]`, dtype `uint8`
- `rm`: shape `[100000, 1]`, dtype `uint8`
- `rm_`: shape `[100000, 1]`, dtype `uint8`
- `rout`: shape `[100000, 1]`, dtype `uint8`
- `rout_`: shape `[100000, 1]`, dtype `uint8`
## Parameters Used for Generation

- `HF_ORG`: `DLSCA`
- `CHUNK_SIZE_Y`: `50000`
- `CHUNK_SIZE_X`: `200`
- `TOTAL_CHUNKS_Y`: `2`
- `TOTAL_CHUNKS_X`: `5000`
- `NUM_JOBS`: `10`
- `LOCAL`: `True`
- `CLOUD`: `False`
- `COMPRESSED`: `True`
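The `TOTAL_CHUNKS_*` values follow directly from the trace array shape and the chunk sizes. A quick sanity check (a sketch, assuming the shapes listed above):

```python
# Derive the chunk-grid dimensions from the trace array shape
# [100000, 1000000] and chunk shape [50000, 200].
import math

TRACES, SAMPLES = 100_000, 1_000_000
CHUNK_SIZE_Y, CHUNK_SIZE_X = 50_000, 200

total_chunks_y = math.ceil(TRACES / CHUNK_SIZE_Y)
total_chunks_x = math.ceil(SAMPLES / CHUNK_SIZE_X)
print(total_chunks_y, total_chunks_x)  # 2 5000
```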
## Usage

You can load this dataset directly using Zarr and the Hugging Face file system:

```python
import zarr
from huggingface_hub import HfFileSystem

fs = HfFileSystem()

# Map the dataset root once
root = zarr.open_group(fs.get_mapper("datasets/DLSCA/ascad-v2-1"), mode="r")

# Access traces directly
traces = root["traces"]
print("Traces shape:", traces.shape)

# Access plaintext metadata directly
plaintext = root["metadata"]["plaintext"]
print("Plaintext shape:", plaintext.shape)
```
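Because Zarr fetches only the chunks a slice overlaps, small reads stay cheap even though the full trace array is large. A pure-Python sketch estimating how many chunks a rectangular slice of `traces` touches (the helper is hypothetical; it assumes the `[50000, 200]` chunk shape above, each chunk being 10 MB uncompressed):

```python
# Count how many [50000, 200] chunks a rectangular slice overlaps.
# Pure-Python sketch; no download required.
def chunks_touched(rows: slice, cols: slice, chunk=(50_000, 200)) -> int:
    # First and last chunk index along each axis, inclusive.
    r0, r1 = rows.start // chunk[0], (rows.stop - 1) // chunk[0]
    c0, c1 = cols.start // chunk[1], (cols.stop - 1) // chunk[1]
    return (r1 - r0 + 1) * (c1 - c0 + 1)

# First 10 traces over the first 1000 samples -> only 5 chunks fetched
print(chunks_touched(slice(0, 10), slice(0, 1000)))  # 5
```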