Dataset Card for pg19-64k-6400
This dataset is a large-scale curated subset of PG-19, a collection of classic public-domain books from Project Gutenberg, processed into long-text samples around a single token-length target (measured with the Llama3.1-8B-Instruct tokenizer) for fine-tuning draft models used in speculative decoding. It contains 6400 coherent text excerpts centered on a 65536-token (64k) target length; each excerpt ends at a natural sentence boundary to preserve sentence-level integrity, and content diversity comes from PG-19's broad literary coverage.
Dataset Details
Key Features
- Focused Length Target: All samples are processed toward a single target length of 65536 (64k) tokens. Actual sample lengths do not match this value exactly: they may drift several hundred tokens above or below the target, because preserving sentence coherence takes priority over an exact cut.
- Coherent Truncation: Text excerpts are truncated at natural sentence endings (".", "!", or "?") rather than at arbitrary token positions, preserving semantic integrity, readability, and the logical flow of the original content (see the sketch after this list).
- Large Sample Scale: Contains 6400 independent text samples, providing sufficient data volume for robust long-context model evaluation and experimental validation.
- Language(s) (NLP): English (consistent with the source PG-19 dataset, covering classic English literature across multiple genres).
- Size of the Dataset: 1.08 GB
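The snippet below is a minimal sketch of the kind of truncation described above: cutting a long text near the 64k-token target with the Llama3.1-8B-Instruct tokenizer and backing off to the last sentence-ending punctuation mark. It is an illustration under assumptions, not the exact script used to build this dataset; the function name and back-off logic are hypothetical, and loading the tokenizer requires access to the gated Llama repository.

```python
from transformers import AutoTokenizer

TARGET_TOKENS = 65536  # 64k target length stated in this card

# Assumption: the Llama3.1-8B-Instruct tokenizer named above (gated repository).
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")

def truncate_to_sentence(text: str, target_tokens: int = TARGET_TOKENS) -> str:
    """Cut `text` near `target_tokens` tokens, then back off to the last
    sentence-ending punctuation so the excerpt stays coherent."""
    ids = tokenizer(text, add_special_tokens=False)["input_ids"]
    if len(ids) <= target_tokens:
        return text
    # Decode the first `target_tokens` tokens, then trim to the last ".", "!", or "?".
    rough = tokenizer.decode(ids[:target_tokens])
    cut = max(rough.rfind("."), rough.rfind("!"), rough.rfind("?"))
    return rough[: cut + 1] if cut != -1 else rough
```

Backing off to a sentence boundary is also why sample lengths fluctuate a few hundred tokens around the 64k target rather than matching it exactly.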
Dataset Sources
- Homepage: https://huggingface.co/datasets/hcyy/pg19-yarn-6400
- Paper: SpecPV: Improving Self-Speculative Decoding for Long-Context Generation via Partial Verification
- Source Dataset: PG-19 (hosted at https://huggingface.co/datasets/emozilla/pg19/)
Dataset Structure
The dataset adopts a streamlined structure to facilitate efficient model loading and processing, with a single core data field:
- text (String): Coherently truncated text excerpts derived from the PG-19 train split. Each entry is processed to align with the 64k token target while ending at natural sentence boundaries, ensuring the original literary style and semantic logic are retained.
The dataset is primarily provided in parquet format.
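Because there is a single text column stored as parquet, a standard `datasets` load is enough to use the data. The repository ID below follows the homepage link; the "train" split name is an assumption based on the usual single-split layout.

```python
from datasets import load_dataset

# Assumption: repository ID from the homepage link and a default "train" split.
ds = load_dataset("hcyy/pg19-yarn-6400", split="train")

print(ds)                    # expected: 6400 rows with a single "text" column
print(ds[0]["text"][:200])   # peek at the start of the first long excerpt
```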
Dataset Creation
Curation Rationale
This dataset is designed specifically for fine-tuning draft models with the YaRN algorithm. It provides 6400 long-context English text samples (centered on 64k tokens) so that draft models can learn to process sequences longer than those seen in their original training data. Its key strengths are coherent truncation (avoiding semantic distortion) and diverse content drawn from PG-19's public-domain classic literature, which together support reliable model training.
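As a pointer for how such fine-tuning is typically set up, the sketch below shows the common way to enable YaRN-style context extension in Hugging Face `transformers` via the `rope_scaling` configuration. The base model, scaling factor, and original context length are illustrative assumptions, not settings prescribed by this dataset or the SpecPV paper.

```python
from transformers import AutoConfig, AutoModelForCausalLM

BASE = "meta-llama/Llama-3.2-1B"  # assumption: a small draft model, for illustration only

config = AutoConfig.from_pretrained(BASE)
config.rope_scaling = {
    "rope_type": "yarn",                       # YaRN position-interpolation scheme
    "factor": 8.0,                             # assumption: extend 8k -> 64k context
    "original_max_position_embeddings": 8192,  # assumption: pre-extension context length
}
config.max_position_embeddings = 65536         # match the 64k samples in this dataset

model = AutoModelForCausalLM.from_pretrained(BASE, config=config)
```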
Source Data
The dataset is directly derived from the train split of the PG-19 dataset (hosted at https://huggingface.co/datasets/emozilla/pg19/), a benchmark dataset of English books published before 1919 from Project Gutenberg. The source data covers a wide range of genres, including fiction, non-fiction, and poetry, with high text quality and no copyright restrictions, making it an ideal foundation for long-context NLP research.
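For reference, the source split named above can be pulled directly; streaming avoids downloading the full PG-19 corpus up front. The repository ID is taken from the link in this section.

```python
from datasets import load_dataset

# Stream the PG-19 train split so the whole corpus is not downloaded at once.
pg19 = load_dataset("emozilla/pg19", split="train", streaming=True)

first_book = next(iter(pg19))
print(first_book.keys())  # each row carries the book text plus metadata fields
```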
Citation
BibTeX:
[More Information Needed]
APA:
[More Information Needed]