Part of the fineweb2-hq-tokenizers-v3 collection (21 items) on Hugging Face.
A Byte-Level BPE tokenizer trained on arb_Arab (Arabic, Arabic script) data from FineWeb-2-HQ.
| Parameter | Value |
|---|---|
| Algorithm | Byte-Level BPE |
| Language | arb_Arab |
| Target Vocab Size | 8,000 |
| Final Vocab Size | 9,020 |
| Pre-tokenizer | custom:arb_Arab |
| Number handling | ltr_3digit |
| Contraction handling | False |
| Normalizer | NFC |
| Special Tokens | <s>, </s>, <pad>, <unk> |
| Training Shards | 2 (fineweb_2_hq.arb_Arab.chunk.00.jsonl, fineweb_2_hq.arb_Arab.chunk.01.jsonl) |
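The `ltr_3digit` number handling appears to split runs of digits into three-digit groups from the left, consistent with the example tokenization below ("12345" → "123", "45"). A minimal sketch of that grouping rule, under that assumption (the real pre-tokenizer would emit separate pre-tokens rather than insert spaces):

```python
import re

def split_digits_ltr(text: str, group: int = 3) -> str:
    """Hypothetical re-implementation of ltr_3digit digit grouping."""
    def chunk(m: re.Match) -> str:
        run = m.group(0)
        # Chunk from the left: "12345" -> "123 45"
        return " ".join(run[i:i + group] for i in range(0, len(run), group))
    return re.sub(r"\d+", chunk, text)

print(split_digits_ltr("12345"))  # 123 45
```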
Usage:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("flexitok/bpe_ltr_arb_Arab_8000_v3")
tokens = tokenizer.encode("Hello, world!")  # returns a list of token IDs
```
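A quick round trip on Arabic text (a sketch; the exact pieces depend on the learned merges):

```python
# Encode Arabic text ("Hello world") and inspect the byte-level pieces.
ids = tokenizer.encode("مرحبا بالعالم")
print(tokenizer.convert_ids_to_tokens(ids))
# Byte-level BPE is lossless over UTF-8, so decoding recovers the input.
print(tokenizer.decode(ids, skip_special_tokens=True))
```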
Files:

- `tokenizer.json` — Full HuggingFace tokenizer
- `vocab.json` — Vocabulary mapping
- `merges.txt` — BPE merge rules

Example tokenization (token output truncated for display):

| Text | Tokens | Token IDs |
|---|---|---|
| Hello, world! 12345 This is a test. こんにちは | H, ell, o, ,, Ġ, w, or, ld, !, Ġ, 123, 45, Ġ, Th, is, Ġ, is, Ġ, a, Ġ | 42, 3954, 81, 14, 223, 89, 731, 4251, 3, 223, 8377, 4174, 223, 4370, 900, 223, 900, 223, 67, 223 |
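`tokenizer.json` can also be loaded directly with the `tokenizers` library, without `transformers` (a minimal sketch; assumes the file has been downloaded locally):

```python
from tokenizers import Tokenizer

# Load the standalone tokenizer.json listed above.
tok = Tokenizer.from_file("tokenizer.json")
enc = tok.encode("Hello, world! 12345")
print(enc.tokens)  # byte-level pieces; "Ġ" marks a leading space
print(enc.ids)
```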
Command used to create this tokenizer:
```bash
python /home/gsa/tokenizers2/flexitok/tokenizer_training/train_tokenizers.py \
  algorithm=bpe \
  vocab_size=8000 \
  'langs=[arb_Arab]' \
  pretok_behavior=isolated \
  data_dir=/scratch/gsa/data/flexitok/ \
  output_dir=/scratch/gsa/trained_tokenizers \
  pretokenizer=custom:arb_Arab \
  number_handling=ltr_3digit \
  add_numbers=true \
  handle_contractions=false \
  unicode_normalization=nfc \
  use_byte_level_regex=false \
  byte_fallback=false \
  strip_zero_width=false \
  cjk_char_split=false \
  cjk_char_coverage=0 \
  add_cjk_chars=false \
  max_han_run=-1 \
  max_lines=500_000 \
  hf.publish_to_hf=true \
  hf_repo_prefix=flexitok/ \
  hf.hf_repo_id=flexitok/bpe_ltr_arb_Arab_8000_v3 \
  'hf.collections=[flexitok/fineweb2-hq-tokenizers-v3,flexitok/8000-vocab-v3]'
```
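The training script itself is not part of this repo. For orientation only, here is a minimal sketch of a comparable byte-level BPE setup with the Hugging Face `tokenizers` library, assuming each JSONL shard carries a `text` field; the custom `arb_Arab` pre-tokenizer and `ltr_3digit` number handling are not reproduced here, so the learned merges would differ:

```python
import json
from tokenizers import Tokenizer, decoders, models, normalizers, pre_tokenizers, trainers

# Stock byte-level BPE; the real run used pretokenizer=custom:arb_Arab.
tokenizer = Tokenizer(models.BPE(unk_token="<unk>"))
tokenizer.normalizer = normalizers.NFC()  # unicode_normalization=nfc
tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel(add_prefix_space=False)
tokenizer.decoder = decoders.ByteLevel()

trainer = trainers.BpeTrainer(
    vocab_size=8000,  # target vocab size from the table above
    special_tokens=["<s>", "</s>", "<pad>", "<unk>"],
    initial_alphabet=pre_tokenizers.ByteLevel.alphabet(),  # all 256 bytes
)

def texts():
    # Assumed shard schema: one JSON object per line with a "text" field.
    for path in ("fineweb_2_hq.arb_Arab.chunk.00.jsonl",
                 "fineweb_2_hq.arb_Arab.chunk.01.jsonl"):
        with open(path, encoding="utf-8") as f:
            for line in f:
                yield json.loads(line)["text"]

tokenizer.train_from_iterator(texts(), trainer=trainer)
tokenizer.save("tokenizer.json")
```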