gpt-oss-120b-DFlash
DFlash is a novel speculative decoding method that utilizes a lightweight block diffusion model for drafting. It enables efficient, high-quality parallel drafting that pushes the limits of inference speed.
This model serves as the drafter component and contains 0.8B parameters. It must be used in conjunction with the target model openai/gpt-oss-120b.
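For readers new to block-wise speculative decoding, the following is a minimal sketch of the draft-and-verify loop under greedy acceptance. It is illustrative only, not the DFlash implementation: draft_block and target_forward are hypothetical stand-ins for the 0.8B drafter and the gpt-oss-120b target.

# Illustrative draft-and-verify loop for block-wise speculative decoding (greedy).
# NOTE: draft_block and target_forward are hypothetical stand-ins, not the DFlash API.
from typing import Callable, List

def speculative_decode(
    prompt: List[int],
    draft_block: Callable[[List[int], int], List[int]],  # drafts k tokens in parallel
    target_forward: Callable[[List[int]], List[int]],    # greedy next token at every position
    block_size: int = 10,
    max_new_tokens: int = 256,
) -> List[int]:
    tokens = list(prompt)
    generated = 0
    while generated < max_new_tokens:
        k = block_size - 1                    # e.g. block size 10 -> 9 draft tokens
        draft = draft_block(tokens, k)        # one parallel drafting step
        # One target forward pass scores the context plus the whole drafted block.
        target_next = target_forward(tokens + draft)
        # Accept the longest draft prefix that matches the target's greedy choices.
        n_accept = 0
        for i, d in enumerate(draft):
            if target_next[len(tokens) - 1 + i] == d:
                n_accept += 1
            else:
                break
        # The target's own prediction after the accepted prefix is always kept,
        # so every iteration commits at least one token.
        bonus = target_next[len(tokens) - 1 + n_accept]
        tokens += draft[:n_accept] + [bonus]
        generated += n_accept + 1
    return tokens

Each iteration costs roughly one target forward pass but can commit several tokens, which is where the speedup over token-by-token decoding comes from.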
📊 Training Data
gpt-oss-120b-DFlash is trained on 800K samples, drawn from:
For all samples, the response portion was regenerated using the target model openai/gpt-oss-120b.
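The card does not spell out the regeneration pipeline, but conceptually each prompt is replayed through openai/gpt-oss-120b and the model's output replaces the original response. Below is a minimal sketch against an OpenAI-compatible endpoint; the base URL and the "prompt"/"response" field names are illustrative assumptions:

# Illustrative only: regenerate the response field of a sample with the target model.
# The endpoint URL and the sample schema ("prompt" / "response") are assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

def regenerate(sample: dict) -> dict:
    out = client.chat.completions.create(
        model="openai/gpt-oss-120b",
        messages=[{"role": "user", "content": sample["prompt"]}],
        max_tokens=2048,
    )
    return {**sample, "response": out.choices[0].message.content}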
🚀 Quick Start
SGLang
Installation
uv pip install "git+https://github.com/sgl-project/sglang.git@refs/pull/20547/head#subdirectory=python"
Launch Server
# Optional: enable schedule overlapping (experimental, may not be stable)
# export SGLANG_ENABLE_SPEC_V2=1
# export SGLANG_ENABLE_DFLASH_SPEC_V2=1
# export SGLANG_ENABLE_OVERLAP_PLAN_STREAM=1
python -m sglang.launch_server \
--model-path openai/gpt-oss-120b \
--speculative-algorithm DFLASH \
--speculative-draft-model-path z-lab/gpt-oss-120b-DFlash \
--tp-size 1 \
--dtype bfloat16 \
--attention-backend fa3 \
--mem-fraction-static 0.75 \
--trust-remote-code
Usage
from openai import OpenAI
client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")
response = client.chat.completions.create(
model="openai/gpt-oss-120b",
messages=[{"role": "user", "content": "Write a quicksort in Python."}],
max_tokens=2048,
temperature=0.0,
)
print(response.choices[0].message.content)
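The same OpenAI-compatible endpoint also supports streaming, which is convenient for interactive checks (same assumed host and port as the launch command above):

# Streaming variant of the request above (same server, same model).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")
stream = client.chat.completions.create(
    model="openai/gpt-oss-120b",
    messages=[{"role": "user", "content": "Write a quicksort in Python."}],
    max_tokens=2048,
    temperature=0.0,
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)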
vLLM
Installation
# Install the latest vLLM release:
uv pip install vllm
# Or install a nightly build from the vLLM wheels index:
uv pip install -U vllm --torch-backend=auto --extra-index-url https://wheels.vllm.ai/nightly
Launch Server
vllm serve openai/gpt-oss-120b \
--speculative-config '{"method": "dflash", "model": "z-lab/gpt-oss-120b-DFlash", "num_speculative_tokens": 9}' \
--attention-backend flash_attn \
--max-num-batched-tokens 32768
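Here num_speculative_tokens is the number of draft tokens per block, i.e. block size minus one, so the command above corresponds to block size 10. The smaller settings evaluated below map to "num_speculative_tokens": 5 (block size 6) and "num_speculative_tokens": 3 (block size 4); smaller blocks give shorter acceptance lengths but can stay competitive at higher concurrency (see the speedup tables).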
Usage
from openai import OpenAI
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
model="openai/gpt-oss-120b",
messages=[{"role": "user", "content": "Write a quicksort in Python."}],
max_tokens=2048,
temperature=0.0,
)
print(response.choices[0].message.content)
Evaluation
The draft model is trained with a block size of 10. During evaluation, we use three settings:
- Block size = 4 (3 draft tokens)
- Block size = 6 (5 draft tokens)
- Block size = 10 (9 draft tokens)
All experiments are conducted using SGLang on a single H200 GPU.
The reported speedups are end-to-end speedups, including prefill time. The pure decoding speedup is higher.
For all tasks, the reasoning effort is set to medium. Using low reasoning effort would further increase the acceptance length.
Acceptance Length
| Task | Block Size = 4 | Block Size = 6 | Block Size = 10 |
|---|---|---|---|
| GSM8K | 3.3 | 4.3 | 5.3 |
| Math500 | 3.3 | 4.3 | 5.4 |
| HumanEval | 3.1 | 3.8 | 4.4 |
| MBPP | 3.1 | 3.9 | 4.6 |
| MT-Bench | 2.7 | 3.3 | 3.7 |
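As a rough sanity check on these numbers: acceptance length is the average number of tokens committed per verification step, so if one verification step cost exactly one target forward pass and drafting were free, it would also be the decoding speedup. The end-to-end speedups reported below are lower because they include prefill as well as drafting and verification overhead. A back-of-the-envelope model (the draft_cost value is a hypothetical illustration, not a measurement):

# Back-of-the-envelope bound, not a measurement. Assumes each verification step
# costs one target forward pass plus draft_cost target-equivalent passes for drafting.
def decoding_speedup(acceptance_length: float, draft_cost: float = 0.0) -> float:
    # Baseline decoding emits 1 token per target pass; speculative decoding emits
    # acceptance_length tokens per (1 + draft_cost) target-equivalent passes.
    return acceptance_length / (1.0 + draft_cost)

print(decoding_speedup(5.3))        # GSM8K, block size 10: at most ~5.3x with free drafting
print(decoding_speedup(5.3, 0.5))   # ~3.5x if drafting costs half a target pass per block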
Speedup
GSM8K
| Concurrency | Block Size = 4 | Block Size = 10 |
|---|---|---|
| 1 | 1.3× | 1.8× |
| 8 | 1.2× | 1.6× |
| 16 | 1.3× | 1.6× |
| 32 | 1.2× | 1.5× |
| 64 | 1.2× | 1.5× |
Math500
| Concurrency | Block Size = 4 | Block Size = 10 |
|---|---|---|
| 1 | 1.5× | 1.9× |
| 8 | 1.4× | 1.7× |
| 16 | 1.5× | 1.6× |
| 32 | 1.4× | 1.5× |
| 64 | 1.4× | 1.5× |
HumanEval
| Concurrency | Block Size = 4 | Block Size = 10 |
|---|---|---|
| 1 | 1.3× | 1.7× |
| 8 | 1.4× | 1.7× |
| 16 | 1.4× | 1.8× |
| 32 | 1.5× | 1.7× |
| 64 | 1.4× | 1.5× |
MBPP
| Concurrency | Block Size = 4 | Block Size = 10 |
|---|---|---|
| 1 | 1.4× | 1.8× |
| 8 | 1.5× | 1.7× |
| 16 | 1.5× | 1.8× |
| 32 | 1.6× | 1.8× |
| 64 | 1.6× | 1.6× |
MT-Bench
| Concurrency | Block Size = 4 | Block Size = 10 |
|---|---|---|
| 1 | 1.3× | 1.3× |
| 8 | 1.2× | 1.3× |
| 16 | 1.3× | 1.3× |
| 32 | 1.4× | 1.3× |
| 64 | 1.3× | 1.2× |
Acknowledgement
We are grateful to Yotta Labs for their compute support in training this draft model.
Citation
If you find DFlash useful for your research or applications, please cite our project.
@misc{chen2026dflash,
title = {DFlash: Block Diffusion for Flash Speculative Decoding},
author = {Chen, Jian and Liang, Yesheng and Liu, Zhijian},
year = {2026},
eprint = {2602.06036},
archivePrefix = {arXiv},
primaryClass = {cs.CL},
url = {https://arxiv.org/abs/2602.06036}
}