Instructions for using sengi/dUltra-math-b128 with libraries, inference providers, notebooks, and local apps.
## Libraries

### Transformers

How to use sengi/dUltra-math-b128 with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="sengi/dUltra-math-b128", trust_remote_code=True)
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load the model directly
from transformers import LLaDOUModelLM

model = LLaDOUModelLM.from_pretrained("sengi/dUltra-math-b128", trust_remote_code=True, dtype="auto")
```

## Notebooks
- Google Colab
- Kaggle
## Local Apps

### vLLM
How to use sengi/dUltra-math-b128 with vLLM:
Install from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "sengi/dUltra-math-b128"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "sengi/dUltra-math-b128",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```
### SGLang
How to use sengi/dUltra-math-b128 with SGLang:
Install from pip and serve the model:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "sengi/dUltra-math-b128" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "sengi/dUltra-math-b128",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

Use Docker images:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "sengi/dUltra-math-b128" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "sengi/dUltra-math-b128",
    "messages": [
      { "role": "user", "content": "What is the capital of France?" }
    ]
  }'
```

### Docker Model Runner
How to use sengi/dUltra-math-b128 with Docker Model Runner:
```shell
docker model run hf.co/sengi/dUltra-math-b128
```
# dUltra: Ultra-Fast Diffusion Language Models via Reinforcement Learning
dUltra is an on-policy reinforcement learning framework based on Group Relative Policy Optimization (GRPO) that learns unmasking strategies for efficient parallel decoding in masked diffusion language models (MDLMs). By jointly optimizing the base diffusion LLM and an unmasking order planner, dUltra achieves superior accuracy-efficiency trade-offs on mathematical reasoning and code generation tasks.
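As a rough illustration of the group-relative part of GRPO (a sketch under general assumptions about GRPO, not dUltra's implementation): rewards for a group of completions sampled from the same prompt are normalized within that group, so each completion's advantage measures how it compares to its siblings rather than to a learned value baseline. The reward values below are made up.

```python
# Toy group-relative advantage, as used in GRPO-style training:
# A_i = (r_i - mean(group rewards)) / std(group rewards).
from statistics import mean, stdev

def group_relative_advantages(rewards):
    """Normalize rewards within one sampled group of completions."""
    mu = mean(rewards)
    sigma = stdev(rewards)
    return [(r - mu) / sigma for r in rewards]

# Hypothetical rewards for a group of 4 sampled completions
# (e.g. 1.0 = correct answer, 0.0 = incorrect).
advs = group_relative_advantages([1.0, 0.0, 1.0, 0.0])
```

Completions that beat their group's average get positive advantages and the rest get negative ones, which is what lets GRPO dispense with a separate critic.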
- Paper: dUltra: Ultra-Fast Diffusion Language Models via Reinforcement Learning
- GitHub Repository: chinsengi/dUltra-os
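The parallel-decoding idea can be sketched with a toy planner (hypothetical code, not the learned planner from the paper): score every still-masked position, reveal the k most confident positions per step, and a length-L sequence finishes in about L/k steps instead of L.

```python
# Toy confidence-ranked parallel unmasking for a masked diffusion decoder.
# The confidence scores here are arbitrary; in dUltra the unmasking order
# is produced by a planner trained jointly with the diffusion LLM.
def parallel_unmask(confidences, k):
    """Reveal the k most confident masked positions per step.

    Returns (reveal order, number of decoding steps).
    """
    masked = set(range(len(confidences)))
    order, steps = [], 0
    while masked:
        # Planner stand-in: rank remaining masked positions by confidence.
        batch = sorted(masked, key=lambda i: confidences[i], reverse=True)[:k]
        order.extend(batch)
        masked -= set(batch)
        steps += 1
    return order, steps

# 6 masked positions decoded 2 at a time -> 3 steps instead of 6.
order, steps = parallel_unmask([0.9, 0.2, 0.8, 0.4, 0.7, 0.1], k=2)
```

The accuracy-efficiency trade-off comes from choosing how aggressively to unmask: larger k means fewer decoding steps but riskier parallel predictions.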
## Usage
To use this model, load it through the `transformers` library. Note that it requires `trust_remote_code=True` to load the custom model architecture.
```python
import torch
from transformers import AutoTokenizer
from model.llada.lladou import LLaDOUModelLM  # custom class from the dUltra-os repo

model = LLaDOUModelLM.from_pretrained(
    "sengi/dUltra-math",
    trust_remote_code=True,
    torch_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained("sengi/dUltra-math")
```
## Citation

```bibtex
@misc{chen2025dultraultrafastdiffusionlanguage,
  title={dUltra: Ultra-Fast Diffusion Language Models via Reinforcement Learning},
  author={Shirui Chen and Jiantao Jiao and Lillian J. Ratliff and Banghua Zhu},
  year={2025},
  eprint={2512.21446},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2512.21446},
}
```