diffuse-cpp is now Apache-2.0 — Dream-7B excels at math and code on CPU

by Carmenest
We just open-sourced diffuse-cpp under Apache-2.0!

diffuse-cpp is the first C++ inference engine for Diffusion Language Models, built on GGML.

Dream-7B Benchmarks (AMD EPYC 12-Core, Q4_K_M, entropy_exit + cache)

| Prompt | Dream-7B | llama.cpp | Speedup |
|---|---|---|---|
| Capital of France? | 21.6 tok/s (2 steps) | 8.51 tok/s | 2.5x |
| 15 x 23 = ? | 21.6 tok/s (2 steps) | 8.51 tok/s | 2.5x |
| Translate to French | 14.3 tok/s (6 steps) | 8.51 tok/s | 1.7x |
| Python is_prime() | 8.2 tok/s (7 steps) | 8.51 tok/s | 1.0x |

Dream correctly solves 15 x 23 = 345 in just 2 denoising steps at 21.6 tok/s.

Why diffusion on CPU?

Autoregressive models are memory-bound: generating each token requires one full read of the weights. Diffusion models refine all tokens in parallel at every denoising step, which makes them compute-bound and lets them scale with cores: 7.4x thread scaling at 12 cores vs 2.4x for autoregressive decoding.

Dream vs LLaDA

| Strength | Dream-7B | LLaDA-8B |
|---|---|---|
| Math | 21.6 tok/s (2 steps) | 6.0 tok/s (16 steps) |
| Code | 8.2 tok/s (7 steps) | 4.5 tok/s (15 steps) |
| Translation | 13-14 tok/s | 23-28 tok/s |

Use Dream for math, code, and factual recall; use LLaDA for translation.

Quick Start

```shell
# Download the Q4_K_M quantized model
huggingface-cli download diffuse-cpp/Dream-v0-Instruct-7B-GGUF dream-7b-q4km.gguf

# Clone and build
git clone --recursive https://github.com/iafiscal1212/diffuse-cpp.git
cd diffuse-cpp && cmake -B build -DCMAKE_BUILD_TYPE=Release && cmake --build build -j$(nproc)

# Run (--tokens takes pre-tokenized prompt IDs)
./build/diffuse-cli -m dream-7b-q4km.gguf --tokens "151644,8948,198,2610,525,264,10950,17847,13,151645,198,151644,872,198,3838,374,220,868,1303,220,1419,30,151645,198,151644,77091,198" -n 64 -s 16 -t 12 --remasking entropy_exit
```

Links

- Code: https://github.com/iafiscal1212/diffuse-cpp
- Model: https://huggingface.co/diffuse-cpp/Dream-v0-Instruct-7B-GGUF

Contributions welcome!
