Instructions for using Lucebox/Laguna-XS.2-GGUF with libraries and local apps.
- Libraries
- llama-cpp-python
How to use Lucebox/Laguna-XS.2-GGUF with llama-cpp-python:
```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Lucebox/Laguna-XS.2-GGUF",
    filename="laguna-xs2-Q4_K_M.gguf",
)

# example prompt; any OpenAI-style chat messages list works
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a function that checks if a string is a palindrome."}]
)
print(response["choices"][0]["message"]["content"])
```
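For incremental output, `create_chat_completion` also accepts `stream=True` and yields OpenAI-style delta chunks; a minimal sketch (the prompt is just an illustrative example):

```python
for chunk in llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain B-tree node splits briefly."}],  # example prompt
    stream=True,
):
    delta = chunk["choices"][0]["delta"]
    if "content" in delta:  # first chunk carries only the role
        print(delta["content"], end="", flush=True)
```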
- Local Apps
- llama.cpp
How to use Lucebox/Laguna-XS.2-GGUF with llama.cpp:
Install with Homebrew

```bash
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Lucebox/Laguna-XS.2-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf Lucebox/Laguna-XS.2-GGUF:Q4_K_M
```
Install with WinGet (Windows)

```bash
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Lucebox/Laguna-XS.2-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf Lucebox/Laguna-XS.2-GGUF:Q4_K_M
```
Use a pre-built binary

```bash
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf Lucebox/Laguna-XS.2-GGUF:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf Lucebox/Laguna-XS.2-GGUF:Q4_K_M
```
Build from source code
```bash
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf Lucebox/Laguna-XS.2-GGUF:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf Lucebox/Laguna-XS.2-GGUF:Q4_K_M
```
Use Docker
```bash
docker model run hf.co/Lucebox/Laguna-XS.2-GGUF:Q4_K_M
```
- Ollama
How to use Lucebox/Laguna-XS.2-GGUF with Ollama:
```bash
ollama run hf.co/Lucebox/Laguna-XS.2-GGUF:Q4_K_M
```
- Unsloth Studio
How to use Lucebox/Laguna-XS.2-GGUF with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```bash
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for Lucebox/Laguna-XS.2-GGUF to start chatting
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for Lucebox/Laguna-XS.2-GGUF to start chatting
```
Use Hugging Face Spaces for Unsloth

```
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for Lucebox/Laguna-XS.2-GGUF to start chatting
```
- Docker Model Runner
How to use Lucebox/Laguna-XS.2-GGUF with Docker Model Runner:
```bash
docker model run hf.co/Lucebox/Laguna-XS.2-GGUF:Q4_K_M
```
- Lemonade
How to use Lucebox/Laguna-XS.2-GGUF with Lemonade:
Pull the model

```bash
# Download Lemonade from https://lemonade-server.ai/
lemonade pull Lucebox/Laguna-XS.2-GGUF:Q4_K_M
```

Run and chat with the model

```bash
lemonade run user.Laguna-XS.2-GGUF-Q4_K_M
```

List all available models

```bash
lemonade list
```
Laguna-XS.2 GGUF (BF16 + Q4_K_M)
GGUF conversions of poolside/Laguna-XS.2, a 33B-A3B (3B active) MoE coding model from Poolside under Apache 2.0. Built for use with lucebox-hub (dflash + PFlash) on consumer GPUs.
Files
| File | Quant | Size | BPW | Notes |
|---|---|---|---|---|
| `laguna-xs2-bf16.gguf` | BF16 | 66.9 GB | 16.01 | reference; identical math to HF transformers fp/bf16 |
| `laguna-xs2-Q4_K_M.gguf` | Q4_K_M | 20.3 GB | 4.85 | imatrix-calibrated; fits a single 24 GB GPU |
| `laguna-xs2.imatrix` | imatrix | 188 MB | — | Bartowski calibration_datav3 (134 chunks, 68,608 tokens) |
Architecture
- 40 layers, n_embd 2048, n_head_kv 8, head_dim 128
- Per-layer head count [48, 64, 64, 64] × 10 (4-layer SWA pattern: full, sw, sw, sw)
- 256 experts, top-8 routing, 1 always-on shared expert
- Sigmoid router, expert weights scale 2.5
- Sliding window 512, partial RoPE with YaRN (orig ctx 4096, factor 32)
- Vocab 100,352, BOS=2, EOS=2, PAD=9
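These numbers also determine the KV-cache budget behind the `-ctk/-ctv q4_0` sizing note in the Usage section below. A back-of-the-envelope sketch, assuming the cache is allocated at the full 131K context for all 40 layers (i.e. no sliding-window savings) and q4_0's 4.5 bits per element (18 bytes per 32-element block):

```python
layers, n_head_kv, head_dim = 40, 8, 128
ctx = 131_072
bits_q4_0 = 4.5  # 4-bit values plus per-block scale overhead

elems_per_token_per_layer = 2 * n_head_kv * head_dim  # K and V: 2048 elements
total_bytes = layers * ctx * elems_per_token_per_layer * bits_q4_0 / 8
print(f"{total_bytes / 2**30:.2f} GiB")  # ~5.62 GiB, matching the ~6 GB note in Usage
```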
Quality
| Metric | BF16 | Q4_K_M | Δ |
|---|---|---|---|
| Perplexity (Bartowski v3, 20×512) | 10.7594 ± 0.522 | 11.2854 ± 0.553 | +4.9% |
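The Δ column is the relative perplexity increase of Q4_K_M over BF16, which a one-liner verifies:

```python
bf16, q4 = 10.7594, 11.2854
print(f"{(q4 / bf16 - 1) * 100:+.1f}%")  # +4.9%
```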
Imatrix calibration uses Bartowski calibration_datav3.txt (multilingual + code mix), the same corpus Unsloth-distributed quants use.
Verified against the official Poolside HF reference (BF16, eager attention, greedy decoding): logits match exactly for the first 30+ tokens on a B-tree explanation prompt; subsequent divergence is floating-point precision drift, not a graph bug.
Performance (RTX 3090 24 GB, Q4_K_M)
Measured with bench_laguna_generate from lucebox-hub (dflash autoregressive forward, no spec-decode draft yet):
| Workload | Throughput | Notes |
|---|---|---|
| Decode @ ctx=128 (greedy) | 113 tok/s | n_gen=128 |
| Decode @ ctx=1K | 104 tok/s | |
| Decode @ ctx=4K | 65 tok/s | |
| 128K TTFT via dflash + PFlash | 15.91 s | 5.4× faster than llama.cpp pp131072 (86.60 s) |
| Loader VRAM | 18.77 GiB | + 110 MiB tok_embd kept on CPU |
Usage
lucebox-hub (dflash + PFlash, recommended for 128K)
```bash
# clone
git clone https://github.com/Luce-Org/lucebox-hub
cd lucebox-hub/dflash

# build with sm_86 (3090 / A6000)
cmake -B build -DCMAKE_CUDA_ARCHITECTURES=86
cmake --build build -j

# fetch the Q4_K_M GGUF + Poolside tokenizer
hf download Lucebox/Laguna-XS.2-GGUF laguna-xs2-Q4_K_M.gguf --local-dir models/
hf download poolside/Laguna-XS.2 chat_template.jinja tokenizer.json tokenizer_config.json \
    special_tokens_map.json config.json --local-dir models/Laguna-XS-2

# run the OpenAI server (same server.py as qwen35, arch auto-detected from GGUF).
# -ctk/-ctv q4_0 keeps the 131K KV cache under ~6 GB so weights + KV fit on 24 GB.
python3 scripts/server.py \
    --target models/laguna-xs2-Q4_K_M.gguf \
    --tokenizer models/Laguna-XS-2 \
    --port 8000 --max-ctx 131072 \
    -ctk q4_0 -ctv q4_0

# chat
curl http://localhost:8000/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{"model":"luce-dflash","messages":[{"role":"user","content":"hello"}],"stream":true}'
```
License
Apache 2.0, inherited from upstream poolside/Laguna-XS.2.