Pretraining data: allenai/OLMoE-mix-0924 (the OLMoE pretraining mix)
How to use allenai/Emo_1b14b_130B with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="allenai/Emo_1b14b_130B", trust_remote_code=True)
messages = [
{"role": "user", "content": "Who are you?"},
]
pipe(messages)
# Load model directly
from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained("allenai/Emo_1b14b_130B", trust_remote_code=True, dtype="auto")

How to use allenai/Emo_1b14b_130B with vLLM:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "allenai/Emo_1b14b_130B"
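As an alternative to the server, vLLM's offline Python API can load the model directly in-process. A minimal sketch, assuming vLLM's standard LLM/SamplingParams interface; the sampling settings are illustrative:
from vllm import LLM, SamplingParams
# Load the checkpoint for offline batch inference; trust_remote_code is
# needed because the checkpoint ships custom modeling code on the Hub.
llm = LLM(model="allenai/Emo_1b14b_130B", trust_remote_code=True)
# Illustrative sampling settings; tune for your use case.
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=100)
outputs = llm.generate(["Language modeling is "], params)
print(outputs[0].outputs[0].text)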
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "allenai/Emo_1b14b_130B",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
}'
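Because the server exposes an OpenAI-compatible API, it can also be called from Python. A minimal sketch, assuming the openai client package is installed; the same code works against the SGLang server below by changing the port in base_url to 30000:
from openai import OpenAI
# Point the client at the local vLLM server; the API key is unused locally.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="allenai/Emo_1b14b_130B",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)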
How to use allenai/Emo_1b14b_130B with SGLang:
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
--model-path "allenai/Emo_1b14b_130B" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "allenai/Emo_1b14b_130B",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
}'
# Or start the SGLang server with Docker:
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "allenai/Emo_1b14b_130B" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "allenai/Emo_1b14b_130B",
"messages": [
{
"role": "user",
"content": "What is the capital of France?"
}
]
}'

How to use allenai/Emo_1b14b_130B with Docker Model Runner:
docker model run hf.co/allenai/Emo_1b14b_130B
A smaller-scale ablation checkpoint of EMO from the paper EMO: Pretraining Mixture of Experts for Emergent Modularity, referred to in the paper as EMO at the 130B-token scale (Table 1 / Figure 11). This checkpoint is not midtrained.
A 1B-active / 14B-total parameter Mixture-of-Experts model (128 experts: 127 routed + 1 shared, k=8 active per token), pretrained on 130B tokens of the OLMoE pretraining mix under the EMO document-level expert pool constraint. It is used in the paper's memory-matched ablation suite.
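The expert configuration can be sanity-checked from the checkpoint's config. A minimal sketch; the attribute names follow the OLMoE convention and are assumptions here, so inspect the config object directly if they differ:
from transformers import AutoConfig
config = AutoConfig.from_pretrained("allenai/Emo_1b14b_130B", trust_remote_code=True)
# Attribute names are assumed from OLMoE-style configs; None means the
# config uses different names and should be inspected directly.
print(getattr(config, "num_experts", None))          # expected: 128
print(getattr(config, "num_experts_per_tok", None))  # expected: 8 (k active per token)
The quickstart below loads the model and generates a short continuation: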
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "allenai/Emo_1b14b_130B"
# Load the model and tokenizer; trust_remote_code is required because the
# checkpoint ships custom modeling code on the Hub.
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
# Generate a short sampled continuation of the prompt.
inputs = tokenizer(["Language modeling is "], return_tensors="pt", return_token_type_ids=False)
out = model.generate(**inputs, max_new_tokens=100, do_sample=True, temperature=1.0, top_p=0.7)
print(tokenizer.batch_decode(out, skip_special_tokens=True)[0])
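To relate the loaded checkpoint to the 1B-active / 14B-total description above, the total parameter count can be verified directly; the active count depends on the router's k=8 expert selection and is not a simple parameter sum:
# Count total parameters of the loaded model (expected to be roughly 14B).
total = sum(p.numel() for p in model.parameters())
print(f"Total parameters: {total / 1e9:.2f}B")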
@article{wang2026emo,
title = {EMO: Pretraining Mixture of Experts for Emergent Modularity},
author = {Wang, Ryan and Bhagia, Akshita and Min, Sewon},
year = {2026},
url = {https://arxiv.org/abs/2605.06663}
}