How to use utter-project/EuroLLM-9B-2512 with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="utter-project/EuroLLM-9B-2512")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("utter-project/EuroLLM-9B-2512")
model = AutoModelForCausalLM.from_pretrained("utter-project/EuroLLM-9B-2512")

How to use utter-project/EuroLLM-9B-2512 with vLLM:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "utter-project/EuroLLM-9B-2512"
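Besides the HTTP server, vLLM can also run the model in-process. A minimal offline-inference sketch (assumes `pip install vllm` and enough GPU memory for a 9B-parameter model; the sampling values mirror the curl example below and are illustrative, not recommendations):

```python
# Sketch: offline (serverless) inference through vLLM's Python API,
# as an alternative to running `vllm serve`.
def generate_offline(prompt: str = "Once upon a time,") -> str:
    from vllm import LLM, SamplingParams  # deferred import: heavy dependency

    llm = LLM(model="utter-project/EuroLLM-9B-2512")
    params = SamplingParams(max_tokens=512, temperature=0.5)
    outputs = llm.generate([prompt], params)
    # Each RequestOutput holds one or more completions; take the first.
    return outputs[0].outputs[0].text
```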
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "utter-project/EuroLLM-9B-2512",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'
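Because the endpoint is OpenAI-compatible, it can also be called from Python with only the standard library. A sketch (the payload mirrors the curl example above; the URL assumes `vllm serve`'s default host and port):

```python
# Call the OpenAI-compatible completions endpoint using only the
# standard library. URL assumes the default `vllm serve` host/port.
import json
import urllib.request

BASE_URL = "http://localhost:8000/v1/completions"

def build_request(prompt: str, max_tokens: int = 512, temperature: float = 0.5) -> dict:
    # Mirrors the JSON payload in the curl example above.
    return {
        "model": "utter-project/EuroLLM-9B-2512",
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

def complete(prompt: str) -> str:
    # POST the payload and return the first completion's text.
    data = json.dumps(build_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        BASE_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["text"]
```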
How to use utter-project/EuroLLM-9B-2512 with SGLang:
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
--model-path "utter-project/EuroLLM-9B-2512" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "utter-project/EuroLLM-9B-2512",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'

# Alternatively, launch the SGLang server with Docker:
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "utter-project/EuroLLM-9B-2512" \
--host 0.0.0.0 \
--port 30000
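Whichever way the server is launched, OpenAI-compatible servers like this also typically expose a chat-style /v1/chat/completions route. A sketch of the corresponding payload (field names follow the OpenAI API; values mirror the curl example and are illustrative):

```python
# Build a chat-style request body for the OpenAI-compatible
# /v1/chat/completions route (an alternative to /v1/completions).
import json

def chat_payload(user_message: str) -> dict:
    return {
        "model": "utter-project/EuroLLM-9B-2512",
        "messages": [{"role": "user", "content": user_message}],
        "max_tokens": 512,
        "temperature": 0.5,
    }

print(json.dumps(chat_payload("Once upon a time,"), indent=2))
```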
How to use utter-project/EuroLLM-9B-2512 with Docker Model Runner:
docker model run hf.co/utter-project/EuroLLM-9B-2512
This is the model card for EuroLLM-9B-2512, an improved version of utter-project/EuroLLM-9B. Compared with the previous version, it adds the long-context extension phase from utter-project/EuroLLM-22B.
This model has not been aligned to human preferences, so the model may generate problematic outputs (e.g., hallucinations, harmful content, or false statements).
If you use our work, please cite:
@misc{ramos2026eurollm22btechnicalreport,
  title={EuroLLM-22B: Technical Report},
  author={Miguel Moura Ramos and Duarte M. Alves and Hippolyte Gisserot-Boukhlef and João Alves and Pedro Henrique Martins and Patrick Fernandes and José Pombal and Nuno M. Guerreiro and Ricardo Rei and Nicolas Boizard and Amin Farajian and Mateusz Klimaszewski and José G. C. de Souza and Barry Haddow and François Yvon and Pierre Colombo and Alexandra Birch and André F. T. Martins},
  year={2026},
  eprint={2602.05879},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2602.05879},
}