Tags: Image-Text-to-Text · Transformers · Safetensors · gemma3 · conversational · Eval Results · text-generation-inference
Instructions for using google/gemma-3-27b-it with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use google/gemma-3-27b-it with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="google/gemma-3-27b-it")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
pipe(text=messages)
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("google/gemma-3-27b-it")
model = AutoModelForImageTextToText.from_pretrained("google/gemma-3-27b-it")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(processor.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Inference
- HuggingChat
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use google/gemma-3-27b-it with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "google/gemma-3-27b-it"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "google/gemma-3-27b-it",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "Describe this image in one sentence."},
          {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
        ]
      }
    ]
  }'
```

Use Docker

```shell
docker model run hf.co/google/gemma-3-27b-it
```
- SGLang
How to use google/gemma-3-27b-it with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "google/gemma-3-27b-it" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "google/gemma-3-27b-it",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "Describe this image in one sentence."},
          {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
        ]
      }
    ]
  }'
```

Use Docker images

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "google/gemma-3-27b-it" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "google/gemma-3-27b-it",
    "messages": [
      {
        "role": "user",
        "content": [
          {"type": "text", "text": "Describe this image in one sentence."},
          {"type": "image_url", "image_url": {"url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"}}
        ]
      }
    ]
  }'
```

- Docker Model Runner
How to use google/gemma-3-27b-it with Docker Model Runner:
```shell
docker model run hf.co/google/gemma-3-27b-it
```
Blank String in System Prompt causes Error - chat_prompt needs some fixes!
#41
by calycekr - opened
It looks like the chat_template needs some fixes.
gemma-3 on vLLM v0.8.2
Request
```json
{
  "model": "gemma-3-27b-it",
  "messages": [
    {
      "role": "system",
      "content": ""
    },
    {
      "role": "user",
      "content": "Who are you?"
    }
  ],
  "stream": false,
  "max_tokens": 100
}
```
Response
```json
{
  "object": "error",
  "message": "list object has no element 0",
  "type": "BadRequestError",
  "param": null,
  "code": 400
}
```
Removing the system prompt altogether will make it work.
Request
```json
{
  "model": "gemma-3-27b-it",
  "messages": [
    {
      "role": "user",
      "content": "Who are you?"
    }
  ],
  "stream": false,
  "max_tokens": 100
}
```
Response
```json
{
  "id": "chatcmpl-560ed12f1de94b45ad4933419586e161",
  "object": "chat.completion",
  "created": 1743042717,
  "model": "test",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "reasoning_content": null,
        "content": "Hi there! I’m Gemma, a large language model created by the Gemma team at Google DeepMind. I’m an open-weights model, which means I’m publicly available for use! \n\nI’m designed to take text and images as input and produce text as output. \n\nHow can I help you today?",
        "tool_calls": []
      },
      "logprobs": null,
      "finish_reason": "stop",
      "stop_reason": 106
    }
  ],
  "usage": {
    "prompt_tokens": 13,
    "total_tokens": 83,
    "completion_tokens": 70,
    "prompt_tokens_details": null
  },
  "prompt_logprobs": null
}
```
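Until the template itself is fixed, another workaround is to drop empty system messages on the client before sending the request. A minimal sketch (the helper name is mine, not part of any API):

```python
def drop_empty_system_messages(messages):
    """Remove system messages whose content is an empty string or empty list.

    An empty system message makes Gemma 3's chat template index into empty
    content, which triggers the 400 "list object has no element 0" error.
    """
    return [
        m for m in messages
        if not (m.get("role") == "system" and not m.get("content"))
    ]


messages = [
    {"role": "system", "content": ""},
    {"role": "user", "content": "Who are you?"},
]
print(drop_empty_system_messages(messages))
# [{'role': 'user', 'content': 'Who are you?'}]
```

Non-empty system messages are left untouched, so the filter is safe to apply to every request.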
This will fix it for now. I think there's a better way, though. Please reflect this in the repo.
```json
{
  "chat_template": "{{- bos_token -}}\n{%- if messages[0]['role'] == 'system' %}\n    {%- if messages[0]['content'] %}\n        {%- if messages[0]['content'] is string %}\n            {%- set first_user_prefix = messages[0]['content'] + '\n\n' %}\n        {%- else %}\n            {%- set first_user_prefix = messages[0]['content'][0]['text'] + '\n\n' %}\n        {%- endif %}\n        {%- set loop_messages = messages[1:] %}\n    {%- else %}\n        {%- set first_user_prefix = '' %}\n        {%- set loop_messages = messages[1:] %}\n    {%- endif %}\n{%- else %}\n    {%- set first_user_prefix = '' %}\n    {%- set loop_messages = messages %}\n{%- endif %}\n{%- for message in loop_messages %}\n    {%- if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}\n        {{- raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') -}}\n    {%- endif %}\n    {%- if (message['role'] == 'assistant') %}\n        {%- set role = 'model' %}\n    {%- else %}\n        {%- set role = message['role'] %}\n    {%- endif %}\n    {{- '<start_of_turn>' + role + '\n' + (first_user_prefix if loop.first else '') -}}\n    {%- if message['content'] is string %}\n        {{- message['content'] | trim -}}\n    {%- elif message['content'] is iterable %}\n        {%- for item in message['content'] %}\n            {%- if item['type'] == 'image' %}\n                {{- '<start_of_image>' -}}\n            {%- elif item['type'] == 'text' %}\n                {{- item['text'] | trim -}}\n            {%- endif %}\n        {%- endfor %}\n    {%- else %}\n        {{- raise_exception('Invalid content type') -}}\n    {%- endif %}\n    {{- '<end_of_turn>\n' -}}\n{%- endfor %}\n{%- if add_generation_prompt %}\n    {{- '<start_of_turn>model\n' -}}\n{%- endif %}"
}
```
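The key change is the added `{%- if messages[0]['content'] %}` guard: a blank system content (empty string or empty list) is now skipped instead of being indexed into with `[0]['text']`. For illustration only, here is a plain-Python mirror of that system-message handling (the function name is mine, not part of the template):

```python
def first_user_prefix_and_loop_messages(messages):
    """Mirror, in plain Python, how the patched Jinja template above picks
    the system prefix and the remaining messages. An empty system content
    yields an empty prefix instead of indexing into an empty list.
    """
    if messages and messages[0]["role"] == "system":
        content = messages[0]["content"]
        if content:  # non-empty string, or non-empty list of content parts
            if isinstance(content, str):
                prefix = content + "\n\n"
            else:
                prefix = content[0]["text"] + "\n\n"
        else:  # "" or []: the unpatched template crashed on this case
            prefix = ""
        return prefix, messages[1:]
    return "", messages


# Empty system content is now tolerated:
print(first_user_prefix_and_loop_messages(
    [{"role": "system", "content": ""}, {"role": "user", "content": "Hi"}]
))
# ('', [{'role': 'user', 'content': 'Hi'}])
```

With the unpatched template, the same input reached `messages[0]['content'][0]['text']` on an empty list, which is exactly the "list object has no element 0" error reported above.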
Hi @calycekr,
Apologies for the late reply, and welcome to the Google Gemma family of open models. Could you please confirm whether the issue is resolved by the comment above? If you need any further assistance, please let me know; I'm more than happy to help.
Thanks.