How to use Infinirc/Llama-3.2-Infinirc-11B-Vision-Instruct with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="Infinirc/Llama-3.2-Infinirc-11B-Vision-Instruct")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
pipe(text=messages)
```
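The pipeline returns a list of result dicts. A minimal sketch of pulling the assistant's reply out of that result, assuming the chat-style output shape used by recent image-text-to-text pipelines (verify against your installed transformers version; the mocked result below is illustrative only):

```python
# Hedged helper: extract the assistant reply from a pipeline result.
# Assumes result[0]["generated_text"] is either a plain string or a
# chat-message list whose last entry is the assistant turn.
def extract_reply(result):
    generated = result[0]["generated_text"]
    if isinstance(generated, list):
        # Chat-style output: the last message is the model's answer.
        return generated[-1]["content"]
    return generated  # plain-string output

# Example with a mocked pipeline result:
mock = [{"generated_text": [
    {"role": "user", "content": "What animal is on the candy?"},
    {"role": "assistant", "content": "a rabbit"},
]}]
print(extract_reply(mock))  # prints: a rabbit
```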
```python
# Load model directly
from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("Infinirc/Llama-3.2-Infinirc-11B-Vision-Instruct")
model = AutoModelForImageTextToText.from_pretrained("Infinirc/Llama-3.2-Infinirc-11B-Vision-Instruct")
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(processor.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

How to use Infinirc/Llama-3.2-Infinirc-11B-Vision-Instruct with vLLM:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Infinirc/Llama-3.2-Infinirc-11B-Vision-Instruct"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Infinirc/Llama-3.2-Infinirc-11B-Vision-Instruct",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "Describe this image in one sentence."
          },
          {
            "type": "image_url",
            "image_url": {
              "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
            }
          }
        ]
      }
    ]
  }'
```
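The same OpenAI-compatible request can be issued from Python. A minimal standard-library sketch, assuming the vLLM server above is listening on localhost:8000 (the identical payload also works against the SGLang server on port 30000):

```python
def build_chat_payload(model, prompt, image_url):
    """Build an OpenAI-compatible chat payload with one text and one image part."""
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

payload = build_chat_payload(
    "Infinirc/Llama-3.2-Infinirc-11B-Vision-Instruct",
    "Describe this image in one sentence.",
    "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg",
)

# To actually send it, POST the JSON body (requires a running server):
# import json, urllib.request
# req = urllib.request.Request(
#     "http://localhost:8000/v1/chat/completions",
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```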
How to use Infinirc/Llama-3.2-Infinirc-11B-Vision-Instruct with SGLang:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Infinirc/Llama-3.2-Infinirc-11B-Vision-Instruct" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Infinirc/Llama-3.2-Infinirc-11B-Vision-Instruct",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "Describe this image in one sentence."
          },
          {
            "type": "image_url",
            "image_url": {
              "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
            }
          }
        ]
      }
    ]
  }'
```

Alternatively, run the SGLang server in Docker:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "Infinirc/Llama-3.2-Infinirc-11B-Vision-Instruct" \
  --host 0.0.0.0 \
  --port 30000
```
```shell
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Infinirc/Llama-3.2-Infinirc-11B-Vision-Instruct",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "Describe this image in one sentence."
          },
          {
            "type": "image_url",
            "image_url": {
              "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
            }
          }
        ]
      }
    ]
  }'
```

How to use Infinirc/Llama-3.2-Infinirc-11B-Vision-Instruct with Docker Model Runner:
```shell
docker model run hf.co/Infinirc/Llama-3.2-Infinirc-11B-Vision-Instruct
```
Developer: 陳昭儒, Infinirc.com
Model version: 1.0
Training data: datasets related to Taiwanese culture, including dialogues, Taiwanese news, literary works, web articles, code, medical questions, and English conversations.
The Llama-3.2-Infinirc-11B-Vision-Instruct model was designed and fine-tuned specifically to better understand and generate text related to Taiwanese culture. The goal is to provide a strong language model that captures Taiwan-specific cultural elements and language habits, suitable for applications such as text generation and question answering.
Base model: meta-llama/Llama-3.2-11B-Vision-Instruct
Web UI: https://github.com/Infinirc/llama-vision-gradio-webui
| Tasks | Version | Filter | n-shot | Metric | Value | Stderr |
|---|---|---|---|---|---|---|
| arc_easy | 1 | none | 0 | acc ↑ | 0.7656 | ± 0.0087 |
| | | none | 0 | acc_norm ↑ | 0.7151 | ± 0.0093 |
| hellaswag | 1 | none | 0 | acc ↑ | 0.5689 | ± 0.0049 |
| | | none | 0 | acc_norm ↑ | 0.7617 | ± 0.0043 |
| piqa | 1 | none | 0 | acc ↑ | 0.7742 | ± 0.0098 |
| | | none | 0 | acc_norm ↑ | 0.7748 | ± 0.0097 |
| winogrande | 1 | none | 0 | acc ↑ | 0.7001 | ± 0.0129 |
Please comply with the license restrictions.
When using this model, take care that the generated content does not contain discriminatory or harmful information. Development and use of the model should follow ethical guidelines and social responsibility.
For any questions or further information, please contact our team through the channels below:
Email: ricky@infinirc.com
Website: https://infinirc.com