How to use Reverb/MedLLaMA-3 with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline
pipe = pipeline("text-generation", model="Reverb/MedLLaMA-3")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("Reverb/MedLLaMA-3")
model = AutoModelForCausalLM.from_pretrained("Reverb/MedLLaMA-3")
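With the tokenizer and model loaded directly, one more call produces text. A minimal sketch (the prompt and generation settings below are illustrative assumptions, not part of the model card):
# Generate a completion with the directly loaded model (illustrative prompt and settings).
inputs = tokenizer("What is a large language model?", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))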
How to use Reverb/MedLLaMA-3 with vLLM:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "Reverb/MedLLaMA-3"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "Reverb/MedLLaMA-3",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'
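vLLM can also run offline inside Python instead of behind an HTTP server. A minimal sketch using vLLM's Python API (the prompt and sampling values mirror the curl example; everything else is an assumption):
# Offline inference with vLLM's Python API; parameters mirror the curl example above.
from vllm import LLM, SamplingParams
llm = LLM(model="Reverb/MedLLaMA-3")
params = SamplingParams(temperature=0.5, max_tokens=512)
outputs = llm.generate(["Once upon a time,"], params)
print(outputs[0].outputs[0].text)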
How to use Reverb/MedLLaMA-3 with SGLang:
# Install SGLang from pip:
pip install sglang
# Start the SGLang server:
python3 -m sglang.launch_server \
--model-path "Reverb/MedLLaMA-3" \
--host 0.0.0.0 \
--port 30000
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
-H "Content-Type: application/json" \
--data '{
"model": "Reverb/MedLLaMA-3",
"prompt": "Once upon a time,",
"max_tokens": 512,
"temperature": 0.5
}'
# Alternatively, run the SGLang server with Docker:
docker run --gpus all \
--shm-size 32g \
-p 30000:30000 \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HF_TOKEN=<secret>" \
--ipc=host \
lmsysorg/sglang:latest \
python3 -m sglang.launch_server \
--model-path "Reverb/MedLLaMA-3" \
--host 0.0.0.0 \
--port 30000
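However the server is launched, it exposes the same OpenAI-compatible API, so any OpenAI client can call it. A minimal sketch with the openai Python package (the package choice is an assumption; parameters mirror the curl example above):
# Query the running SGLang server through its OpenAI-compatible completions endpoint.
from openai import OpenAI
client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")
response = client.completions.create(
    model="Reverb/MedLLaMA-3",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(response.choices[0].text)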
How to use Reverb/MedLLaMA-3 with Docker Model Runner:
docker model run hf.co/Reverb/MedLLaMA-3
This model was developed by Basel Anaya.
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch

model = "Reverb/MedLLaMA-3"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Format the chat messages with the model's chat template.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Build a text-generation pipeline in half precision, spread across available devices.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Sample a response.
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
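Zero-shot evaluation results on medical benchmarks (accuracy, ± standard error):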
| Tasks | Version | Filter | n-shot | Metric | Value | Stderr |
|---|---|---|---|---|---|---|
| stem | N/A | none | 0 | acc | 0.6466 | ± 0.0056 |
| stem | N/A | none | 0 | acc_norm | 0.6124 | ± 0.0066 |
| - medmcqa | Yaml | none | 0 | acc | 0.6118 | ± 0.0075 |
| - medmcqa | Yaml | none | 0 | acc_norm | 0.6118 | ± 0.0075 |
| - medqa_4options | Yaml | none | 0 | acc | 0.6143 | ± 0.0136 |
| - medqa_4options | Yaml | none | 0 | acc_norm | 0.6143 | ± 0.0136 |
| - anatomy (mmlu) | 0 | none | 0 | acc | 0.7185 | ± 0.0389 |
| - clinical_knowledge (mmlu) | 0 | none | 0 | acc | 0.7811 | ± 0.0254 |
| - college_biology (mmlu) | 0 | none | 0 | acc | 0.8264 | ± 0.0317 |
| - college_medicine (mmlu) | 0 | none | 0 | acc | 0.7110 | ± 0.0346 |
| - medical_genetics (mmlu) | 0 | none | 0 | acc | 0.8300 | ± 0.0378 |
| - professional_medicine (mmlu) | 0 | none | 0 | acc | 0.7868 | ± 0.0249 |
| - pubmedqa | 1 | none | 0 | acc | 0.7420 | ± 0.0196 |

| Groups | Version | Filter | n-shot | Metric | Value | Stderr |
|---|---|---|---|---|---|---|
| stem | N/A | none | 0 | acc | 0.6466 | ± 0.0056 |
| stem | N/A | none | 0 | acc_norm | 0.6124 | ± 0.0066 |