Instructions to use gaotang/RM-R1-Qwen2.5-Instruct-14B with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use gaotang/RM-R1-Qwen2.5-Instruct-14B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="gaotang/RM-R1-Qwen2.5-Instruct-14B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gaotang/RM-R1-Qwen2.5-Instruct-14B")
model = AutoModelForCausalLM.from_pretrained("gaotang/RM-R1-Qwen2.5-Instruct-14B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
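The snippets above load the model with default settings. For a 14B-parameter model, full-precision weights can exceed a single GPU's memory, so here is a minimal lower-memory loading sketch, assuming `torch` and `accelerate` are installed (the bfloat16 choice is an assumption, not something the model card prescribes):

```python
# Lower-memory loading sketch (assumes torch + accelerate are installed).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gaotang/RM-R1-Qwen2.5-Instruct-14B")
model = AutoModelForCausalLM.from_pretrained(
    "gaotang/RM-R1-Qwen2.5-Instruct-14B",
    torch_dtype=torch.bfloat16,  # half precision roughly halves memory vs. float32 (assumed, not from the card)
    device_map="auto",           # let accelerate place weights across available devices
)
```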
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use gaotang/RM-R1-Qwen2.5-Instruct-14B with vLLM:
Install from pip and serve the model
```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "gaotang/RM-R1-Qwen2.5-Instruct-14B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "gaotang/RM-R1-Qwen2.5-Instruct-14B",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```
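Besides curl, any OpenAI-compatible client can call the server. A minimal sketch using the `openai` Python package (the `api_key` value is a placeholder; vLLM ignores it unless you configure one):

```python
# Minimal OpenAI-client sketch against the local vLLM server.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")  # key is a placeholder
response = client.chat.completions.create(
    model="gaotang/RM-R1-Qwen2.5-Instruct-14B",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```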
Use Docker

```sh
# Run the official vLLM OpenAI-compatible server image:
docker run --runtime nvidia --gpus all \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    -p 8000:8000 \
    --ipc=host \
    vllm/vllm-openai:latest \
    --model "gaotang/RM-R1-Qwen2.5-Instruct-14B"
```
- SGLang
How to use gaotang/RM-R1-Qwen2.5-Instruct-14B with SGLang:
Install from pip and serve the model
```sh
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
    --model-path "gaotang/RM-R1-Qwen2.5-Instruct-14B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "gaotang/RM-R1-Qwen2.5-Instruct-14B",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```
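The SGLang server speaks the same OpenAI-compatible protocol, so the Python client sketch from the vLLM section works unchanged apart from the port:

```python
# Same OpenAI-client sketch, pointed at the SGLang server's port from the command above.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:30000/v1", api_key="EMPTY")  # key is a placeholder
response = client.chat.completions.create(
    model="gaotang/RM-R1-Qwen2.5-Instruct-14B",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```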
Use Docker images

```sh
docker run --gpus all \
    --shm-size 32g \
    -p 30000:30000 \
    -v ~/.cache/huggingface:/root/.cache/huggingface \
    --env "HF_TOKEN=<secret>" \
    --ipc=host \
    lmsysorg/sglang:latest \
    python3 -m sglang.launch_server \
    --model-path "gaotang/RM-R1-Qwen2.5-Instruct-14B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "gaotang/RM-R1-Qwen2.5-Instruct-14B",
        "messages": [
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'
```
- Docker Model Runner
How to use gaotang/RM-R1-Qwen2.5-Instruct-14B with Docker Model Runner:
```sh
docker model run hf.co/gaotang/RM-R1-Qwen2.5-Instruct-14B
```
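`docker model run` opens an interactive chat with the model. Docker Model Runner also exposes an OpenAI-compatible endpoint; the sketch below assumes host-side TCP access is enabled in Docker Desktop and uses what is, at the time of writing, the documented default port (12434), so verify both against your `docker model` setup:

```python
# Hedged sketch: assumes Docker Model Runner's host TCP endpoint is enabled
# (a Docker Desktop setting; 12434 is the documented default port, verify locally).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:12434/engines/v1", api_key="unused")
response = client.chat.completions.create(
    model="hf.co/gaotang/RM-R1-Qwen2.5-Instruct-14B",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```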
Add pipeline tag and library name, add usage example and missing sections
This PR improves the model card's metadata by adding `pipeline_tag` and `library_name`. These fields improve discoverability on the Hugging Face Hub, particularly for users searching for text-ranking models and those using the Transformers library. A code snippet demonstrating basic usage has also been added, and the "Training", "Evaluation", "Use Our Model", "Build Your Own Dataset", "Features", "Acknowledgement", and "Citations" sections from the GitHub README were copied into the model card.
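For reference, metadata like this lives in the model card's YAML front matter. The values below are inferred from the PR description (text ranking, Transformers), not copied from the diff:

```yaml
# Inferred from the PR description; check the actual diff for the exact values.
pipeline_tag: text-ranking
library_name: transformers
```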
Thank you very much! This model card will be continuously updated.