Instructions for using Fu01978/TinyLM with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
  - Transformers
How to use Fu01978/TinyLM with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Fu01978/TinyLM")
```

```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("Fu01978/TinyLM", dtype="auto")
```
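A quick smoke test of the pipeline above; the prompt and generation settings are illustrative, not values prescribed by the model card:

```python
# Generate a short continuation with the pipeline created above.
# max_new_tokens and do_sample are illustrative choices.
output = pipe("Once upon a time,", max_new_tokens=50, do_sample=True)
print(output[0]["generated_text"])
```

Note that `AutoModel` loads the bare backbone without a language-modeling head; for generation outside the pipeline, `AutoModelForCausalLM.from_pretrained` is the usual entry point.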
- Notebooks
  - Google Colab
  - Kaggle
- Local Apps
  - vLLM
How to use Fu01978/TinyLM with vLLM:
Install from pip and serve the model:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Fu01978/TinyLM"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Fu01978/TinyLM",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
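Since the server speaks the OpenAI-compatible API, it can also be called from Python. A minimal sketch using the `openai` client package; the base URL matches vLLM's default port, the API key is a placeholder that vLLM ignores, and the prompt and sampling values are illustrative:

```python
from openai import OpenAI

# vLLM serves an OpenAI-compatible API on port 8000 by default;
# it does not check the key, but the client requires one.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="Fu01978/TinyLM",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```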
Use Docker

```shell
docker model run hf.co/Fu01978/TinyLM
```
  - SGLang
How to use Fu01978/TinyLM with SGLang:
Install from pip and serve the model:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Fu01978/TinyLM" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Fu01978/TinyLM",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
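The same endpoint can be called from Python with plain `requests` instead of curl; a minimal sketch (port 30000 matches the launch command above; payload values are illustrative):

```python
import requests

# POST the same JSON payload the curl example sends to the SGLang server.
resp = requests.post(
    "http://localhost:30000/v1/completions",
    json={
        "model": "Fu01978/TinyLM",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["text"])
```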
Use Docker images

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "Fu01978/TinyLM" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Fu01978/TinyLM",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

  - Docker Model Runner
How to use Fu01978/TinyLM with Docker Model Runner:
```shell
docker model run hf.co/Fu01978/TinyLM
```
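Docker Model Runner also exposes an OpenAI-compatible API. A hedged sketch, assuming host-side TCP access was enabled on Docker's documented default port 12434; the exact port and path depend on your Docker setup, so treat these values as assumptions:

```shell
# Assumes TCP access was enabled first, e.g. on Docker Desktop:
#   docker desktop enable model-runner --tcp 12434
curl http://localhost:12434/engines/v1/chat/completions \
  -H "Content-Type: application/json" \
  --data '{
    "model": "hf.co/Fu01978/TinyLM",
    "messages": [{"role": "user", "content": "Once upon a time,"}]
  }'
```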
Update modeling_tinylm.py

The commit changes the default sampling parameters of `generate` and updates the usage hint printed under `__main__`.

modeling_tinylm.py (+2 -2):
```diff
@@ -89,7 +89,7 @@ def load_tinylm(model_dir, device="cpu"):
     return model, tokenizer, config
 
 
-def generate(model, tokenizer, prompt, max_new_tokens=100, temperature=0.8, top_…
+def generate(model, tokenizer, prompt, max_new_tokens=100, temperature=0.1, top_k=25, device="cpu"):
     MAX_SEQ_LEN = model.pos_emb.num_embeddings
     model.eval()
     ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
@@ -114,4 +114,4 @@ def generate(model, tokenizer, prompt, max_new_tokens=100, temperature=0.8, top_…
 if __name__ == "__main__":
     model, tokenizer, config = load_tinylm("./tinylm")
     print("Model loaded!")
-    print("Use 'module.generate(model, tokenizer, \"Once upon a time…
+    print("Use 'module.generate(model, tokenizer, \"Once upon a time\")' to generate.")
```
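The diff touches only the `generate` signature and the final print, so the function body is not shown. For orientation, here is a minimal sketch of what a temperature/top-k sampling loop with this signature typically looks like, written in plain PyTorch: the first three lines mirror the context visible in the diff, the rest is illustrative, and it assumes the model's forward pass returns raw logits of shape (batch, seq, vocab):

```python
import torch

def generate_sketch(model, tokenizer, prompt, max_new_tokens=100,
                    temperature=0.1, top_k=25, device="cpu"):
    # Illustrative only: mirrors the signature from the diff, not the
    # actual implementation in modeling_tinylm.py.
    MAX_SEQ_LEN = model.pos_emb.num_embeddings
    model.eval()
    ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
    with torch.no_grad():
        for _ in range(max_new_tokens):
            # Assumes model(ids) returns logits of shape (batch, seq, vocab).
            logits = model(ids[:, -MAX_SEQ_LEN:])[:, -1, :] / temperature
            # Keep only the top_k most likely tokens, then sample.
            topk_vals, topk_idx = torch.topk(logits, top_k)
            probs = torch.softmax(topk_vals, dim=-1)
            next_id = topk_idx.gather(-1, torch.multinomial(probs, 1))
            ids = torch.cat([ids, next_id], dim=-1)
    return tokenizer.decode(ids[0], skip_special_tokens=True)
```

Lowering the default temperature from 0.8 to 0.1 and sampling from only the 25 most likely tokens both sharpen the distribution, so the new defaults trade diversity for more deterministic output.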