fix vllm deployment
#16
by Mingke977
- docs/deploy_guidance.md +1 -1
docs/deploy_guidance.md CHANGED
@@ -11,7 +11,7 @@ Here is the example to serve this model on a H200 single node via vLLM:
 
 1. pull the Docker image.
 ```bash
-docker pull jdopensource/joyai-llm-vllm:v0.
+docker pull jdopensource/joyai-llm-vllm:v0.15.1-joyai_llm_flash
 ```
 2. launch JoyAI-LLM Flash model with dense MTP.
 ```bash
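The launch command for step 2 falls outside this hunk's seven-line context, so it is not shown above. For orientation only, here is a minimal sketch of how the corrected image might be started to serve the model through vLLM's OpenAI-compatible server; the host model path, port, served model name, and tensor-parallel size are assumptions, not values taken from this PR or the deploy guide.

```bash
# Hypothetical launch sketch (not from this PR): run the corrected image and
# expose vLLM's OpenAI-compatible API on port 8000.
# The model path, served name, and parallelism below are placeholder assumptions.
docker run --gpus all --ipc=host --rm -p 8000:8000 \
  -v /path/to/JoyAI-LLM-Flash:/models/JoyAI-LLM-Flash \
  jdopensource/joyai-llm-vllm:v0.15.1-joyai_llm_flash \
  vllm serve /models/JoyAI-LLM-Flash \
    --served-model-name joyai-llm-flash \
    --tensor-parallel-size 8
```

Refer to the corrected `docs/deploy_guidance.md` for the actual launch flags, including the dense MTP settings mentioned in step 2.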