Less Context Length than Expected (600k)

#57
by Forcewithme - opened

Deploying with lmsysorg/sglang:glm5-hopper on 8×H20-3e (141G), with the official command:

python3 -m sglang.launch_server \
  --model-path zai-org/GLM-5-FP8 \
  --tp-size 8 \
  --tool-call-parser glm47  \
  --reasoning-parser glm45 \
  --speculative-algorithm EAGLE \
  --speculative-num-steps 3 \
  --speculative-eagle-topk 1 \
  --speculative-num-draft-tokens 4 \
  --mem-fraction-static 0.85 \
  --served-model-name glm-5-fp8

I found that as soon as the prefill token count reaches 600k, the server returns empty content, indicating the context length exceeds the limit. But it shouldn't.

On the same machine with sglang 0.5.9, qwen3.5-397b, kimi-k2.5, and minimax-m2.5 can all reach their maximum context lengths, which are 196k and 256k. Notably, kimi-k2.5 is a 1T-parameter model.

In my experience, empty responses like this usually indicate insufficient GPU memory. But since those other models, including the 1T one, reach their full context lengths on the same hardware, I don't understand why locally deployed GLM-5 only supports up to 600k. I'd appreciate an answer from the official team or the community.

I also found a new image on Docker Hub: docker pull lmsysorg/sglang:glm5-hopper-patched. What is this image used for?

GLM-5 only supports 200K context

ZHANGYUXUAN-zR changed discussion status to closed

GLM-5 only supports 200K context

That's a typo on my part: in my tests it only supports 60k, not 600k.

In what scenario are you testing this? Is the content field empty in the response because everything landed in reasoning_content? You can print or log the raw output to check whether the model is actually generating something that the parser isn't picking up.
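One way to rule out the chat parser is to query SGLang's native /generate endpoint, which returns raw text without the reasoning/tool-call parsing applied by the OpenAI-compatible route. A minimal sketch; the server address is a placeholder, and the payload shape follows SGLang's native generate API:

```python
import json
import urllib.request

def build_generate_payload(prompt: str, max_new_tokens: int = 10) -> dict:
    """Build a request body for SGLang's native /generate endpoint,
    which bypasses the chat-template reasoning/tool parsers entirely."""
    return {
        "text": prompt,
        "sampling_params": {
            "max_new_tokens": max_new_tokens,
            "temperature": 0.0,
        },
    }

if __name__ == "__main__":
    # Replace with your actual server address.
    req = urllib.request.Request(
        "http://localhost:30000/generate",
        data=json.dumps(build_generate_payload("test " * 1000)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=600) as r:
        # The raw completion is in the "text" field. If this is non-empty
        # while the chat endpoint returns empty content, the parser is at fault.
        print(json.loads(r.read()).get("text"))
```

If the raw text is non-empty at 60k+ prefill tokens, the problem is in the reasoning/tool-call parsing layer rather than the model or KV cache.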

ZHANGYUXUAN-zR changed discussion status to open

Random input. Here is my test script:

import argparse
import sys
from openai import OpenAI
from transformers import AutoTokenizer, PreTrainedTokenizerFast

MODEL_NAME = "GLM5/"

ZOO_PATH = "/data/llm_zoo"
TOKEN_PATH = f"{ZOO_PATH}/{MODEL_NAME}"

def get_tokenizer():
    try:
        return AutoTokenizer.from_pretrained(TOKEN_PATH, trust_remote_code=True)
    except Exception as e:
        print(f"AutoTokenizer failed: {e}. Trying PreTrainedTokenizerFast...")
        try:
            return PreTrainedTokenizerFast.from_pretrained(TOKEN_PATH)
        except Exception as e2:
            print(f"Error loading tokenizer from '{TOKEN_PATH}'.")
            print("Please ensure you have updated the TOKEN_PATH variable in the script with the correct path.")
            print(f"Details: {e}")
            print(f"Fallback Details: {e2}")
            sys.exit(1)

def generate_prompt(tokenizer, target_token_count):
    """
    Generate a prompt with approximately the target number of tokens.
    We use a simple repeated token strategy.
    """
    # Find a simple token to repeat (e.g., token for "test" or "a")
    # Using a simple common token avoids complex merging issues usually
    sample_text = "test"
    sample_ids = tokenizer.encode(sample_text, add_special_tokens=False)
    if not sample_ids:
        token_id = 1 # Fallback
    else:
        token_id = sample_ids[0]

    # Create a list of token IDs
    input_ids = [token_id] * target_token_count

    # Decode to text so we can send it via API
    prompt = tokenizer.decode(input_ids)

    return prompt

def main():
    parser = argparse.ArgumentParser(description="Test max input token support for GLM5 API")
    parser.add_argument("--start_token", type=int, required=True, help="Starting token count")
    parser.add_argument("--add_token", type=int, required=True, help="Token increment step")
    parser.add_argument("--tokenizer", type=str, default=TOKEN_PATH, help="Path of tokenizer")

    args = parser.parse_args()

    print(f"Starting test with start_token={args.start_token}, add_token={args.add_token}")

    # Load tokenizer
    tokenizer = get_tokenizer()
    print(f"Tokenizer loaded successfully from {TOKEN_PATH}")

    # Initialize OpenAI client
    # No API key validation as requested
    client = OpenAI(
        api_key="none", 
        base_url="http://dummy_model_url/v1"
    )

    current_count = args.start_token

    while True:
        print(f"\nTesting input length: {current_count} tokens...")

        prompt = generate_prompt(tokenizer, current_count)

        try:
            # Call the API
            # model_name is not validated, using "default"
            response = client.chat.completions.create(
                model="default",
                messages=[
                    {"role": "user", "content": prompt}
                ],
                max_tokens=10, # Minimal output tokens needed
                stream=False
            )
            print(response.choices[0].message.reasoning_content, response.choices[0].message.content)
            # Check for the "empty result" condition: the server returned
            # valid JSON but both content and reasoning_content are empty
            if not response.choices[0].message.content and not response.choices[0].message.reasoning_content:
                print(f"[LIMIT REACHED] Empty result (no content or reasoning_content) at {current_count} tokens.")
                break

            # If we get here, the request was successful
            usage_info = ""
            if hasattr(response, 'usage') and response.usage:
                usage_info = f"(API reported prompt_tokens: {response.usage.prompt_tokens})"

            print(f"Success. {usage_info}")

            # Increment and continue
            current_count += args.add_token

        except Exception as e:
            # If the SDK throws an error (e.g. connection error, or parsing error from empty body)
            print(f"[STOPPED] Exception occurred at {current_count} tokens.")
            print(f"Error message: {e}")
            break

if __name__ == "__main__":
    main()
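As a side note, the linear sweep above overshoots the true threshold by up to add_token tokens per probe. A bisection narrows the boundary in far fewer requests. A minimal sketch, where probe is a placeholder for one API call returning True on non-empty output (since the failing length reportedly varies, probe may need retries per length to be reliable):

```python
def bisect_limit(probe, lo: int, hi: int, tol: int = 1000) -> int:
    """Binary-search the largest input length (in tokens) for which
    probe(length) still returns True, to within tol tokens.

    Assumes probe is monotonic: True below the limit, False above it.
    """
    while hi - lo > tol:
        mid = (lo + hi) // 2
        if probe(mid):
            lo = mid  # mid still succeeds; the limit is higher
        else:
            hi = mid  # mid fails; the limit is lower
    return lo

# Example with a fake probe that fails past 75000 tokens:
print(bisect_limit(lambda n: n <= 75000, 60000, 200000))
```

Each iteration halves the search interval, so locating the limit in a 140k-token range to within 1k tokens takes about eight probes instead of dozens.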

Here are the logs:

(base) ➜  context python test_max_tokens.py --start_token 60000 --add_token 5000 
Starting test with start_token=60000, add_token=5000
Tokenizer loaded successfully from /data/llm_zoo/Qwen3.5-397B-A17B/

Testing input length: 60000 tokens...
testcss 对test None
Success. (API reported prompt_tokens: 60005)

Testing input length: 65000 tokens...
newr:// None
Success. (API reported prompt_tokens: 65005)

Testing input length: 70000 tokens...
1.0, if && None
Success. (API reported prompt_tokens: 70005)

Testing input length: 75000 tokens...
None None
[LIMIT REACHED] Empty result (no choices) received at 75000 tokens.
  1. Above are the test script and the terminal logs.
  2. The failing context length is not stable. In my tests, the shortest empty-output length is 60k and the longest is around 120k, both well short of the theoretical maximum context.
  3. Both content and reasoning_content are empty.
  4. It's not caused by the random input. This case was originally reported by a software engineer at my company (I am in charge of model deployment), and I reproduced it with random input.
