caizhi1 committed
Commit cde2d32 · 1 Parent(s): cd8575c

Update README.md

Browse files
Files changed (1) hide show
  1. README.md +118 -3
README.md CHANGED
@@ -1,3 +1,118 @@
- ---
- license: mit
- ---
+ ---
+ license: mit
+ language:
+ - en
+ base_model:
+ - inclusionAI/Ling-mini-base-2.0-20T
+ pipeline_tag: text-generation
+ library_name: transformers
+ tags:
+ - moe
+ ---
+
+ # Ring-mini-linear-2.0
+
+ <p align="center">
+ <img src="https://mdn.alipayobjects.com/huamei_qa8qxu/afts/img/A*4QxcQrBlTiAAAAAAQXAAAAgAemJ7AQ/original" width="100"/>
+ </p>
+ <p align="center">🤗 <a href="https://huggingface.co/inclusionAI">Hugging Face</a>&nbsp;&nbsp; | &nbsp;&nbsp;🤖 <a href="https://modelscope.cn/organization/inclusionAI">ModelScope</a></p>
+
+ ## Introduction
+
+ We are excited to announce the official open-source release of Ring-mini-linear-2.0!
+ Building on the success of our Ling 2.0 series, this model continues to leverage a hybrid architecture of linear and standard attention, balancing high performance with superior efficiency. By integrating our proven MoE design with optimizations such as a 1/32 expert activation ratio and MTP layers, Ring-mini-linear-2.0 matches the performance of an 8B dense model while activating only 1.4B parameters. The model was converted from Ling-mini-base-2.0 and further trained on an additional xx B tokens.
+ On benchmarks, Ring-mini-linear-2.0 not only holds its own against standard-attention models (such as Ring-mini-2.0) but also outperforms other open-source MoE and dense models in its class on several demanding tasks. With native support for a 128K context length, it is also faster and more precise, especially when handling long inputs and outputs.
+
+ <p align="center">
+ <img src="https://cdn-uploads.huggingface.co/production/uploads/68d20104a6f8ea66da0cb447/v3t1CFN2MSZznYFej2Oc6.webp" width="800">
+ </p>
+
+ ## Evaluation
+ <p align="center">
+ <img src="https://cdn-uploads.huggingface.co/production/uploads/68d20104a6f8ea66da0cb447/_tjjgBEBlankfrWUY0N9i.png" width="1000">
+ </p>
+
+ ## Linear Attention, High Sparsity, High-Speed Generation
+
+ Thanks to its hybrid attention mechanism and highly sparse MoE architecture, Ring-mini-linear-2.0 achieves near-linear time complexity and constant space complexity, resulting in outstanding inference efficiency. To demonstrate this advantage, we conducted a head-to-head comparison between our model and top-tier competitors of similar size or performance.
+ The results are clear. In the prefill stage, once the context length exceeds 256K, Ring-mini-linear-2.0's throughput is more than 12 times that of Qwen3-8B. The advantage is even more pronounced in the high-concurrency decode stage: for generation lengths beyond 32K, its throughput again exceeds 12 times that of Qwen3-8B.
+
+ ## Model Downloads
+
+ <div align="center">
+
+ | **Model** | **#Total Params** | **#Activated Params** | **Context Length** | **Download** |
+ | :----------------: | :---------------: | :-------------------: | :----------------: | :----------: |
+ | Ring-mini-linear-2.0 | 16.8B | 1.4B | 128K | [🤗 HuggingFace](https://huggingface.co/inclusionAI/Ring-mini-linear-2.0) <br>[🤖 ModelScope](https://modelscope.cn/models/inclusionAI/Ring-mini-linear-2.0) |
+ </div>
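+
+ If you prefer to fetch the weights ahead of time, they can be pulled from the Hub with `huggingface-cli` (a minimal sketch; the repo id comes from the table above, and `--local-dir` is an optional destination of your choosing):
+
+ ```bash
+ # repo id from the table above; --local-dir is an optional local destination
+ huggingface-cli download inclusionAI/Ring-mini-linear-2.0 --local-dir ./Ring-mini-linear-2.0
+ ```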
+
+ ## Quickstart
+
+ ### Requirements
+ 1. `pip install flash-linear-attention==0.3.2`
+ 2. `pip install transformers==4.56.1`
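+
+ Both pinned packages can also be installed in one step (same versions as the list above):
+
+ ```bash
+ # versions pinned as in the Requirements list
+ pip install flash-linear-attention==0.3.2 transformers==4.56.1
+ ```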
+
+ ### 🤗 Hugging Face Transformers
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_name = "inclusionAI/Ring-mini-linear-2.0"
+
+ model = AutoModelForCausalLM.from_pretrained(
+     model_name,
+     dtype="auto",
+     device_map="auto",
+     trust_remote_code=True,
+ )
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+
+ prompts = [
+     "Give me a short introduction to large language models."
+ ]
+ input_texts = []
+ for prompt in prompts:
+     messages = [
+         {"role": "user", "content": prompt}
+     ]
+     text = tokenizer.apply_chat_template(
+         messages,
+         tokenize=False,
+         add_generation_prompt=True,
+         enable_thinking=True
+     )
+     input_texts.append(text)
+
+ print(input_texts)
+
+ model_inputs = tokenizer(input_texts, return_tensors="pt", return_token_type_ids=False, padding=True, padding_side='left').to(model.device)
+
+ generated_ids = model.generate(
+     **model_inputs,
+     max_new_tokens=8192,
+     do_sample=False,
+ )
+ generated_ids = [
+     output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
+ ]
+
+ responses = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
+
+ print("*" * 30)
+ print(responses)
+ print("*" * 30)
+ ```
+
+ ### SGLang
+ ```bash
+ python -m sglang.launch_server \
+     --model-path <model_path> \
+     --trust-remote-code \
+     --tp-size 1 \
+     --disable-radix-cache \
+     --json-model-override-args "{\"linear_backend\": \"seg_la\"}"
+ ```
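+
+ Once launched, the server exposes an OpenAI-compatible API. A minimal request sketch, assuming the default SGLang port (30000) on localhost:
+
+ ```bash
+ # assumes SGLang's default port 30000; <model_path> as used in the launch command above
+ curl -s http://127.0.0.1:30000/v1/chat/completions \
+     -H "Content-Type: application/json" \
+     -d '{
+           "model": "<model_path>",
+           "messages": [{"role": "user", "content": "Give me a short introduction to large language models."}],
+           "max_tokens": 512
+         }'
+ ```
+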
+ ### vLLM
+
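+ A generic launch sketch (an assumption, not from this card: it presumes the checkpoint can be served by vLLM's OpenAI-compatible server with remote code enabled, analogous to the SGLang command above; any model-specific flags are not shown):
+
+ ```bash
+ # assumption: generic vLLM serve invocation; model-specific flags (if any) not covered here
+ vllm serve <model_path> \
+     --trust-remote-code \
+     --tensor-parallel-size 1
+ ```
+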
+ ## Citation