## Introduction

Today, we are officially open-sourcing Ring-mini-linear-2.0.

This model continues to employ a hybrid architecture that combines linear attention and standard attention mechanisms, striking a balance between performance and efficiency. It inherits the efficient MoE (Mixture-of-Experts) design of the Ling 2.0 series and, through architectural optimizations such as a 1/32 expert activation ratio and MTP layers, achieves the performance of an ~8B dense model while activating only 1.4B of its 16B total parameters. The model is continually trained from Ling-mini-base-2.0.

In terms of performance, the hybrid linear model is comparable overall to standard attention models of similar size (e.g., Ring-mini-2.0) and surpasses other open-source MoE and dense models of the same class on several challenging benchmarks. Furthermore, it natively supports a 128k context window, delivering superior speed and accuracy, especially on tasks involving long inputs and outputs.
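
As a back-of-the-envelope illustration of this sparsity (a sketch added for clarity; the numbers come from the description above, not from the model's config file):

```python
# Rough arithmetic behind "1.4B active out of 16B total parameters".
total_params = 16e9         # all parameters, including every expert
active_params = 1.4e9       # parameters touched per forward pass
expert_activation = 1 / 32  # fraction of experts routed per token

print(f"expert activation ratio: {expert_activation:.1%}")              # ~3.1%
print(f"overall active share:    {active_params / total_params:.1%}")   # ~8.8%

# The overall active share sits above 1/32 because attention layers,
# embeddings, and other always-on components run for every token;
# only the expert FFNs are sparsely routed.
```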
## Evaluation

To better demonstrate the model's reasoning capabilities, we compared it with three other models (Ring-mini-2.0, Qwen3-8B-thinking, and GPT-OSS-20B-Medium) on 5 challenging reasoning benchmarks spanning mathematics, code, and science. We observe that the hybrid linear architecture achieves performance comparable to that of standard softmax attention.

<div style="display: flex; justify-content: center;">
<div style="text-align: center;">
<img src="https://mdn.alipayobjects.com/huamei_jcuiuk/afts/img/4T3LQaJ2a1AAAAAAagAAAAgADr6CAQFr/original" width="1000">
<p style="margin-top: 8px; font-size: 14px;"><strong>Figure 2:</strong> Model Performance Comparison</p>
</div>
</div>

## Linear Attention, Highly Sparse, High-Speed Generation
Thanks to its hybrid attention mechanism and highly sparse MoE architecture, Ring-mini-linear-2.0 achieves near-linear time complexity and constant space complexity, resulting in outstanding inference efficiency. To fully demonstrate this advantage, we conducted a head-to-head comparison between our model and top-tier competitors of similar size or performance.
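
To make the complexity claim concrete, here is a minimal, unnormalized sketch (illustrative only, not the model's actual kernels) of why decoding with softmax attention costs memory that grows with sequence length, while a linear-attention layer runs in constant space:

```python
import numpy as np

d = 4  # tiny head dimension, for illustration only

# Softmax attention at decode time: the KV cache grows with sequence
# length, and every step attends over all cached entries -- O(T).
kv_cache = []

def softmax_attn_step(q, k, v):
    kv_cache.append((k, v))
    scores = np.array([q @ k_i for k_i, _ in kv_cache])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return sum(w * v_i for w, (_, v_i) in zip(weights, kv_cache))

# Linear attention at decode time: one fixed-size (d x d) state matrix,
# updated in place -- O(1) memory and O(1) work per generated token.
# (Normalization is omitted to keep the sketch short.)
state = np.zeros((d, d))

def linear_attn_step(q, k, v):
    state[...] += np.outer(k, v)  # fold the new token into the state
    return q @ state              # read out with the current query

for _ in range(6):
    q, k, v = (np.random.randn(d) for _ in range(3))
    softmax_attn_step(q, k, v)
    linear_attn_step(q, k, v)

print(len(kv_cache), state.shape)  # cache grew to 6; state stayed (4, 4)
```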
### 🚀 SGLang

#### Environment Preparation

We will submit our model to the official SGLang release later; for now, you can prepare the environment with the following steps:

```shell
pip3 install sgl-kernel==0.3.9.post2 vllm==0.10.2
```

Then install our SGLang wheel package:

```shell
pip install https://github.com/inclusionAI/Ring-V2/blob/main/hybrid_linear/whls/sglang-0.5.2-py3-none-any.whl
```
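
As a quick, optional sanity check (not part of the original steps), you can confirm the installed build matches the wheel:

```python
import sglang

# Expect 0.5.2, matching the wheel installed above.
print(sglang.__version__)
```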

#### Run Inference

Both BF16 and FP8 models are now supported by SGLang; which one runs is determined by the dtype of the model in ${MODEL_PATH}. Both use the same commands below.
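
If you are unsure which variant a checkpoint is, you can inspect its config first (a sketch based on standard Hugging Face config conventions, where FP8 checkpoints typically carry a quantization_config block; this is not a documented SGLang step):

```python
import json
import os

model_path = os.environ.get("MODEL_PATH", "<model_path>")
with open(os.path.join(model_path, "config.json")) as f:
    cfg = json.load(f)

# BF16 checkpoints usually declare torch_dtype; FP8 ones typically
# add a quantization_config block on top of it.
print("torch_dtype:", cfg.get("torch_dtype"))
print("quantized:  ", "quantization_config" in cfg)
```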

- Start server:

```shell
python -m sglang.launch_server \
    --model-path <model_path> \
    --trust-remote-code \
    --disable-radix-cache \
    --json-model-override-args "{\"linear_backend\": \"seg_la\"}"
```

- Client:

```shell
curl -s http://localhost:${PORT}/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "auto", "messages": [{"role": "user", "content": "What is the capital of France?"}]}'
```
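
Because the server exposes an OpenAI-compatible API, the same request can also be made from Python with the openai client (a sketch; SGLang's default port is assumed when ${PORT} is unset):

```python
import os

from openai import OpenAI

# Point the OpenAI client at the local SGLang server; the API key is
# unused by default, but the client requires a non-empty value.
client = OpenAI(
    base_url=f"http://localhost:{os.environ.get('PORT', '30000')}/v1",
    api_key="EMPTY",
)

response = client.chat.completions.create(
    model="auto",  # mirrors the curl example above
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```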

More usage examples can be found [here](https://docs.sglang.ai/basic_usage/send_request.html).

### vLLM

TODO

## Citation