---
license: apache-2.0
emoji: π
pinned: true
---

**Hi there** 👋 Welcome to the official homepage of inclusionAI, the home of Ant Group's Artificial General Intelligence (AGI) initiative.

Here you can find Large Language Models (LLMs), Reinforcement Learning (RL) systems, other systems related to model training and inference, and other AGI-related frameworks and applications.

## Our Models

- [**Ling**](https://huggingface.co/collections/inclusionAI/ling-v2-68bf1dd2fc34c306c1fa6f86): The general-purpose line, with SKUs such as mini (lightning-fast) and flash (solid performance), scaling up to our 1T model (under development).
- [**Ring**](https://huggingface.co/collections/inclusionAI/ring-v2-68db3941a6c4e984dd2015fa): The deep-reasoning and cognitive variant, with SKUs from mini (cost-efficient) to flash (well-rounded answers), also featuring a 1T flagship (under development).
- [**Ming**](https://huggingface.co/collections/inclusionAI/ming-680afbb62c4b584e1d7848e8): The any-to-any line: a unified multimodal model that processes images, text, audio, and video, with strong proficiency in both speech and image generation.
- [**LLaDA**](https://huggingface.co/collections/inclusionAI/llada-68c141bca386b06b599cfe45): A diffusion language model developed by the AGI Center, Ant Research Institute.
- [**GroveMoE**](https://huggingface.co/collections/inclusionAI/grovemoe-68a2b58acbb55827244ef664): An open-source family of LLMs developed by the AGI Center, Ant Research Institute.
- [**UI-Venus**](https://huggingface.co/collections/inclusionAI/ui-venus-689f2fb01a4234cbce91c56a): A native UI agent built on the Qwen2.5-VL multimodal large language model, designed to perform precise GUI element grounding and effective navigation using only screenshots as input.
- ...

### Get Involved

Our work is guided by the principles of fairness, transparency, and collaboration, and we are dedicated to creating models that reflect the diversity of the world we live in.

- **Explore Our Models**: Check out our latest models and datasets on the inclusionAI Hub.
- **Contribute**: Interested in contributing? Visit our [GitHub](https://github.com/inclusionAI) organization to get started.
- **Join the Conversation**: Connect with the Ant Ling team on [Twitter](https://x.com/AntLingAGI), the Ant Open Source team on [Twitter](https://x.com/ant_oss), or the community on [Discord](https://discord.gg/2X4zBSz9c6) to stay updated on our latest projects and initiatives.
Most inclusionAI models are also available through our partners' hosting services. Feel free to try them out at [SiliconFlow](https://www.siliconflow.com/) or [ZenMux.ai](https://zenmux.ai/).
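Hosted services like these typically expose OpenAI-compatible chat-completions endpoints. As a minimal sketch of what such a request looks like — the base URL below is a placeholder and the exact model ID varies by provider, so consult each provider's documentation for the real values:

```python
import json

# Placeholder values -- substitute your provider's real base URL and the
# exact model ID it serves (e.g. an inclusionAI release from "What's New").
BASE_URL = "https://api.example-provider.com/v1"  # hypothetical, not a real endpoint

def build_chat_request(model: str, prompt: str, temperature: float = 0.7) -> dict:
    """Build an OpenAI-style chat-completions body for POST {BASE_URL}/chat/completions."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

body = build_chat_request("inclusionAI/Ling-mini-2.0", "Hello, Ling!")
print(json.dumps(body, indent=2))
```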
## What's New

- [2025/9/30] [inclusionAI/Ring-1T-preview](https://huggingface.co/inclusionAI/Ring-1T-preview)
- [2025/9/28] [inclusionAI/Ring-flash-linear-2.0](https://huggingface.co/inclusionAI/Ring-flash-linear-2.0)
- [2025/9/24] [inclusionAI/Ling-flash-2.0-GGUF](https://huggingface.co/inclusionAI/Ling-flash-2.0-GGUF)
- [2025/9/24] [inclusionAI/Ring-mini-2.0-GGUF](https://huggingface.co/inclusionAI/Ring-mini-2.0-GGUF)
- [2025/9/24] [inclusionAI/Ling-mini-2.0-GGUF](https://huggingface.co/inclusionAI/Ling-mini-2.0-GGUF)
- [2025/9/17] [inclusionAI/Ling-flash-2.0](https://huggingface.co/inclusionAI/Ling-flash-2.0)
- [2025/9/11] [inclusionAI/LLaDA-MoE-7B-A1B-Instruct](https://huggingface.co/inclusionAI/LLaDA-MoE-7B-A1B-Instruct)
- [2025/9/09] [inclusionAI/Ring-mini-2.0](https://huggingface.co/inclusionAI/Ring-mini-2.0)
- [2025/9/09] [inclusionAI/Ling-mini-2.0](https://huggingface.co/inclusionAI/Ling-mini-2.0) ([Ling-V2 Collection](https://huggingface.co/collections/inclusionAI/ling-v2-68bf1dd2fc34c306c1fa6f86))
- [2025/8/29] [inclusionAI/Qwen3-32B-AWorld](https://huggingface.co/inclusionAI/Qwen3-32B-AWorld)