OpenSeeker: Democratizing Frontier Search Agents by Fully Open-Sourcing Training Data
Abstract
Deep search capabilities have become an indispensable competency for frontier Large Language Model (LLM) agents, yet the development of high-performance search agents remains dominated by industrial giants due to a lack of transparent, high-quality training data. This persistent data scarcity has fundamentally hindered the broader research community's progress in developing and innovating within this domain. To bridge this gap, we introduce OpenSeeker, the first fully open-source search agent (i.e., model and data) that achieves frontier-level performance through two core technical innovations: (1) fact-grounded, scalable, controllable QA synthesis, which reverse-engineers the web graph via topological expansion and entity obfuscation to generate complex, multi-hop reasoning tasks with controllable coverage and complexity; and (2) denoised trajectory synthesis, which employs a retrospective summarization mechanism to denoise the trajectory, thereby enabling the teacher LLMs to generate high-quality actions. Experimental results demonstrate that OpenSeeker, trained in a single run on only 11.7k synthesized samples, achieves state-of-the-art performance across multiple benchmarks including BrowseComp, BrowseComp-ZH, xbench-DeepSearch, and WideSearch. Notably, trained with simple SFT, OpenSeeker significantly outperforms the second-best fully open-source agent, DeepDive (e.g., 29.5% vs. 15.3% on BrowseComp), and even surpasses industrial competitors such as Tongyi DeepResearch (trained via extensive continual pre-training, SFT, and RL) on BrowseComp-ZH (48.4% vs. 46.7%). We fully open-source the complete training dataset and the model weights to democratize frontier search agent research and foster a more transparent, collaborative ecosystem.
Community

Shattering the Corporate Data Moat: Meet OpenSeeker, the First Fully Open-Source Frontier Search Agent by a Purely Academic Team.
High-performance search agents have long been a "closed-door game": even when model weights are open, the high-quality training data (the real secret sauce) has remained hidden. We introduce OpenSeeker, a purely academic initiative achieving SOTA search performance while open-sourcing everything: models and 100% of the training data.
Why OpenSeeker Is a Game Changer:
Efficiency Over Scale: We prove you don't need millions of samples. OpenSeeker achieves frontier performance with just 11.7k synthesized samples using single-run SFT, with no complex RL or continual pre-training.
Empowering Academia: We provide the missing high-quality data foundation, enabling researchers to build next-gen agents without corporate-scale resources.
Beating Industry Giants:
48.4% on BrowseComp-ZH: surpassing Alibaba's Tongyi DeepResearch (46.7%)! We beat their massive CPT + SFT + RL pipeline using SFT only.
SOTA among ~30B SFT models: 29.5% (BrowseComp), 74.0% (xbench-DeepSearch), 59.4% (WideSearch).
π§ͺ The "Secret Sauce" Behind the Data:
Fact-Grounded QA Synthesis: We reverse-engineer the web graph using entity obfuscation to generate complex multi-hop queries.
Denoised Trajectory Synthesis: Our asymmetric context training teaches students to predict expert actions from raw, noisy HTML, mastering robust information extraction.
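To make the entity-obfuscation idea concrete, here is a minimal toy sketch: a named entity in a target fact is replaced by an indirect description drawn from another fact about it, so answering requires one extra hop. The fact triples, templates, and function names are illustrative assumptions, not the released pipeline.

```python
import random

# Toy fact triples (subject, relation, object), standing in for facts
# harvested from crawled web pages.
FACTS = [
    ("Marie Curie", "born_in", "Warsaw"),
    ("Marie Curie", "won", "Nobel Prize in Physics"),
    ("Warsaw", "capital_of", "Poland"),
]

def obfuscate(entity, facts, rng):
    """Describe `entity` indirectly via another fact about it (one extra hop)."""
    clues = [(r, o) for s, r, o in facts if s == entity]
    if not clues:
        return entity  # no alternative fact to hide behind
    rel, obj = rng.choice(clues)
    return f"the entity that has relation '{rel}' to '{obj}'"

def synthesize_question(facts, seed=0):
    """Pick a target fact, hide its subject, and emit a (question, answer) pair."""
    rng = random.Random(seed)
    subj, rel, obj = rng.choice(facts)
    remaining = [f for f in facts if f != (subj, rel, obj)]
    hidden = obfuscate(subj, remaining, rng)
    question = f"Which entity does {hidden} have the relation '{rel}' with?"
    return question, obj  # the original object serves as the gold answer
```

Chaining this step over neighbors in the web graph (topological expansion) would deepen the hop count further; complexity is then controllable via how many entities get obfuscated.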
π¦ The "Open" in OpenSeeker: We release the full recipe to democratize deep research: β
11.7k High-Difficulty Data (QA + Trajectories) β
OpenSeeker-v1-30B-SFT Model
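The asymmetric context training mentioned above can be sketched as follows: the teacher rolls out against retrospectively summarized (denoised) observations, while the student's SFT example pairs the same expert actions with the raw, noisy observations. The `summarize` stub and the message layout are assumptions for illustration, not the released format.

```python
def summarize(raw_obs: str, max_len: int = 200) -> str:
    """Stand-in for the retrospective summarizer; a real pipeline would ask
    an LLM to compress the observation to decision-relevant facts."""
    return " ".join(raw_obs.split())[:max_len]

def build_views(question, steps):
    """steps: list of (raw_observation, expert_action) pairs from a rollout.
    Returns the denoised context the teacher acted on and the raw-context
    SFT example the student is trained on (asymmetric context)."""
    teacher = [{"role": "user", "content": question}]
    student = [{"role": "user", "content": question}]
    for raw_obs, action in steps:
        teacher.append({"role": "tool", "content": summarize(raw_obs)})
        student.append({"role": "tool", "content": raw_obs})
        turn = {"role": "assistant", "content": action}
        teacher.append(turn)  # expert action produced from the clean view
        student.append(turn)  # same action, supervised against the noisy view
    return teacher, student
```

Because the supervision targets come from the clean teacher view, the student learns to reproduce high-quality actions even when its own context is raw HTML.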
GitHub: rui-ye/OpenSeeker (OpenSeeker: A search agent with open-source data and models)
Data: https://huggingface.co/datasets/OpenSeeker/OpenSeeker-v1-Data
Models: https://huggingface.co/OpenSeeker/OpenSeeker-v1-30B-SFT
Thanks a lot for open-sourcing!