Upload README
#3
opened by B2own
- README.md +343 -0
- README_zh.md +343 -0
README.md
ADDED
# WebMainBench

[简体中文](README_zh.md) | English

[Dataset](https://huggingface.co/datasets/opendatalab/WebMainBench)
[Paper](https://arxiv.org/abs/2511.23119)
[License](LICENSE)

**WebMainBench** is a high-precision benchmark for evaluating web main-content extraction. It provides:

- A **7,809-page, 100% human-annotated** evaluation dataset covering 5,434 unique domains, 150 TLDs, and 46 languages.
- A **545-sample subset** with manually calibrated ground-truth markdown (`groundtruth_content`), enabling fine-grained metric evaluation across text, code, formula, and table dimensions.
- A unified **evaluation toolkit** (`webmainbench`) that scores extractors with both ROUGE-N and content-type-specific edit-distance metrics.

> WebMainBench is introduced in the paper [*Dripper: Token-Efficient Main HTML Extraction with a Lightweight LM*](https://arxiv.org/abs/2511.23119) and serves as the primary benchmark for the [MinerU-HTML](https://github.com/opendatalab/MinerU-HTML) project.

## Architecture

**Core Modules:**

| Module | Description |
|---|---|
| `data` | Dataset loading, saving, and sample management |
| `extractors` | Unified interface for content extractors and a factory registry |
| `metrics` | Edit-distance, TEDS, and ROUGE metric implementations |
| `evaluator` | Orchestrates extraction, scoring, and report generation |

## Dataset Statistics

The full dataset (7,809 samples) is annotated at the HTML tag level through a rigorous three-round process (annotator → reviewer → senior inspector).

**Language Distribution (Top 10 of 46)**

| Language | Count | % |
|---|---|---|
| English | 6,711 | 85.09 |
| Chinese | 716 | 9.08 |
| Spanish | 61 | 0.77 |
| German | 51 | 0.65 |
| Japanese | 48 | 0.61 |
| Russian | 45 | 0.57 |
| French | 36 | 0.46 |
| Italian | 22 | 0.28 |
| Korean | 20 | 0.25 |
| Portuguese | 17 | 0.22 |

**TLD Distribution (Top 10 of 150)**

| TLD | Count | % |
|---|---|---|
| .com | 4,550 | 57.69 |
| .org | 816 | 10.35 |
| .cn | 459 | 5.82 |
| .net | 318 | 4.03 |
| .uk | 235 | 2.98 |
| .edu | 180 | 2.28 |
| .de | 101 | 1.28 |
| .au | 94 | 1.19 |
| .ru | 69 | 0.87 |
| .gov | 59 | 0.75 |

**Page Style & Difficulty**

Pages are classified by GPT-5 into styles (Article, Content Listing, Forum, etc.) and assigned difficulty levels (Simple / Mid / Hard) based on DOM structural complexity, text-distribution sparsity, content-type diversity, and link density.

## Evaluation Metrics

WebMainBench supports two complementary evaluation protocols:

### ROUGE-N F1 (primary metric from the paper)

All extracted content is converted to canonical Markdown via `html2text`, then scored with ROUGE-N (N=5, jieba tokenization). This is the metric reported in the [Dripper paper](https://arxiv.org/abs/2511.23119).
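
For reference, here is a minimal sketch of this protocol. It is not the toolkit's implementation; jieba word segmentation and clipped 5-gram counting are assumptions based on the description above.

```python
# Minimal sketch of the ROUGE-N (N=5) F1 protocol described above.
# Assumptions: jieba word segmentation and clipped n-gram counts; the
# actual evaluation scripts may differ in preprocessing details.
from collections import Counter

import jieba


def ngram_counts(tokens, n=5):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))


def rouge_n_f1(predicted_md: str, reference_md: str, n: int = 5) -> float:
    pred = ngram_counts(list(jieba.cut(predicted_md)), n)
    ref = ngram_counts(list(jieba.cut(reference_md)), n)
    overlap = sum((pred & ref).values())  # clipped n-gram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(pred.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```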

### Fine-Grained Edit-Distance Metrics (from this toolkit)

Computed on the 545-sample subset with manually calibrated `groundtruth_content`:

| Metric | Formula | Description |
|---|---|---|
| `overall` | arithmetic mean of the five sub-metrics | Composite quality score |
| `text_edit` | 1 − edit\_dist / max(len\_pred, len\_gt) | Plain-text similarity |
| `code_edit` | same, on code blocks only | Code content similarity |
| `formula_edit` | same, on formulas only | Formula content similarity |
| `table_edit` | same, on table text only | Table content similarity |
| `table_TEDS` | 1 − tree\_edit\_dist / max(nodes\_pred, nodes\_gt) | Table structure similarity |

All scores are in **[0, 1]**; higher is better.
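
As a concrete reading of the `*_edit` formula, a minimal sketch follows. Character-level Levenshtein distance is an assumption for illustration; the toolkit may normalize or segment the text differently.

```python
# Minimal sketch of the similarity used by the `*_edit` metrics:
#   1 - edit_dist / max(len_pred, len_gt)
# Character-level Levenshtein distance is assumed for illustration.
def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(
                prev[j] + 1,               # deletion
                cur[j - 1] + 1,            # insertion
                prev[j - 1] + (ca != cb),  # substitution
            ))
        prev = cur
    return prev[-1]


def edit_similarity(pred: str, gt: str) -> float:
    if not pred and not gt:
        return 1.0
    return 1.0 - levenshtein(pred, gt) / max(len(pred), len(gt))
```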

## Leaderboard

### ROUGE-N F1 on Full Dataset (7,809 samples)

Results from the [Dripper paper](https://arxiv.org/abs/2511.23119) (Table 2):

| Extractor | Mode | All | Simple | Mid | Hard |
|---|---|---|---|---|---|
| DeepSeek-V3.2* | Html+MD | 0.9098 | 0.9415 | 0.9104 | 0.8771 |
| GPT-5* | Html+MD | 0.9024 | 0.9382 | 0.9042 | 0.8638 |
| Gemini-2.5-Pro* | Html+MD | 0.8979 | 0.9345 | 0.8978 | 0.8610 |
| **Dripper_fallback** | Html+MD | **0.8925** | 0.9325 | 0.8958 | 0.8477 |
| **Dripper** (0.6B) | Html+MD | **0.8779** | 0.9205 | 0.8804 | 0.8313 |
| magic-html | Html+MD | 0.7138 | 0.7857 | 0.7121 | 0.6434 |
| Readability | Html+MD | 0.6543 | 0.7415 | 0.6550 | 0.5652 |
| Trafilatura | Html+MD | 0.6402 | 0.7309 | 0.6417 | 0.5466 |
| Resiliparse | TEXT | 0.6290 | 0.7140 | 0.6323 | 0.5388 |

\* Frontier models used as drop-in replacements within the Dripper pipeline.

### Fine-Grained Metrics on 545-Sample Subset

| Extractor | Version | overall | text\_edit | code\_edit | formula\_edit | table\_edit | table\_TEDS |
|---|---|---|---|---|---|---|---|
| **mineru-html** | 4.1.1 | **0.8256** | 0.8621 | 0.9093 | 0.9399 | 0.6780 | 0.7388 |
| magic-html | 0.1.5 | 0.5141 | 0.7791 | 0.4117 | 0.7204 | 0.2611 | 0.3984 |
| trafilatura (md) | 2.0.0 | 0.3858 | 0.6887 | 0.1305 | 0.6242 | 0.1653 | 0.3203 |
| resiliparse | 0.14.5 | 0.2954 | 0.7381 | 0.0641 | 0.6747 | 0.0000 | 0.0000 |
| trafilatura (txt) | 2.0.0 | 0.2657 | 0.7126 | 0.0000 | 0.6162 | 0.0000 | 0.0000 |

Contributions of new extractor results are welcome; open a PR!

## Quick Start

### Installation

```bash
pip install webmainbench

# Or install from source
git clone https://github.com/opendatalab/WebMainBench.git
cd WebMainBench
pip install -e .
```

### Download the Dataset

The dataset is hosted on Hugging Face: [opendatalab/WebMainBench](https://huggingface.co/datasets/opendatalab/WebMainBench)

```python
from huggingface_hub import hf_hub_download

# Full dataset (7,809 samples) — used for ROUGE-N F1 evaluation
hf_hub_download(
    repo_id="opendatalab/WebMainBench",
    repo_type="dataset",
    filename="webmainbench.jsonl",
    local_dir="data/",
)

# 545-sample subset — used for fine-grained edit-distance metrics evaluation
hf_hub_download(
    repo_id="opendatalab/WebMainBench",
    repo_type="dataset",
    filename="WebMainBench_545.jsonl",
    local_dir="data/",
)
```

### ROUGE-N F1 Evaluation (webmainbench.jsonl)

Use the evaluation scripts in the [MinerU-HTML](https://github.com/opendatalab/MinerU-HTML) repository:

```bash
# Clone MinerU-HTML and prepare the full dataset (webmainbench.jsonl)
git clone https://github.com/opendatalab/MinerU-HTML.git
cd MinerU-HTML

# Run evaluation (example for the MinerU-HTML extractor)
python eval_baselines.py \
    --bench benchmark/webmainbench.jsonl \
    --task_dir benchmark_results/mineru_html-html-md \
    --extractor_name mineru_html-html-md \
    --model_path YOUR_MODEL_PATH \
    --default_config gpu

# For CPU-based extractors (e.g. trafilatura, resiliparse, magic-html)
python eval_baselines.py \
    --bench benchmark/webmainbench.jsonl \
    --task_dir benchmark_results/trafilatura-html-md \
    --extractor_name trafilatura-html-md
```

Results are written to `benchmark_results/<extractor>/mean_eval_result.json`. See `run_eval.sh` for a complete multi-extractor example.
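
The result file can then be inspected directly; assuming it is a flat JSON object of metric names to scores (its exact schema is defined by the MinerU-HTML scripts), for example:

```python
# Inspect an evaluation result; the exact schema of mean_eval_result.json
# is defined by the MinerU-HTML evaluation scripts.
import json

with open("benchmark_results/trafilatura-html-md/mean_eval_result.json") as f:
    scores = json.load(f)

print(json.dumps(scores, indent=2, ensure_ascii=False))
```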

### Fine-Grained Edit-Distance Metrics Evaluation (WebMainBench_545.jsonl)

#### Configure LLM (Optional)

LLM-enhanced content splitting improves formula/table/code extraction accuracy. To enable it, copy `.env.example` to `.env` and fill in your API credentials:

```bash
cp .env.example .env
# Edit .env and set LLM_BASE_URL, LLM_API_KEY, LLM_MODEL
```

#### Run an Evaluation

```python
from webmainbench import DataLoader, Evaluator, ExtractorFactory

dataset = DataLoader.load_jsonl("data/WebMainBench_545.jsonl")
result = Evaluator().evaluate(dataset, ExtractorFactory.create("trafilatura"))

print(f"Overall Score: {result.overall_metrics['overall']:.4f}")
```

#### Compare Multiple Extractors

```python
from webmainbench import DataLoader, Evaluator

dataset = DataLoader.load_jsonl("data/WebMainBench_545.jsonl")
extractors = ["trafilatura", "resiliparse", "magic-html"]
results = Evaluator().compare_extractors(dataset, extractors)

for name, result in results.items():
    print(f"{name}: {result.overall_metrics['overall']:.4f}")
```

A complete example is available at `examples/multi_extractor_compare.py`.

## Dataset Format

Each JSONL line represents one web page:

```json
{
  "track_id": "0b7f2636-d35f-40bf-9b7f-94be4bcbb396",
  "url": "https://example.com/page",
  "html": "<html>...<h1 cc-select=\"true\">Title</h1>...</html>",
  "main_html": "<h1>Title</h1><p>Body text...</p>",
  "convert_main_content": "# Title\n\nBody text...",
  "groundtruth_content": "# Title\n\nBody text...",
  "meta": {
    "language": "en",
    "style": "Article",
    "level": "mid",
    "table": [],
    "code": ["interline"],
    "equation": ["inline"]
  }
}
```

| Field | Description |
|---|---|
| `track_id` | Unique sample identifier (UUID) |
| `url` | Original page URL |
| `html` | Full page HTML; human-annotated regions carry `cc-select="true"` |
| `main_html` | Ground-truth HTML subtree pruned from `html` (available for all 7,809 samples) |
| `convert_main_content` | Markdown converted from `main_html` via `html2text` (available for all 7,809 samples) |
| `groundtruth_content` | Manually calibrated ground-truth markdown (available for the 545-sample subset) |
| `meta.language` | Language code — `en`, `zh`, `es`, `de`, `ja`, `ko`, `ru`, … (46 languages) |
| `meta.style` | Page style — `Article`, `Content Listing`, `Forum_or_Article_with_commentsection`, `Other` |
| `meta.level` | Complexity — `simple`, `mid`, `hard` |
| `meta.table` | Table types: `[]`, `["data"]`, `["layout"]`, `["data", "layout"]` |
| `meta.code` | Code types: `[]`, `["inline"]`, `["interline"]`, `["inline", "interline"]` |
| `meta.equation` | Formula types: `[]`, `["inline"]`, `["interline"]`, `["inline", "interline"]` |
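
A short standalone sketch of consuming this format (not part of the toolkit; the XPath lookup is one way to locate the `cc-select="true"` regions inside `html`):

```python
# Standalone illustration of reading the JSONL format (not part of
# webmainbench). The XPath query shows one way to locate the
# human-annotated cc-select="true" regions inside the raw HTML.
import json

import lxml.html


def iter_samples(path):
    with open(path, encoding="utf-8") as f:
        for line in f:
            yield json.loads(line)


for sample in iter_samples("data/WebMainBench_545.jsonl"):
    tree = lxml.html.fromstring(sample["html"])
    annotated = tree.xpath('//*[@cc-select="true"]')
    print(sample["track_id"], sample["meta"]["level"], len(annotated))
    break  # inspect only the first sample
```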

## Supported Extractors

| Extractor | Package | Output |
|---|---|---|
| `mineru-html` | [MinerU-HTML](https://github.com/opendatalab/MinerU-HTML) | HTML → Markdown |
| `trafilatura` | [trafilatura](https://github.com/adbar/trafilatura) | Markdown or plain text |
| `resiliparse` | [resiliparse](https://resiliparse.chatnoir.eu/) | Plain text |
| `magic-html` | [magic-html](https://github.com/opendatalab/magic-html) | HTML |
| Custom | Inherit from `BaseExtractor` | Any |

## Advanced Usage

### Custom Extractor

```python
from webmainbench.extractors import BaseExtractor, ExtractionResult, ExtractorFactory

class MyExtractor(BaseExtractor):
    def _setup(self):
        pass

    def _extract_content(self, html, url=None):
        content = your_extraction_logic(html)
        return ExtractionResult(content=content, content_list=[], success=True)

ExtractorFactory.register("my-extractor", MyExtractor)
```
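
Once registered, the extractor can be created by name with `ExtractorFactory.create("my-extractor")` and passed to `Evaluator().evaluate(...)` like any built-in extractor.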

### Custom Metric

```python
from webmainbench.metrics import BaseMetric, MetricResult

class CustomMetric(BaseMetric):
    def _setup(self):
        pass

    def _calculate_score(self, predicted, groundtruth, **kwargs):
        score = your_scoring_logic(predicted, groundtruth)
        return MetricResult(metric_name=self.name, score=score, details={})

evaluator.metric_calculator.add_metric("custom", CustomMetric("custom"))
```
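
Note that `evaluator` here is an existing `Evaluator` instance; once added, the `custom` metric can be reported by subsequent evaluations alongside the built-in scores.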

### Output Files

After evaluation, the following files are generated in `results/`:

| File | Description |
|---|---|
| `leaderboard.csv` | Per-extractor overall and per-metric scores |
| `evaluation_results.json` | Full evaluation details with metadata |
| `dataset_with_results.jsonl` | Original samples enriched with extraction outputs |

## Project Structure

```
webmainbench/
├── data/        # Dataset loading and saving
├── extractors/  # Extractor implementations and factory
├── metrics/     # Metric implementations and calculator
├── evaluator/   # Orchestrates extraction + scoring
└── utils/       # Logging and helper functions
```

## Citation

If you use WebMainBench in your research, please cite the Dripper paper:

```bibtex
@misc{liu2025dripper,
  title         = {Dripper: Token-Efficient Main HTML Extraction with a Lightweight LM},
  author        = {Mengjie Liu and Jiahui Peng and Pei Chu and Jiantao Qiu and Ren Ma and He Zhu and Rui Min and Lindong Lu and Wenchang Ning and Linfeng Hou and Kaiwen Liu and Yuan Qu and Zhenxiang Li and Chao Xu and Zhongying Tu and Wentao Zhang and Conghui He},
  year          = {2025},
  eprint        = {2511.23119},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CL},
  url           = {https://arxiv.org/abs/2511.23119},
}
```

## License

This project is licensed under the Apache License 2.0 — see [LICENSE](LICENSE) for details.
README_zh.md
ADDED