
# BAMBOO: Benchmark for Autonomous ML Build-and-Output Observation

A large-scale benchmark for evaluating AI agents' ability to reproduce ML research papers using the authors' original code.

## Dataset Summary

| Metric | Value |
| --- | --- |
| Total papers | 6,148 |
| Papers with PDF | 5,495 (89%) |
| Papers with structured MD | 3,983 (64%) |
| Venues | ICML, ICLR, NeurIPS, CVPR, ICCV, ACL, EMNLP, AAAI, ICRA |
| Year | 2025 |
| Code coverage | 100% (all papers have verified `code_url` + `code_commit`) |
| Abstracts | 100% |
| Difficulty scores | 100% |

## Files

- `bamboo_dataset.json` — full paper metadata (6,148 entries)
- `paper_pdfs/` — original paper PDFs (5,495 files, ~32GB)
- `paper_markdowns/` — markdown extracted with MinerU's hybrid-auto-engine (3,983 files)
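A paper's per-file artifacts follow a predictable layout. The sketch below maps a paper ID to its paths in this repo; the `bamboo-00001` ID format is inferred from the filename in the Usage section, and the actual metadata field holding the ID may be named differently.

```python
# Sketch: map a paper ID to its artifact paths in this repo.
# The "bamboo-00001" ID format is inferred from the Usage example;
# check bamboo_dataset.json for the real ID field name.
def artifact_paths(paper_id: str) -> dict:
    return {
        "pdf": f"paper_pdfs/{paper_id}.pdf",
        "markdown": f"paper_markdowns/{paper_id}.md",
    }

print(artifact_paths("bamboo-00001"))
```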

## PDF Extraction

PDFs are extracted with MinerU v2.7.6 using the hybrid-auto-engine backend (its highest-quality, VLM-based extraction mode). This preserves:

- Correct paragraph ordering
- Table structure as markdown
- Mathematical formulas
- Figure references
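If you want to sanity-check that these structures survived extraction, simple text heuristics go a long way. The sketch below counts markdown table rows, inline `$...$` formulas, and headings in an extracted file; the sample string is illustrative, not taken from the dataset.

```python
import re

# Illustrative stand-in for the contents of one extracted markdown file.
sample_md = """# Title

| Method | Acc |
| --- | --- |
| Ours | 91.2 |

The loss is $L = -\\log p(y \\mid x)$.
"""

def extraction_stats(md: str) -> dict:
    lines = md.splitlines()
    return {
        # Markdown table rows start with a pipe.
        "table_rows": sum(1 for l in lines if l.strip().startswith("|")),
        # Inline math survives as $...$ spans.
        "inline_formulas": len(re.findall(r"\$[^$]+\$", md)),
        "headings": sum(1 for l in lines if l.lstrip().startswith("#")),
    }

print(extraction_stats(sample_md))
```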

## Venue Breakdown (papers with MD)

| Venue | Papers |
| --- | --- |
| ICML | 1,109 |
| ICLR | 669 |
| ICCV | 501 |
| CVPR | 408 |
| NeurIPS | 359 |
| ACL | 327 |
| EMNLP | 294 |
| AAAI | 275 |
| ICRA | 41 |
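The breakdown above can be recomputed from the metadata file. This is a minimal sketch: `has_md` appears in this card's Usage example, but the `venue` field name is an assumption to verify against `bamboo_dataset.json`.

```python
from collections import Counter

# Stand-in for json.load(open(path)); the "venue" field name is assumed.
papers = [
    {"venue": "ICML", "has_md": True},
    {"venue": "ICML", "has_md": False},
    {"venue": "ICLR", "has_md": True},
]

# Count only papers with structured markdown, as in the table above.
venue_counts = Counter(p["venue"] for p in papers if p["has_md"])
print(venue_counts.most_common())
```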

## Usage

```python
from huggingface_hub import hf_hub_download
import json

# Download metadata
path = hf_hub_download("xln3/bamboo-papers", "bamboo_dataset.json", repo_type="dataset")
with open(path) as f:
    papers = json.load(f)

# Filter papers with markdown
papers_with_md = [p for p in papers if p["has_md"]]
print(f"{len(papers_with_md)} papers with structured markdown")

# Download a specific paper's markdown
md_path = hf_hub_download("xln3/bamboo-papers", "paper_markdowns/bamboo-00001.md", repo_type="dataset")
```
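For many files at once, `huggingface_hub.snapshot_download` with `allow_patterns` avoids pulling the ~32GB of PDFs. The patterns below are checked offline with `fnmatch`, which approximates the Hub's glob filtering; verify the exact semantics against the `huggingface_hub` documentation.

```python
from fnmatch import fnmatch

# Patterns intended for snapshot_download(..., allow_patterns=ALLOW):
# metadata plus all extracted markdowns, but no PDFs.
ALLOW = ["bamboo_dataset.json", "paper_markdowns/*"]

# Hypothetical subset of repo paths, following the layout listed above.
repo_files = [
    "bamboo_dataset.json",
    "paper_pdfs/bamboo-00001.pdf",
    "paper_markdowns/bamboo-00001.md",
]

selected = [f for f in repo_files if any(fnmatch(f, pat) for pat in ALLOW)]
print(selected)
```

The real call would then be `snapshot_download("xln3/bamboo-papers", repo_type="dataset", allow_patterns=ALLOW)`.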