GrainBench (Public Split)

GrainBench is a dataset and public benchmark for dense object counting in images of rice and beans. This repository hosts the public development split only. A separate private split of 1468 images is held internally and evaluated through a public leaderboard.

What the dataset contains

The public split contains 596 studio photographs: 308 of beans and 288 of rice. Each image is paired with a grain-type label and a verified integer ground-truth count. Counts range from 10 to 395 grains per image. All images were captured at a native resolution of 3680 by 2456 pixels under controlled studio lighting in a ventilated room.

How the data was collected

We built GrainBench over three months. Raw grains were first separated into small containers and counted by hand, with every count independently verified by multiple team members to avoid mistakes. This counting stage took two months and is the foundation the benchmark rests on. Each counted batch was then photographed in a studio with a Nikon digital camera, and images were saved as uncompressed TIFF files. Between shots we rearranged the grains to create new spatial distributions, and each arrangement was captured from four camera angles simultaneously. The photography stage took about one month.

Task

Given an image and the grain type, predict the integer count of grains visible in the image. The evaluation is model-agnostic. Any system that can output a count is eligible, including vision-language models, open-vocabulary segmentation models, and any future method a researcher wants to propose.
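To make the model-agnostic contract concrete, here is a hypothetical sketch of a trivial baseline that always predicts the mean public-split count for the requested grain type. The class name and method signatures are illustrative only, not part of any official evaluation API; the point is the input/output contract: an image plus a grain type in, an integer count out.

```python
from statistics import mean


class MeanCountBaseline:
    """Hypothetical baseline: predict the mean public-split count per grain type.

    A real submission would replace `predict` with an actual counting model;
    only the (image, grain_type) -> int contract matters here.
    """

    def fit(self, grain_names, counts):
        per_grain = {}
        for name, count in zip(grain_names, counts):
            per_grain.setdefault(name, []).append(count)
        # Round to the nearest integer, since predictions must be integer counts.
        self.mean_count = {name: round(mean(cs)) for name, cs in per_grain.items()}
        return self

    def predict(self, image, grain_name):
        # The image is ignored by this baseline; it exists only to
        # demonstrate the expected call signature.
        return self.mean_count[grain_name]
```

Because the public split is small, even a baseline like this sets a floor that any serious counting method should clear by a wide margin.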

How to load

from datasets import load_dataset

# Load the public development split (the only split hosted in this repo).
ds = load_dataset("27Group/GrainBench_Public", split="train")

sample = ds[0]
print(sample["grain_name"], sample["count"])  # grain-type label and ground-truth count
sample["image"].show()  # the photograph, decoded as a PIL image

Leaderboard

Submissions are scored on the private split through a public leaderboard: https://huggingface.co/spaces/27Group/GrainBench-leaderboard

The leaderboard computes MAE, RMSE, relative error, and exact accuracy separately for rice and beans, and reports both per-grain-type metrics and scores weighted across grain types.
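The four metrics are standard. As an unofficial sketch (the leaderboard's exact averaging and weighting may differ), they can be computed per grain type like this:

```python
import math


def score(preds, gts):
    """Unofficial sketch of the four leaderboard metrics for one grain type.

    `preds` and `gts` are parallel lists of integer counts. Ground-truth
    counts in GrainBench are at least 10, so dividing by `g` is safe.
    """
    n = len(preds)
    abs_errs = [abs(p - g) for p, g in zip(preds, gts)]
    return {
        "mae": sum(abs_errs) / n,
        "rmse": math.sqrt(sum(e * e for e in abs_errs) / n),
        "rel_err": sum(e / g for e, g in zip(abs_errs, gts)) / n,
        "exact_acc": sum(p == g for p, g in zip(preds, gts)) / n,
    }
```

Exact accuracy is deliberately strict: a prediction counts only if it matches the ground-truth count exactly, which is why MAE and relative error are the more informative metrics at high densities.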

Intended uses

GrainBench is intended for developing and evaluating dense counting methods under low-resource, high-density conditions. The public split is deliberately small and cannot be used to train a dedicated counter from scratch. Any system that performs well here must bring most of its capability from pretraining or from a method that does not require many labels.

Limitations

The studio setting is controlled. Real-world grain images from agricultural settings include soil, dust, variable lighting, and occlusion from equipment, and methods that perform well on GrainBench may not transfer directly to field conditions. We view GrainBench as a controlled proxy, not an in-the-wild benchmark. The public split covers only two grain types, rice and beans, selected for their contrasting visual properties. Future versions will add more grain types, harder density regimes, and an optional segmentation track.

License

CC BY 4.0.

Citation

If you use GrainBench in your work, please cite the paper:

@inproceedings{grainbench2026,
  title={GrainBench: A Dense Counting Benchmark for Vision Models},
  author={Anonymous},
  year={2026}
}

Responsible AI documentation

A full Croissant metadata file including both core fields and Responsible AI extension fields is available in this repository at croissant.json. The RAI file documents the data collection and annotation protocol, known biases and limitations, personal and sensitive information considerations, intended use cases, social impact, and the release and maintenance plan.
