OpenGitHub Meta
What is it?
The full development metadata of 8 public GitHub repositories, fetched from the GitHub REST API and GraphQL API, converted to Parquet and hosted here for easy access.
Right now the archive has 5.6M rows across 8 tables in 504.7 MB of Zstd-compressed Parquet. Every issue, pull request, comment, code review, timeline event, file change, and CI status check is stored as a separate table you can load individually or query together.
This is the companion to OpenGitHub, which mirrors the real-time GitHub event stream via GH Archive. That dataset tells you what happened across all of GitHub. This one gives you the full picture for specific repos: complete issue threads, full PR review conversations, the state machine from open to close.
People use it for:
- Code review research with inline comments attached to specific diff lines
- Project health metrics like merge rates, review turnaround, label usage
- Issue triage and classification with full text, labels, and timeline
- Software engineering process mining from timeline event sequences
Last updated: 2026-03-31 12:52 UTC.
Repositories
| Repository | Issues | PRs | Comments | Reviews | Timeline | Total | Last Updated |
|---|---|---|---|---|---|---|---|
| facebook/react | 33.6K | 19.2K | 170.6K | 20.1K | 248.9K | 858.2K | 2026-03-31 05:06 UTC |
| golang/go | 75.8K | 4.9K | 535.5K | 323 | 248.5K | 936.8K | 2026-03-31 05:17 UTC |
| mdn/content | 41.5K | 31.5K | 157.3K | 18.4K | 10.7K | 408.8K | 2026-03-31 05:50 UTC |
| python/cpython | 145.5K | 69.8K | 863.2K | 149.4K | 12.6K | 1.9M | 2026-03-31 05:36 UTC |
| rust-lang/rust | 153.7K | 92.2K | 0 | 88.4K | 10.0K | 1.3M | 2026-03-31 05:48 UTC |
| swiftlang/swift | 84.3K | 37.3K | 0 | 0 | 10.0K | 131.6K | 2026-03-30 17:41 UTC |
| vuejs/core | 12.0K | 6.1K | 35.6K | 4.7K | 10.0K | 89.8K | 2026-03-31 05:37 UTC |
| vuejs/docs | 3.3K | 2.2K | 7.0K | 2.7K | 10.0K | 40.4K | 2026-03-30 07:52 UTC |
How to download and use this dataset
Data lives at `data/{table}/{owner}/{repo}/0.parquet`. Load a single table, a single repo, or everything at once. It's a standard Hugging Face Parquet layout that works with DuckDB, `datasets`, pandas, and `huggingface_hub` out of the box.
Using DuckDB
DuckDB reads Parquet directly from Hugging Face; no download step needed. Save any query below as a `.sql` file and run it with `duckdb < query.sql`.
```sql
-- Top issue authors across all repos
SELECT
    author,
    COUNT(*) as issue_count,
    COUNT(*) FILTER (WHERE state = 'open') as open,
    COUNT(*) FILTER (WHERE state = 'closed') as closed
FROM read_parquet('hf://datasets/open-index/open-github-meta/data/issues/**/0.parquet')
WHERE is_pull_request = false
GROUP BY author
ORDER BY issue_count DESC
LIMIT 20;
```

```sql
-- PR merge rate by repo
SELECT
    split_part(filename, '/', 8) || '/' || split_part(filename, '/', 9) as repo,
    COUNT(*) as total_prs,
    COUNT(*) FILTER (WHERE merged) as merged,
    ROUND(COUNT(*) FILTER (WHERE merged) * 100.0 / COUNT(*), 1) as merge_pct
FROM read_parquet('hf://datasets/open-index/open-github-meta/data/pull_requests/**/0.parquet', filename=true)
GROUP BY repo
ORDER BY total_prs DESC;
```

```sql
-- Most reviewed PRs by number of review submissions
SELECT
    r.pr_number,
    COUNT(*) as review_count,
    COUNT(*) FILTER (WHERE r.state = 'APPROVED') as approvals,
    COUNT(*) FILTER (WHERE r.state = 'CHANGES_REQUESTED') as changes_requested
FROM read_parquet('hf://datasets/open-index/open-github-meta/data/reviews/**/0.parquet') r
GROUP BY r.pr_number
ORDER BY review_count DESC
LIMIT 20;
```

```sql
-- Label activity over time (monthly)
SELECT
    date_trunc('month', created_at) as month,
    COUNT(*) as label_events
FROM read_parquet('hf://datasets/open-index/open-github-meta/data/timeline_events/**/0.parquet')
WHERE event_type = 'LabeledEvent'
GROUP BY month
ORDER BY month;
```

```sql
-- Largest PRs by lines changed
SELECT
    number,
    additions,
    deletions,
    changed_files,
    additions + deletions as total_lines
FROM read_parquet('hf://datasets/open-index/open-github-meta/data/pull_requests/**/0.parquet')
ORDER BY total_lines DESC
LIMIT 20;
```
Using Python (uv run)
These scripts use PEP 723 inline metadata. Save one as a `.py` file and run it with `uv run script.py`. No virtualenv or `pip install` needed.
Stream issues:
```python
# /// script
# requires-python = ">=3.11"
# dependencies = ["datasets"]
# ///
from datasets import load_dataset

ds = load_dataset("open-index/open-github-meta", "issues", streaming=True)
for i, row in enumerate(ds["train"]):
    print(f"#{row['number']}: [{row['state']}] {row['title']} (by {row['author']})")
    if i >= 19:
        break
```
Load a specific repo:
```python
# /// script
# requires-python = ">=3.11"
# dependencies = ["datasets"]
# ///
from datasets import load_dataset

ds = load_dataset(
    "open-index/open-github-meta",
    "pull_requests",
    data_files="data/pull_requests/facebook/react/0.parquet",
)
df = ds["train"].to_pandas()
print(f"Loaded {len(df)} pull requests")
print(f"Merged: {df['merged'].sum()} ({df['merged'].mean()*100:.1f}%)")
print("\nTop 10 by lines changed:")
df["total_lines"] = df["additions"] + df["deletions"]
print(df.nlargest(10, "total_lines")[["number", "additions", "deletions", "total_lines"]].to_string(index=False))
```
Download files:
```python
# /// script
# requires-python = ">=3.11"
# dependencies = ["huggingface-hub"]
# ///
from huggingface_hub import snapshot_download

# Download only the issues table
snapshot_download(
    "open-index/open-github-meta",
    repo_type="dataset",
    local_dir="./open-github-meta/",
    allow_patterns="data/issues/**/*.parquet",
)
print("Downloaded issues parquet files to ./open-github-meta/")
```
For faster downloads, install hf_transfer (`pip install "huggingface_hub[hf_transfer]"`) and set `HF_HUB_ENABLE_HF_TRANSFER=1`.
Dataset structure
issues
Both issues and PRs live in this table (check `is_pull_request`). Join with `pull_requests` on `number` for PR-specific fields like merge status and diff stats.
| Column | Type | Description |
|---|---|---|
| `number` | int32 | Issue/PR number (primary key) |
| `node_id` | string | GitHub GraphQL node ID |
| `is_pull_request` | bool | True if this is a PR |
| `title` | string | Title |
| `body` | string | Full body text in Markdown |
| `state` | string | open or closed |
| `state_reason` | string | completed, not_planned, or reopened |
| `author` | string | Username of the creator |
| `created_at` | timestamp | When opened |
| `updated_at` | timestamp | Last activity |
| `closed_at` | timestamp | When closed (null if open) |
| `labels` | string (JSON) | Array of label names |
| `assignees` | string (JSON) | Array of assignee usernames |
| `milestone_title` | string | Milestone name |
| `milestone_number` | int32 | Milestone number |
| `reactions` | string (JSON) | Reaction counts ({"+1": 5, "heart": 2}) |
| `comment_count` | int32 | Number of comments |
| `locked` | bool | Whether the conversation is locked |
| `lock_reason` | string | Lock reason |
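The `issues` to `pull_requests` join described above can be sketched in pandas. The rows below are invented stand-ins for the real Parquet data; only the column names come from this card:

```python
import pandas as pd

# Invented sample rows using the documented column names.
issues = pd.DataFrame({
    "number": [1, 2, 3],
    "is_pull_request": [False, True, True],
    "title": ["Bug report", "Fix bug", "Add feature"],
    "state": ["open", "closed", "closed"],
})
pull_requests = pd.DataFrame({
    "number": [2, 3],
    "merged": [True, False],
    "additions": [10, 250],
    "deletions": [2, 40],
})

# PRs carry shared metadata in `issues` and PR-only fields in
# `pull_requests`, so an inner join on `number` gives the full picture.
prs = issues[issues["is_pull_request"]].merge(pull_requests, on="number")
print(prs[["number", "title", "merged", "additions", "deletions"]])
```

The same join works at full scale by loading the two Parquet tables for a repo instead of the mock frames.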
pull_requests
PR-specific fields. Join with `issues` on `number` for title, body, labels, and other shared fields.
| Column | Type | Description |
|---|---|---|
| `number` | int32 | PR number (matches `issues.number`) |
| `merged` | bool | Whether the PR was merged |
| `merged_at` | timestamp | When merged |
| `merged_by` | string | Username who merged |
| `merge_commit_sha` | string | Merge commit SHA |
| `base_ref` | string | Target branch (e.g. main) |
| `head_ref` | string | Source branch |
| `head_sha` | string | Head commit SHA |
| `additions` | int32 | Lines added |
| `deletions` | int32 | Lines deleted |
| `changed_files` | int32 | Number of files changed |
| `draft` | bool | Whether the PR is a draft |
| `maintainer_can_modify` | bool | Whether maintainers can push to the head branch |
comments
Conversation comments on issues and PRs. These are the threaded discussion comments, not inline code review comments (those are in `review_comments`).
| Column | Type | Description |
|---|---|---|
| `id` | int64 | Comment ID (primary key) |
| `issue_number` | int32 | Parent issue/PR number |
| `author` | string | Username |
| `body` | string | Comment body in Markdown |
| `created_at` | timestamp | When posted |
| `updated_at` | timestamp | Last edit |
| `reactions` | string (JSON) | Reaction counts |
| `author_association` | string | OWNER, MEMBER, CONTRIBUTOR, NONE, etc. |
review_comments
Inline code review comments on PR diffs. Each comment is attached to a specific file and line in the diff.
| Column | Type | Description |
|---|---|---|
| `id` | int64 | Comment ID (primary key) |
| `pr_number` | int32 | Parent PR number |
| `review_id` | int64 | Parent review ID |
| `author` | string | Reviewer username |
| `body` | string | Comment body in Markdown |
| `path` | string | File path in the diff |
| `line` | int32 | Line number |
| `side` | string | LEFT (old code) or RIGHT (new code) |
| `diff_hunk` | string | Surrounding diff context |
| `created_at` | timestamp | When posted |
| `updated_at` | timestamp | Last edit |
| `in_reply_to_id` | int64 | Parent comment ID (for threaded replies) |
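Review threads can be reconstructed from `in_reply_to_id` in a single pass. This is a sketch over invented sample rows, assuming replies point at the thread's top-level comment (a null `in_reply_to_id` marks a thread root):

```python
from collections import defaultdict

# Invented sample rows shaped like the review_comments schema.
comments = [
    {"id": 101, "in_reply_to_id": None, "body": "Consider renaming this."},
    {"id": 102, "in_reply_to_id": 101,  "body": "Good point, done."},
    {"id": 103, "in_reply_to_id": 101,  "body": "Agreed."},
    {"id": 104, "in_reply_to_id": None, "body": "Missing null check here."},
]

# Collect replies under the id they point back at.
replies = defaultdict(list)
for c in comments:
    if c["in_reply_to_id"] is not None:
        replies[c["in_reply_to_id"]].append(c)

# A thread is a top-level comment plus the replies that reference it.
threads = [
    {"root": c, "replies": replies[c["id"]]}
    for c in comments
    if c["in_reply_to_id"] is None
]
for t in threads:
    print(t["root"]["body"], f"({len(t['replies'])} replies)")
```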
reviews
PR review decisions. One row per review action on a PR.
| Column | Type | Description |
|---|---|---|
| `id` | int64 | Review ID (primary key) |
| `pr_number` | int32 | Parent PR number |
| `author` | string | Reviewer username |
| `state` | string | APPROVED, CHANGES_REQUESTED, COMMENTED, DISMISSED |
| `body` | string | Review summary in Markdown |
| `submitted_at` | timestamp | When submitted |
| `commit_id` | string | Commit SHA that was reviewed |
timeline_events
The full lifecycle of every issue and PR. Every label change, assignment, cross-reference, merge, force-push, lock, and other state transition.
| Column | Type | Description |
|---|---|---|
| `id` | string | Event ID (node_id or synthesized) |
| `issue_number` | int32 | Parent issue/PR number |
| `event_type` | string | Event type (see below) |
| `actor` | string | Username who triggered the event |
| `created_at` | timestamp | When it happened |
| `database_id` | int64 | GitHub database ID for the event |
| `label_name` | string | Label name (labeled, unlabeled) |
| `label_color` | string | Label hex color |
| `state_reason` | string | Close reason: COMPLETED, NOT_PLANNED (closed) |
| `assignee_login` | string | Username assigned/unassigned (assigned, unassigned) |
| `milestone_title` | string | Milestone name (milestoned, demilestoned) |
| `title_from` | string | Previous title before rename (renamed) |
| `title_to` | string | New title after rename (renamed) |
| `ref_type` | string | Referenced item type: Issue or PullRequest (cross-referenced, referenced) |
| `ref_number` | int32 | Referenced issue/PR number |
| `ref_url` | string | URL of the referenced item |
| `will_close` | bool | Whether the reference will close this issue |
| `lock_reason` | string | Lock reason (locked) |
| `data` | string (JSON) | Remaining event-specific payload (common fields stripped) |
Event types: `labeled`, `unlabeled`, `closed`, `reopened`, `assigned`, `unassigned`, `milestoned`, `demilestoned`, `renamed`, `cross-referenced`, `referenced`, `locked`, `unlocked`, `pinned`, `merged`, `review_requested`, `head_ref_force_pushed`, `head_ref_deleted`, `ready_for_review`, `convert_to_draft`, and more.
Common fields (`actor`, `created_at`, `database_id`, and the extracted columns above) are stored in dedicated columns and removed from `data` to reduce storage. The `data` field contains only the remaining event-specific payload. See the GitHub GraphQL timeline items documentation for the full type catalog.
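A minimal sketch of consuming one `timeline_events` row in Python. The row and its `data` payload are invented here, since the payload shape depends on `event_type`; only the column names come from the table above:

```python
import json

# Invented timeline_events row: extracted fields live in dedicated
# columns, while `data` holds the leftover payload as a JSON string.
row = {
    "event_type": "labeled",
    "actor": "octocat",
    "label_name": "bug",
    "data": '{"label": {"description": "Something is broken"}}',
}

# Decode the event-specific payload; it may be empty for some events.
payload = json.loads(row["data"]) if row["data"] else {}

desc = None
if row["event_type"] == "labeled":
    desc = payload.get("label", {}).get("description")
    print(f'{row["actor"]} added label {row["label_name"]!r}: {desc}')
```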
pr_files
Every file touched by each pull request, with per-file diff statistics.
| Column | Type | Description |
|---|---|---|
| `pr_number` | int32 | Parent PR number |
| `path` | string | File path |
| `additions` | int32 | Lines added |
| `deletions` | int32 | Lines deleted |
| `status` | string | added, removed, modified, renamed |
| `previous_filename` | string | Original path (for renames) |
commit_statuses
CI/CD status checks and GitHub Actions results for each commit.
| Column | Type | Description |
|---|---|---|
| `sha` | string | Commit SHA |
| `context` | string | Check name (e.g. ci/circleci, check:build) |
| `state` | string | success, failure, pending, error |
| `description` | string | Status description |
| `target_url` | string | Link to CI details |
| `created_at` | timestamp | When reported |
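One way this table gets used is a per-commit pass rate. A pandas sketch over invented rows shaped like the schema above (the shas and check names are made up):

```python
import pandas as pd

# Invented sample rows mirroring the commit_statuses schema.
statuses = pd.DataFrame({
    "sha": ["abc", "abc", "def", "def", "def"],
    "context": ["build", "lint", "build", "lint", "test"],
    "state": ["success", "success", "success", "failure", "error"],
})

# Pass rate per commit: the share of checks that reported 'success'.
pass_rate = (
    statuses.assign(passed=statuses["state"].eq("success"))
    .groupby("sha")["passed"]
    .mean()
)
print(pass_rate)
```

Joining `pass_rate` back to `pull_requests` on `head_sha` links CI health to merge outcomes.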
Dataset statistics
| Table | Rows | Description |
|---|---|---|
| `issues` | 549.7K | Issues and pull requests (shared metadata) |
| `pull_requests` | 263.2K | PR-specific fields (merge status, diffs, refs) |
| `comments` | 1.5M | Conversation comments on issues and PRs |
| `review_comments` | 265.8K | Inline code review comments on PR diffs |
| `reviews` | 284.1K | PR review decisions |
| `timeline_events` | 560.8K | Activity timeline (labels, closes, merges, assignments) |
| `pr_files` | 2.0M | Files changed in each pull request |
| `commit_statuses` | 164.0K | CI/CD status checks per commit |
| **Total** | **5.6M** | |
How it's built
The sync pipeline uses both GitHub APIs. The REST API handles bulk listing: issues, comments, and review comments are fetched repo-wide with since-based incremental pagination and parallel page fetching across multiple tokens. The GraphQL API handles per-item detail: one query grabs reviews, timeline events, file changes, and commit statuses in a single round trip, with automatic REST fallback for PRs with more than 100 files or reviews.
Multiple GitHub Personal Access Tokens rotate round-robin to spread rate limit load. The pipeline is fully incremental and idempotent: re-running picks up only what changed since the last sync.
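Round-robin token rotation can be sketched in a few lines. This is a simplified illustration, not the pipeline's actual (unpublished) implementation, and the token values are placeholders:

```python
from itertools import cycle

# Hypothetical token pool; real tokens would come from the environment.
tokens = ["ghp_tokenA", "ghp_tokenB", "ghp_tokenC"]
token_pool = cycle(tokens)

def next_auth_header():
    """Return the Authorization header for the next request,
    cycling through the pool so rate-limit load spreads evenly."""
    return {"Authorization": f"Bearer {next(token_pool)}"}

# Six consecutive requests walk the three-token pool twice.
headers = [next_auth_header() for _ in range(6)]
print([h["Authorization"] for h in headers])
```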
Everything lands in per-repo DuckDB files first, then gets exported to Parquet with Zstd compression for publishing here. No filtering, deduplication, or content changes. Bot activity, automated PRs, CI noise, Dependabot upgrades, all of it is preserved, because that's how repos actually work.
Known limitations
- **Point-in-time snapshot.** Data reflects the state at the last sync, not real time. Incremental updates capture everything that changed since the previous sync.
- **Bot activity included.** Comments and PRs from bots (Dependabot, Renovate, GitHub Actions, etc.) are included without filtering. This is intentional. Filter on `author` if you want humans only.
- **JSON columns.** `labels`, `assignees`, `reactions`, and `data` contain JSON strings. Use `json_extract()` in DuckDB or `json.loads()` in Python.
- **Body text can be large.** Issue and comment bodies contain full Markdown, sometimes with embedded images. Project only the columns you need for memory-constrained workloads.
- **Timeline data varies by event type.** The `data` field in `timeline_events` contains the raw event payload as JSON. The schema depends on `event_type`.
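Decoding the JSON columns in Python is a one-liner per column with the standard library. The row below is an invented sample shaped like the `issues` schema:

```python
import json

# Invented issues row: `labels` and `reactions` arrive as JSON strings.
row = {
    "number": 123,
    "labels": '["bug", "help wanted"]',
    "reactions": '{"+1": 5, "heart": 2}',
}

labels = json.loads(row["labels"])        # -> list of label names
reactions = json.loads(row["reactions"])  # -> dict of reaction counts
print(labels, reactions.get("+1", 0))
```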
Personal and sensitive information
Usernames, user IDs, and author associations are included as they appear in the GitHub API. All data was already publicly accessible on GitHub. Email addresses do not appear in this dataset (they exist only in git commit objects, which are in the separate code archive, not here). No private repository data is present.
License
Released under the Open Data Commons Attribution License (ODC-By) v1.0. The underlying data is sourced from GitHub's public API. GitHub's Terms of Service apply to the original data.
Thanks
All the data here comes from GitHub's public REST API and GraphQL API. We are grateful to the open-source maintainers and contributors whose work is represented in these tables.
- OpenGitHub, our companion dataset covering the full GitHub event stream via Ilya Grigorik's GH Archive
- Built with DuckDB (Go driver), Apache Parquet (Zstd compression), published via Hugging Face Hub
Questions, feedback, or issues? Open a discussion on the Community tab.