ArtHOI: Taming Foundation Models for Monocular 4D Reconstruction of Hand-Articulated-Object Interactions
Abstract
ArtHOI presents an optimization-based framework that integrates foundation model priors to reconstruct 4D human-articulated-object interactions from a single monocular RGB video, using adaptive sampling refinement and multimodal large language model guidance.
Existing hand-object interaction (HOI) methods are largely limited to rigid objects, while 4D reconstruction methods for articulated objects generally require pre-scanning the object or even multi-view videos. Reconstructing 4D human-articulated-object interactions from a single monocular RGB video remains an unexplored but significant challenge. Fortunately, recent advances in foundation models present a new opportunity to address this highly ill-posed problem. To this end, we introduce ArtHOI, an optimization-based framework that integrates and refines priors from multiple foundation models. Our key contribution is a suite of novel methodologies designed to resolve the inherent inaccuracies and physical implausibility of these priors. In particular, we introduce an Adaptive Sampling Refinement (ASR) method that optimizes the object's metric scale and pose to ground its normalized mesh in world space. Furthermore, we propose a Multimodal Large Language Model (MLLM) guided hand-object alignment method that uses contact reasoning information as constraints for hand-object mesh composition optimization. To facilitate a comprehensive evaluation, we also contribute two new datasets, ArtHOI-RGBD and ArtHOI-Wild. Extensive experiments validate the robustness and effectiveness of ArtHOI across diverse objects and interactions. Project: https://arthoi-reconstruction.github.io.
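The abstract describes grounding a normalized object mesh in world space by optimizing its metric scale and pose. The paper's ASR procedure itself is not detailed here; as a hedged illustration of the underlying alignment problem, the sketch below uses the classic Umeyama closed form to recover a similarity transform (scale, rotation, translation) from sampled 3D correspondences, e.g. normalized-mesh points matched to metric-depth lifts. All function and variable names are illustrative, not from the paper.

```python
import numpy as np

def umeyama_similarity(src, dst):
    """Closed-form similarity alignment: find (s, R, t) with dst ≈ s * R @ src + t.

    src: (N, 3) points on the normalized mesh.
    dst: (N, 3) corresponding metric-scale world points.
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d

    # Cross-covariance between centered point sets.
    cov = dst_c.T @ src_c / len(src)
    U, D, Vt = np.linalg.svd(cov)

    # Reflection correction keeps R a proper rotation (det = +1).
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0

    R = U @ S @ Vt
    var_src = (src_c ** 2).sum() / len(src)
    s = np.trace(np.diag(D) @ S) / var_src
    t = mu_d - s * R @ mu_s
    return s, R, t
```

With exact, non-degenerate correspondences this recovers the true scale and pose; in a real pipeline the correspondences come from noisy depth and segmentation, which is presumably why iterative refinement with adaptive sampling is needed on top of any such closed-form initialization.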
Community
Given a monocular RGB video sequence of hands interacting with an unknown articulated object, ArtHOI reconstructs 4D hand-object interactions without any pre-defined object templates or multi-view scan initialization.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- AGILE: Hand-Object Interaction Reconstruction from Video via Agentic Generation (2026)
- ArtHOI: Articulated Human-Object Interaction Synthesis by 4D Reconstruction from Video Priors (2026)
- GHOST: Fast Category-agnostic Hand-Object Interaction Reconstruction from RGB Videos using Gaussian Splatting (2026)
- WHOLE: World-Grounded Hand-Object Lifted from Egocentric Videos (2026)
- End-to-End Spatial-Temporal Transformer for Real-time 4D HOI Reconstruction (2026)
- TeHOR: Text-Guided 3D Human and Object Reconstruction with Textures (2026)
- FreeArtGS: Articulated Gaussian Splatting Under Free-moving Scenario (2026)