# LeHome Challenge – Lakesenberg Submission

Submission package for the ICRA 2026 LeHome Challenge (bi-manual garment folding on the SO-101 platform).

Generated on 2026-05-01.
## ⚡ TL;DR for evaluators (3 commands)
```bash
# 0. From inside an already-installed LeHome Challenge env
#    (uv venv + isaaclab + lehome package; see official docs/installation.md).
cd lehome-challenge

# 1. Pull this submission and install the dp_b1k plugin (10 s + 1 GB download).
git clone https://huggingface.co/Lakesenberg/lehome-challenge-submission submission_pkg
uv pip install -e ./submission_pkg/source/lerobot_policy_dp_b1k

# 2. Run our recommended policy (Solution F = dp_b1k) on top_long.
python -m scripts.eval \
    --policy_type lerobot \
    --policy_path submission_pkg/checkpoints/dp_b1k_four_types/pretrained_model \
    --dataset_root Datasets/example/top_long \
    --garment_type top_long \
    --num_episodes 12 \
    --enable_cameras --device cpu
```
Then loop `--garment_type` over `top_long` / `top_short` / `pant_long` / `pant_short` to obtain the four per-category success rates.
> 💡 Use `--policy_type lerobot` for both submitted checkpoints. The custom `dp_b1k` policy is registered through LeRobot's standard third-party plugin mechanism (the `lerobot_policy_*` namespace), so the eval script picks it up automatically once the plugin is `pip install`-ed.
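As an illustration of how a prefix-based plugin convention like this typically works (a generic sketch, not LeRobot's actual loader code), installed top-level packages matching a name prefix can be discovered and imported so they self-register their policies:

```python
# Generic sketch of prefix-based plugin discovery (illustrative only; the real
# LeRobot loader may differ). Importing a matching package is what triggers
# its policy registration side effects.
import importlib
import pkgutil

def discover_policy_plugins(prefix: str = "lerobot_policy_") -> list[str]:
    """Import every installed top-level package whose name starts with
    `prefix` and return the list of imported package names."""
    found = []
    for mod in pkgutil.iter_modules():
        if mod.name.startswith(prefix):
            importlib.import_module(mod.name)  # import triggers registration
            found.append(mod.name)
    return found

# After `uv pip install -e ./submission_pkg/source/lerobot_policy_dp_b1k`,
# a call like this would include "lerobot_policy_dp_b1k".
print(discover_policy_plugins())
```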
## 📦 Repo contents
```text
checkpoints/
  act_four_types/pretrained_model/     # Baseline (LeRobot ACT, 30k steps)
  dp_b1k_four_types/pretrained_model/  # Solution F (DP + B1K-style, 30k steps)
source/
  lerobot_policy_dp_b1k/               # Custom LeRobot plugin (registers `dp_b1k`)
configs/                               # Training configs (YAML)
rollout_results.txt                    # Local rollout summary (see env notice)
README.md                              # This file
```
## 🤖 Submitted policies
Two policies are provided. Solution F (dp_b1k) is our primary entry;
ACT is included as a baseline reference.
### F. dp_b1k – DiffusionPolicy + BEHAVIOR-1K-style training/inference (PRIMARY)
Custom LeRobot plugin built on top of `lerobot.policies.diffusion`, with three additions ported from the BEHAVIOR-1K winning solution (originally Pi0.5/openpi):

- **Correlated action noise** – diffusion noise is sampled with the Cholesky factor of the empirical action covariance instead of i.i.d. Gaussian noise. This produces smoother action chunks that match the temporal statistics of human teleoperation.
- **Soft inpainting at inference** – the head of every action chunk is softly constrained to continue the previously executed actions, with a covariance correction so the inpainted prefix does not break the noise structure. This removes the inter-chunk jitter that plain action chunking suffers from.
- **Optional depth branch** – `DiffusionDepthEncoder` shares the RGB ResNet trunk with a separate single-channel stem; depth features are concatenated to the global conditioning vector. Disabled in the released checkpoint because the `four_types_merged` dataset does not ship `observation.top_depth`.
Plugin entry point: `lerobot_policy_dp_b1k.DpB1kPolicy`, auto-registered as type `dp_b1k` via the `lerobot_policy_*` plugin convention.
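The first two additions can be sketched in a few lines of numpy. Shapes, weights, and the linear blending schedule below are illustrative assumptions, not the exact dp_b1k implementation:

```python
# Minimal numpy sketch of correlated action noise and soft inpainting.
# The blending schedule and shapes are assumptions for illustration only.
import numpy as np

def correlated_noise(actions: np.ndarray, n_samples: int, eps: float = 1e-6):
    """Sample diffusion noise with the empirical action covariance.

    actions: (N, T*D) flattened demonstration action chunks.
    Returns: (n_samples, T*D) noise whose covariance matches the data.
    """
    cov = np.cov(actions, rowvar=False) + eps * np.eye(actions.shape[1])
    chol = np.linalg.cholesky(cov)  # Cholesky factor L, so cov = L @ L.T
    z = np.random.standard_normal((n_samples, actions.shape[1]))
    return z @ chol.T               # correlated noise: each sample is L @ z

def soft_inpaint(chunk: np.ndarray, prev_tail: np.ndarray) -> np.ndarray:
    """Softly constrain the head of a new chunk to continue the previous one.

    chunk: (T, D) freshly sampled action chunk.
    prev_tail: (k, D) already-executed actions the head should overlap with.
    """
    k = prev_tail.shape[0]
    w = np.linspace(1.0, 0.0, k, endpoint=False)[:, None]  # decaying weight
    out = chunk.copy()
    out[:k] = w * prev_tail + (1.0 - w) * chunk[:k]
    return out
```

The real dp_b1k inference additionally applies the covariance correction mentioned above so that the inpainted prefix stays consistent with the correlated noise model; that step is omitted here for brevity.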
### A. ACT (baseline)
Stock LeRobot ACT trained on the same `four_types_merged` dataset (1,000 episodes, 12-D bi-arm state/action, RGB only). Provided for reproducibility of the comparison reported below.
## 🧪 How to evaluate (full reproduction)
### 1. Set up the official LeHome Challenge environment

Follow the official `docs/installation.md`:
```bash
git clone https://github.com/lehome-official/lehome-challenge.git
cd lehome-challenge
uv sync
cd third_party && git clone https://github.com/lehome-official/IsaacLab.git && cd ..
source .venv/bin/activate
./third_party/IsaacLab/isaaclab.sh -i none
uv pip install -e ./source/lehome
# Download assets + example dataset (see docs/datasets.md)
```
### 2. Pull this submission and install the dp_b1k plugin
```bash
git clone https://huggingface.co/Lakesenberg/lehome-challenge-submission submission_pkg
uv pip install -e ./submission_pkg/source/lerobot_policy_dp_b1k
```
### 3. Evaluate both policies on all four categories
```bash
GARMENTS="top_long top_short pant_long pant_short"
POLICIES="act_four_types dp_b1k_four_types"

for G in $GARMENTS; do
  for P in $POLICIES; do
    python -m scripts.eval \
      --policy_type lerobot \
      --policy_path submission_pkg/checkpoints/${P}/pretrained_model \
      --dataset_root Datasets/example/${G} \
      --garment_type ${G} \
      --num_episodes 12 \
      --enable_cameras --device cpu
  done
done
```
`--device cpu` runs PhysX on CPU and renders cameras via GPU/Vulkan, which is the configuration recommended in `docs/policy_eval.md`.
**Expected output**
Each invocation prints per-episode success/failure logs and finishes with a `Success Rate: X/12` line for the given category × policy combination.
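If the eval runs are captured to log files, the eight per-combination rates can be collected with a small helper. This is a hypothetical convenience script, not part of the submission; it only assumes the `Success Rate: X/12` final-line format described above:

```python
# Hypothetical helper: aggregate the final "Success Rate: X/12" line from
# each captured eval log into success fractions. Assumes only the log format
# described in the README; not part of the submission package.
import re

def aggregate_success(log_texts: dict[str, str]) -> dict[str, float]:
    """Map {"policy/garment": log text} -> {"policy/garment": success fraction}."""
    pat = re.compile(r"Success Rate:\s*(\d+)\s*/\s*(\d+)")
    rates = {}
    for key, text in log_texts.items():
        m = pat.search(text)
        if m:
            rates[key] = int(m.group(1)) / int(m.group(2))
    return rates

logs = {"dp_b1k/top_long": "episode 12 done\nSuccess Rate: 9/12\n"}
print(aggregate_success(logs))  # {'dp_b1k/top_long': 0.75}
```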
## 📊 Local rollout note
`rollout_results.txt` reports `N/A (env)` for every category because the submitter's host (NVIDIA H200 + driver 550.163.01 cloud instance) cannot start Isaac Sim 5.1: the Vulkan/RTX render pipeline fails with `VkResult: ERROR_DEVICE_LOST` before any policy step is executed. We installed the full graphics stack (`libnvidia-gl-550`, `libvulkan1`, `mesa-vulkan-drivers`) and confirmed the H200 is visible to Vulkan 1.3.277, but the RTX ray-tracing pipeline initialization still fails on this driver version (a known mismatch fixed by NVIDIA driver ≥ 560 for Hopper GPUs).
What we did verify on the same host:

- Both checkpoints load cleanly into LeRobot 0.4.2.
- `lerobot_policy_dp_b1k` registers, and `scripts/smoke_test_dp_b1k.py` passes both the forward and inference paths.
- Training completed to 30,000 steps for both ACT and dp_b1k.

Reproduction in the LeHome reference Isaac Lab 5.1 environment is expected to work normally.
## 🧩 Acknowledgments
- LeRobot – imitation-learning training stack (ACT and Diffusion Policy)
- Isaac Lab / Isaac Sim – simulation environment
- BEHAVIOR-1K winning solution (Pi0.5) – for the correlated-noise / soft-inpainting recipe that motivates Solution F