
HARP: Fast Robot Adaptation via Hand Path Retrieval

Matthew Hong, Anthony Liang, Kevin J Kim, Harshitha Belagavi Rajaprakash, Jesse Thomason, Erdem Biyik, Jesse Zhang

CoRL 2025 (under review)

We present HARP, a simple and efficient method for teaching robots new tasks from a single, easy-to-provide human hand demonstration. Unlike prior approaches that rely on task-specific robot demonstrations collected through teleoperation, HARP leverages task-agnostic play data, retrieving relevant robot behaviors guided by 2D relative hand trajectories. A visual tracking pipeline extracts relative motion from both human hands and robot grippers; HARP then retrieves matching trajectories from easy-to-collect robot play data and uses them to train a policy for the target task. This approach requires neither calibrated cameras nor detailed hand pose estimation, enabling fast adaptation to diverse tasks. Experiments in simulation and on real robots show that HARP learns new tasks in real time and outperforms existing baselines in both efficiency and generalization.
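The retrieval step described above can be sketched roughly as follows: represent each demonstration or play segment as a 2D relative path (displacements from its starting point), resample paths to a common length, and rank play segments by distance to the hand path. This is an illustrative sketch only; the function names, the resampling length, and the plain L2 path distance are assumptions, not the paper's actual implementation.

```python
import numpy as np

def relative_path(points):
    """points: (T, 2) array of tracked 2D positions (hand or gripper).
    Returns displacements from the starting position."""
    return points - points[0]

def resample(path, n=50):
    """Linearly resample a (T, 2) path to n points so paths of
    different lengths can be compared point-by-point."""
    t = np.linspace(0.0, 1.0, len(path))
    tn = np.linspace(0.0, 1.0, n)
    return np.stack([np.interp(tn, t, path[:, d]) for d in range(path.shape[1])], axis=1)

def retrieve(demo_path, play_paths, k=5):
    """Rank play segments by mean L2 distance between their resampled
    relative paths and the demo's, returning indices of the top k."""
    q = resample(relative_path(demo_path))
    dists = [np.mean(np.linalg.norm(resample(relative_path(p)) - q, axis=1))
             for p in play_paths]
    return np.argsort(dists)[:k]
```

Because everything is expressed relative to each trajectory's own start, the comparison needs no camera calibration or absolute hand pose, matching the motivation stated in the abstract.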