Behavior cloning has shown promise for robot manipulation, but real-world demonstrations are costly to acquire at scale. While simulated data offers a scalable alternative, particularly with advances in automated demonstration generation, transferring policies to the real world is hampered by gaps between the simulated and real domains. In this work, we propose a unified sim-and-real co-training framework for learning generalizable manipulation policies that primarily leverages simulation and requires only a few real-world demonstrations. Central to our approach is learning a domain-invariant, task-relevant feature space. Our key insight is that aligning the joint distributions of observations and their corresponding actions across domains provides a richer signal than aligning observations (marginals) alone. We achieve this by embedding an Optimal Transport (OT)-inspired loss within the co-training framework, and extend it to an Unbalanced OT formulation to handle the imbalance between abundant simulation data and limited real-world examples. We validate our method on challenging manipulation tasks, showing it can leverage abundant simulation data to achieve up to a 30% improvement in real-world success rate and even generalize to scenarios seen only in simulation.
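To make the joint-distribution alignment concrete, the sketch below (not the authors' released code) illustrates the core idea under stated assumptions: embed simulated and real (observation, action) pairs with a shared encoder, build a pairwise cost between the two batches, and run a few log-domain unbalanced Sinkhorn iterations so that the abundant simulation batch need not be matched one-to-one with the scarce real batch. The hyperparameter names `eps`, `rho`, and `n_iters` are illustrative assumptions, not values from the paper.

```python
import math
import torch


def unbalanced_ot_loss(sim_feats, real_feats, eps=0.05, rho=1.0, n_iters=50):
    """Entropic unbalanced OT cost between two batches of joint features.

    sim_feats:  (n, d) embeddings of simulated (observation, action) pairs
    real_feats: (m, d) embeddings of real (observation, action) pairs
    eps:  entropic regularization strength
    rho:  marginal-relaxation strength (KL penalty on the plan's marginals)
    """
    n, m = sim_feats.size(0), real_feats.size(0)
    # Squared Euclidean ground cost between joint (obs, action) embeddings.
    cost = torch.cdist(sim_feats, real_feats, p=2) ** 2            # (n, m)

    # Uniform (log-)weights over each batch.
    log_a = torch.full((n,), -math.log(n), device=cost.device)
    log_b = torch.full((m,), -math.log(m), device=cost.device)
    f = torch.zeros(n, device=cost.device)
    g = torch.zeros(m, device=cost.device)
    tau = rho / (rho + eps)  # damping factor; tau -> 1 recovers balanced OT

    for _ in range(n_iters):
        # Log-domain Sinkhorn updates with relaxed marginal constraints.
        f = -tau * eps * torch.logsumexp(log_b[None, :] + (g[None, :] - cost) / eps, dim=1)
        g = -tau * eps * torch.logsumexp(log_a[:, None] + (f[:, None] - cost) / eps, dim=0)

    # Transport plan induced by the (approximate) dual potentials.
    log_plan = log_a[:, None] + log_b[None, :] + (f[:, None] + g[None, :] - cost) / eps
    plan = torch.exp(log_plan)
    return (plan * cost).sum()
```

In a co-training loop, one plausible use is to add `lambda_align * unbalanced_ot_loss(sim_feats, real_feats)` to the behavior-cloning objective, where both feature batches come from the same policy encoder applied to concatenated observation and action embeddings; `lambda_align` is again an assumed weighting term.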
[Figure: real-world success rates across six evaluation settings. Ours: 0.77 / 0.80 / 0.68 / 0.63 / 0.11 / 0.59; Co-training: 0.73 / 0.70 / 0.62 / 0.51 / 0.06 / 0.47; MMD: 0.54 / 0.46 / 0.46 / 0.48 / 0.07 / 0.40; Target-only: 0.51 / 0.44 / 0.38 / 0.00 / 0.00 / 0.00.]