Beyond the Majority: Long-tail Imitation Learning for Robotic Manipulation

Junhong Zhu1*, Ji Zhang2*†, Jingkuan Song3, Lianli Gao1‡, Heng Tao Shen3
1University of Electronic Science and Technology of China
2Southwest Jiaotong University 3Tongji University
*Equal Contribution †Project Lead ‡Corresponding Author

🎉🎉 Our paper has been accepted to ICRA 2026! 🎉🎉

Abstract

While generalist robot policies hold significant promise for learning diverse manipulation skills through imitation, their performance is often hindered by the long-tail distribution of training demonstrations. Policies learned on such data, which is heavily skewed towards a few data-rich head tasks, frequently exhibit poor generalization when confronted with the vast number of data-scarce tail tasks. In this work, we conduct a comprehensive analysis of the pervasive long-tail challenge inherent in policy learning. Our analysis begins by demonstrating the inefficacy of conventional long-tail learning strategies (e.g., re-sampling) for improving the policy's performance on tail tasks. We then uncover the underlying mechanism for this failure, revealing that data scarcity on tail tasks directly impairs the policy's spatial reasoning capability. To overcome this, we introduce Approaching-Phase Augmentation (APA), a simple yet effective scheme that transfers knowledge from data-rich head tasks to data-scarce tail tasks without requiring external demonstrations. Extensive experiments in both simulation and real-world manipulation tasks demonstrate the effectiveness of APA.

BibTeX

@inproceedings{zhu2026beyond,
  title={Beyond the Majority: Long-tail Imitation Learning for Robotic Manipulation},
  author={Zhu, Junhong and Zhang, Ji and Song, Jingkuan and Gao, Lianli and Shen, Heng Tao},
  booktitle={IEEE International Conference on Robotics and Automation (ICRA)},
  year={2026}
}