Posted on 2024-02-07, 10:30, authored by Ling Tong, Kechen Song, Hongkun Tian, Yi Man, Yunhui Yan, Qinggang Meng
Weakly textured objects are frequently manipulated by industrial and domestic robots; the two most common types are transparent and reflective objects, whose unique visual properties present challenges even for advanced grasp detection algorithms. Many existing algorithms rely heavily on depth information, which ordinary red-green-blue and depth (RGB-D) sensors cannot accurately provide for transparent and reflective objects. To overcome this limitation, we propose an innovative solution that uses semantic segmentation to segment weakly textured objects and guide grasp detection. Using only the red-green-blue (RGB) images from RGB-D sensors, our segmentation algorithm (RTSegNet) achieves state-of-the-art performance on the newly proposed TROSD dataset. Importantly, our method enables robots to grasp transparent and reflective objects without retraining the grasp detection network (which is trained solely on the Cornell dataset). Real-world robot experiments demonstrate the robustness of our approach in grasping commonly encountered weakly textured objects, and results on various datasets validate the effectiveness and robustness of our segmentation algorithm. Code and video are available at: https://github.com/meiguiz/SG-Grasp.
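The guidance step described above can be pictured as filtering grasp candidates by a segmentation mask. Below is a minimal sketch of that idea, assuming hypothetical placeholder functions (`segment_rgb`, `detect_grasps`, `guided_grasp`) that stand in for RTSegNet and the Cornell-trained grasp detector; it is an illustration of segmentation-guided grasp selection, not the authors' implementation.

```python
# Sketch of segmentation-guided grasp selection. All function names and the
# grasp representation (row, col, angle, score) are illustrative assumptions,
# not the SG-Grasp API.
import numpy as np


def segment_rgb(rgb: np.ndarray) -> np.ndarray:
    """Placeholder for RTSegNet: returns a per-pixel class mask of shape (H, W).
    Here a dummy mask marks a central region as the target object (class 1)."""
    mask = np.zeros(rgb.shape[:2], dtype=np.uint8)
    h, w = mask.shape
    mask[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4] = 1
    return mask


def detect_grasps(rgb: np.ndarray) -> list[tuple[int, int, float, float]]:
    """Placeholder for a grasp detector trained on the Cornell dataset:
    returns candidate grasps as (row, col, angle, score)."""
    h, w = rgb.shape[:2]
    rng = np.random.default_rng(0)
    return [
        (int(rng.integers(h)), int(rng.integers(w)),
         float(rng.uniform(-np.pi, np.pi)), float(rng.uniform()))
        for _ in range(20)
    ]


def guided_grasp(rgb: np.ndarray, target_class: int = 1):
    """Keep only grasp candidates whose centers land on the segmented target,
    then return the highest-scoring one (or None if no candidate survives)."""
    mask = segment_rgb(rgb)
    candidates = [g for g in detect_grasps(rgb) if mask[g[0], g[1]] == target_class]
    return max(candidates, key=lambda g: g[3]) if candidates else None


if __name__ == "__main__":
    frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in RGB frame
    print(guided_grasp(frame))
```

The design point this sketch captures is that the segmentation network, not the depth channel, localizes the transparent or reflective object, so the grasp detector never needs retraining on weakly textured data.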
Funding
Research on 3D Dynamic Detection Theory and Identification Method for Surface Defects of Large High-temperature Structural Parts