Two-Stage Transfer Learning for Heterogeneous Robot Detection and 3D Joint Position Estimation in a 2D Camera Image Using CNN
Publication from Robotics
Industrial Robot System Technologies
Justinas Miseikis, Inka Brijacak, Saeed Yahyanejad, Kyrre Glette, Ole Jakob Elle, Jim Torresen
2019 International Conference on Robotics and Automation (ICRA), IEEE, pp. 8883-8889, 8/2019
Collaborative robots are becoming more common on factory floors as well as in everyday environments; however, their safety is still not a fully solved issue. Collision detection does not always perform as expected, and collision avoidance is still an active research area. Collision avoidance works well for fixed robot-camera setups; however, if these are shifted around, the Eye-to-Hand calibration becomes invalid, making it difficult to accurately run many of the existing collision avoidance algorithms. We approach the problem by presenting a stand-alone system capable of detecting the robot and estimating its position, including individual joints, using a simple 2D colour image as input, where no Eye-to-Hand calibration is needed. As an extension of previous work, a two-stage transfer learning approach is used to re-train a multi-objective convolutional neural network (CNN) so that it can be used with heterogeneous robot arms. Our method is capable of detecting the robot in real time, and new robot types can be added using significantly smaller training datasets compared to the requirements of a fully trained network. We present the data collection approach, the structure of the multi-objective CNN, the two-stage transfer learning training, and test results using real robots from Universal Robots, Kuka, and Franka Emika. Finally, we analyse possible application areas of our method together with possible improvements.
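The core idea of the two-stage scheme described above can be sketched in miniature: train a full network on a large dataset for a base robot, then freeze the shared feature layers and refit only the output head on a much smaller dataset for a new robot type. The sketch below is a toy numpy illustration under assumed synthetic data, not the paper's actual multi-objective CNN; the two-layer network, dataset sizes, and the least-squares head refit are all illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

def features(W1, X):
    # Shared feature extractor (toy stand-in for the CNN backbone).
    return relu(X @ W1)

def mse(Y_hat, Y):
    return float(np.mean((Y_hat - Y) ** 2))

# ---- Stage 1: full training on a large "base robot" dataset ----
X_base = rng.normal(size=(500, 8))
W_true = rng.normal(size=(8, 3))
Y_base = X_base @ W_true                      # synthetic joint-position targets

W1 = rng.normal(scale=0.2, size=(8, 16))      # backbone weights
W2 = rng.normal(scale=0.2, size=(16, 3))      # robot-specific output head
lr = 0.02
for _ in range(500):                          # joint gradient descent on MSE
    H = features(W1, X_base)
    G = 2.0 * (H @ W2 - Y_base) / len(X_base)
    gH = G @ W2.T                             # backprop through the head
    gH[H <= 0] = 0.0                          # relu gradient mask
    W2 -= lr * H.T @ G
    W1 -= lr * X_base.T @ gH

# ---- Stage 2: small "new robot" dataset, backbone frozen ----
X_new = rng.normal(size=(40, 8))
Y_new = X_new @ (1.5 * W_true)                # related but different mapping
H_new = features(W1, X_new)                   # frozen features

loss_before = mse(H_new @ W2, Y_new)
# Retrain only the head: least-squares fit on the frozen features.
W2_new, *_ = np.linalg.lstsq(H_new, Y_new, rcond=None)
loss_after = mse(H_new @ W2_new, Y_new)
print(loss_after < loss_before)
```

The point of the sketch is the data efficiency claimed in the abstract: stage 2 touches only the head's parameters, so 40 samples suffice to adapt, whereas retraining the whole network from scratch would need a dataset on the scale of stage 1.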
Keywords: Robot kinematics, Collision avoidance, Cameras, Robot vision systems, Calibration