Authors
岡野 昭伍 牧野 達仁 出村 公成
Publisher
The Robotics Society of Japan
Journal
Journal of the Robotics Society of Japan (ISSN: 0289-1824)
Volume/Issue/Pages, Date of Publication
vol.40, no.1, pp.71-82, 2022 (Released:2022-01-17)
Number of References
27

In recent years, many deep learning methods have been proposed, but the annotation process for creating datasets remains a time-consuming and labor-intensive task. In this study, we propose a fluorescent texture for generating 2D and 3D datasets that can be used under visible light. The fluorescent texture uses fluorescent paint, which is transparent under visible light but becomes recognizable under UV light. A target object can be made measurable simply by applying the texture. The fluorescent texture is an extensible method, and the annotation data can be changed depending on how the texture is represented. In this study, AR markers and grid textures are applied to target objects as fluorescent textures. By applying existing methods such as marker recognition and stereo vision algorithms to the fluorescent texture, we can automatically annotate 3D information such as object position, orientation, and point clouds, as well as image regions for semantic segmentation. Fluorescent textures can be applied not only to general objects but also to objects that are difficult to recognize. The accuracies of the annotated point clouds were as follows: general objects 1.7 [mm], transparent containers 1.9 [mm], and metal plates 1.7 [mm].
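As a minimal sketch of the marker-based part of such a pipeline, the snippet below detects an AR marker in a UV-lit frame and estimates its pose, which could then be transferred to the paired visible-light image as annotation data. It assumes the OpenCV aruco module (opencv-contrib-python) with the legacy detectMarkers API; the camera intrinsics, the 40 mm marker size, and the file name uv_frame.png are placeholder assumptions for illustration, not details taken from the paper.

```python
# Sketch: pose annotation from a UV-lit frame showing a fluorescent AR marker.
# All numeric values are placeholders, not values reported in the paper.
import cv2
import numpy as np

MARKER_LEN = 0.04  # assumed marker side length in metres
camera_matrix = np.array([[600.0, 0.0, 320.0],
                          [0.0, 600.0, 240.0],
                          [0.0,   0.0,   1.0]])   # placeholder intrinsics
dist_coeffs = np.zeros(5)                          # assume negligible distortion

# 3D corner coordinates of the marker in its own frame (z = 0 plane)
obj_points = np.array([[-MARKER_LEN / 2,  MARKER_LEN / 2, 0],
                       [ MARKER_LEN / 2,  MARKER_LEN / 2, 0],
                       [ MARKER_LEN / 2, -MARKER_LEN / 2, 0],
                       [-MARKER_LEN / 2, -MARKER_LEN / 2, 0]], dtype=np.float32)

uv_frame = cv2.imread("uv_frame.png")              # frame captured under UV light
gray = cv2.cvtColor(uv_frame, cv2.COLOR_BGR2GRAY)

aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
corners, ids, _ = cv2.aruco.detectMarkers(gray, aruco_dict)

if ids is not None:
    # Pose of the first detected marker in the camera frame; this pose can be
    # reused as the annotation for the visible-light image of the same scene.
    img_points = corners[0].reshape(4, 2).astype(np.float32)
    ok, rvec, tvec = cv2.solvePnP(obj_points, img_points, camera_matrix, dist_coeffs)
    print("rotation vector:", rvec.ravel(), "translation [m]:", tvec.ravel())
```

Under the same assumptions, the grid-texture variant would replace marker detection with stereo correspondence on the UV-visible grid to recover a point cloud; that step is not shown here.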