Author
細田 慶信
Publisher
The Robotics Society of Japan
Journal
Journal of the Robotics Society of Japan (ISSN:02891824)
Volume, issue, pages, date
vol.34, no.2, pp.81-85, 2016 (Released:2016-04-15)
Number of references
10
Number of citations
1
Authors
遠藤 謙 菅原 祥平 北野 智士
Publisher
The Robotics Society of Japan
Journal
Journal of the Robotics Society of Japan (ISSN:02891824)
Volume, issue, pages, date
vol.32, no.10, pp.855-858, 2014 (Released:2015-01-15)
Number of references
22
Number of citations
1
Authors
日下 航 尾形 哲也 小嶋 秀樹 高橋 徹 奥乃 博
Publisher
The Robotics Society of Japan
Journal
Journal of the Robotics Society of Japan (ISSN:02891824)
Volume, issue, pages, date
vol.28, no.4, pp.532-543, 2010 (Released:2012-01-25)
Number of references
29
Number of citations
1

We propose a model of evolutionary communication with voice signs and motion signs between two robots. In our model, a robot recognizes the other's action by reflecting it through its own body dynamics with a Multiple Timescale Recurrent Neural Network (MTRNN). The robot then interprets the action as a sign with its own hierarchical Neural Network (NN). Each robot modifies its interpretation of signs by re-training the NN to adapt to the other's interpretation over the course of their interaction. The experiment showed that the communication kept evolving by alternating between miscommunication and re-adaptation, and that the generalization capability of the MTRNN induced the emergence of diverse new signs that depend on the robots' body dynamics.
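The MTRNN named in the abstract is, at its core, a recurrent network whose neurons integrate with different time constants, so fast units track short motion primitives while slow units track longer context. A minimal sketch of that leaky-integrator update follows; the group sizes, time constants, and random weights are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Sketch of one MTRNN update step with a fast context group (small tau)
# and a slow context group (large tau). Sizes and weights are invented
# for illustration only.
rng = np.random.default_rng(0)

N_FAST, N_SLOW = 8, 4
N = N_FAST + N_SLOW
TAU = np.array([2.0] * N_FAST + [16.0] * N_SLOW)  # per-neuron time constants

W = rng.normal(scale=0.1, size=(N, N))  # stand-in recurrent weights

def mtrnn_step(u, y):
    """Leaky integration: u_t = (1 - 1/tau) * u_{t-1} + (1/tau) * (W y)."""
    u_new = (1.0 - 1.0 / TAU) * u + (1.0 / TAU) * (W @ y)
    return u_new, np.tanh(u_new)

# Run the dynamics from a random initial internal state.
u = rng.normal(size=N)
y = np.tanh(u)
for _ in range(50):
    u, y = mtrnn_step(u, y)
```

Because 1/tau is small for the slow group, those units change little per step and act as the higher-level context that the hierarchical NN in the paper reads signs from.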
Authors
長谷川 昂宏 山内 悠嗣 山下 隆義 藤吉 弘亘 秋月 秀一 橋本 学 堂前 幸康 川西 亮輔
Publisher
The Robotics Society of Japan
Journal
Journal of the Robotics Society of Japan (ISSN:02891824)
Volume, issue, pages, date
vol.36, no.5, pp.349-359, 2018 (Released:2018-07-15)
Number of references
27

Automation of picking and placing the variety of objects stored on shelves is a challenging problem for robotic picking systems in distribution warehouses. Object recognition using image processing is especially effective for picking and placing a variety of objects. In this study, we propose an efficient method of object recognition based on object grasping position for picking robots. We use a convolutional neural network (CNN), which can achieve highly accurate object recognition. In typical CNN-based methods, objects are recognized from an image containing the picking targets, from which object regions suitable for grasping are then detected. However, these methods incur a high computational cost because a large number of weight filters are convolved with the whole image. The proposed method first detects all graspable positions in an image. It then selects an optimal grasping position by feeding an image of the local region around each grasping point to the CNN. By recognizing the grasping positions of the objects first, the computational cost is reduced because the CNN performs far fewer convolutions. Experimental results confirmed that the method achieves highly accurate object recognition while decreasing the computational cost.
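The abstract's cost argument rests on the two-stage structure: a cheap full-image pass proposes candidate grasp positions, and only the small patch around each candidate is fed to the expensive classifier. The sketch below illustrates that control flow with a toy threshold detector and a single stand-in filter in place of the real CNN; the image, threshold, patch size, and scoring filter are all invented for illustration.

```python
import numpy as np

# Two-stage sketch: (1) cheap candidate detection over the whole image,
# (2) score only the local patch around each candidate. The random image
# and the single "filter" standing in for a trained CNN are assumptions.
rng = np.random.default_rng(1)
image = rng.random((64, 64))

PATCH = 8           # local-region size fed to the classifier
H = PATCH // 2

def detect_candidates(img, thresh=0.98):
    """Stage 1: full-image pass returning candidate (row, col) points
    whose local patch fits entirely inside the image."""
    ys, xs = np.where(img > thresh)
    return [(y, x) for y, x in zip(ys, xs)
            if H <= y < img.shape[0] - H and H <= x < img.shape[1] - H]

def score_patch(img, y, x, filt):
    """Stage 2: apply the (toy) classifier filter to one local patch only,
    instead of convolving every filter with the whole image."""
    patch = img[y - H:y + H, x - H:x + H]
    return float(np.sum(patch * filt))

filt = rng.normal(scale=0.1, size=(PATCH, PATCH))  # stand-in for CNN weights
candidates = detect_candidates(image)
best = max(candidates, key=lambda p: score_patch(image, *p, filt))
```

The saving in the paper comes from this same restriction of convolution to candidate patches: stage 2's cost scales with the number of candidates times PATCH², not with the full image area.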