Author(s)
泉田 啓
Publisher
The Robotics Society of Japan
Journal
日本ロボット学会誌 (ISSN:02891824)
Volume/issue/pages, publication date
vol.23, no.1, pp.41-45, 2005-01-15 (Released:2010-08-25)
Number of references
20
Number of citations
3
Author(s)
山北 昌毅 橋本 実 山田 毅
Publisher
The Robotics Society of Japan
Journal
日本ロボット学会誌 (ISSN:02891824)
Volume/issue/pages, publication date
vol.22, no.1, pp.61-67, 2004-01-15 (Released:2010-08-25)
Number of references
6
Number of citations
7 13

A snake robot is a typical example of a robot with redundant degrees of freedom. Using input-output linearization for the motion of the head alone, the head speed can be controlled to a desired value, but the robot eventually falls into a singular posture such as a straight line. To overcome this problem, a control based on dynamic manipulability has been proposed. In this paper, we propose a control technique that uses a physical index based on the horizontal constraint force, together with a control law for the head configuration. With these, a winding pattern that allows the robot to avoid the singular posture is generated automatically, and the head converges to the target.
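The abstract names input-output linearization of the head motion plus a secondary index for escaping the straight-line singularity. The following is a minimal, hypothetical sketch of that general idea in resolved-rate form, using a damped pseudo-inverse and a nullspace term for a generic posture index; the paper's horizontal-constraint-force index and head-configuration law are not reproduced here.

```python
# Hypothetical sketch (not the paper's controller): head-velocity tracking for a
# redundant snake-like chain, with a secondary posture index pushed through the
# Jacobian nullspace so the chain does not straighten into a singular posture.
import numpy as np

def head_tracking_step(q, J, v_head_des, index_grad, k_null=0.5, damping=1e-3):
    """One resolved-rate step.

    q           : (n,) joint angles of the chain
    J           : (m, n) Jacobian mapping joint rates to head velocity
    v_head_des  : (m,) desired head velocity
    index_grad  : (n,) gradient of a posture index to be increased
                  (stands in for the paper's horizontal-constraint-force index)
    """
    # Damped pseudo-inverse keeps the step bounded near singular postures.
    JJt = J @ J.T
    J_pinv = J.T @ np.linalg.inv(JJt + damping * np.eye(JJt.shape[0]))

    # Primary task: track the desired head velocity.
    dq_task = J_pinv @ v_head_des

    # Secondary task: climb the index inside the nullspace of the head task,
    # reshaping the body (e.g. into a winding pattern) without disturbing
    # the commanded head motion.
    N = np.eye(len(q)) - J_pinv @ J
    dq_null = k_null * (N @ index_grad)

    return dq_task + dq_null
```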
Author(s)
吉川 恒夫
Publisher
The Robotics Society of Japan
Journal
日本ロボット学会誌 (ISSN:02891824)
Volume/issue/pages, publication date
vol.13, no.7, pp.950-957, 1995-10-15 (Released:2010-08-10)
Number of references
19
Number of citations
3 9 2
Author(s)
細田 慶信
Publisher
The Robotics Society of Japan
Journal
日本ロボット学会誌 (ISSN:02891824)
Volume/issue/pages, publication date
vol.34, no.2, pp.81-85, 2016 (Released:2016-04-15)
Number of references
10
Number of citations
1
Author(s)
遠藤 謙 菅原 祥平 北野 智士
Publisher
The Robotics Society of Japan
Journal
日本ロボット学会誌 (ISSN:02891824)
Volume/issue/pages, publication date
vol.32, no.10, pp.855-858, 2014 (Released:2015-01-15)
Number of references
22
Number of citations
1
Author(s)
日下 航 尾形 哲也 小嶋 秀樹 高橋 徹 奥乃 博
Publisher
The Robotics Society of Japan
Journal
日本ロボット学会誌 (ISSN:02891824)
Volume/issue/pages, publication date
vol.28, no.4, pp.532-543, 2010 (Released:2012-01-25)
Number of references
29
Number of citations
1

We propose a model of evolutionary communication between two robots using voice signs and motion signs. In our model, a robot recognizes the other's action by reflecting it onto its own body dynamics with a Multiple Timescale Recurrent Neural Network (MTRNN). The robot then interprets the action as a sign with its own hierarchical neural network (NN). Each robot modifies its interpretation of signs by re-training the NN to adapt to the other's interpretation through their interaction. In the experiment, we found that the communication kept evolving by alternately repeating miscommunication and re-adaptation, and that, through the generalization capability of the MTRNN, it induced the emergence of diverse new signs that depend on the robots' body dynamics.
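For readers unfamiliar with the MTRNN named in the abstract, below is a minimal sketch of its core leaky-integrator update, in which fast-context units have small time constants and slow-context units large ones. The layer sizes, weights, time constants, and input here are illustrative assumptions, not the trained model from the paper.

```python
# Minimal sketch of the multiple-timescale update at the core of an MTRNN:
# leaky-integrator neurons whose per-unit time constants set how quickly
# each unit can change.
import numpy as np

rng = np.random.default_rng(0)

n_fast, n_slow = 30, 10            # fast context (small tau), slow context (large tau)
n = n_fast + n_slow
tau = np.concatenate([np.full(n_fast, 2.0), np.full(n_slow, 50.0)])

W = rng.normal(0.0, 0.1, size=(n, n))    # recurrent weights (illustrative)
W_in = rng.normal(0.0, 0.1, size=(n, 3)) # input weights (illustrative)
b = np.zeros(n)

def mtrnn_step(u, x_in):
    """One update: units with large tau change slowly, giving the slow context
    its long timescale; units with small tau follow the input closely."""
    y = np.tanh(u)
    return (1.0 - 1.0 / tau) * u + (1.0 / tau) * (W @ y + W_in @ x_in + b)

# Roll the network over a short synthetic input sequence.
u = np.zeros(n)
for t in range(100):
    x_in = np.array([np.sin(0.1 * t), np.cos(0.1 * t), 0.0])
    u = mtrnn_step(u, x_in)
```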
Author(s)
細田 耕 浅田 稔
Publisher
The Robotics Society of Japan
Journal
日本ロボット学会誌 (ISSN:02891824)
Volume/issue/pages, publication date
vol.14, no.2, pp.313-319, 1996-03-15 (Released:2010-08-10)
Number of references
12
Number of citations
3 4

This paper describes an adaptive visual servoing controller for uncalibrated camera-manipulator systems, consisting of an on-line estimator and a feedback/feedforward controller. The estimator needs no a priori knowledge of the kinematic structure or parameters of the camera-manipulator system, such as the camera and link parameters. Using the estimated results, the controller combines feedforward and feedback terms to make the image features converge to the desired trajectories. Experimental results demonstrate the validity of the proposed estimator and controller.
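As a rough illustration of the structure described in the abstract (on-line estimation plus feedback/feedforward control without camera or link parameters), here is a generic sketch that estimates the image Jacobian with a Broyden-style update. The paper's actual estimator, control law, and gains are not reproduced; the class name and parameters below are assumptions.

```python
# Generic sketch of uncalibrated visual servoing: the image Jacobian is
# estimated on-line from observed feature and joint increments, and the
# command combines feedforward on the desired feature velocity with
# feedback on the image-feature error.
import numpy as np

class OnlineJacobianServo:
    def __init__(self, n_features, n_joints, gain=0.5, alpha=0.2, seed=0):
        rng = np.random.default_rng(seed)
        # Rough initial guess of the image Jacobian; it is refined on-line,
        # so no camera or link parameters are required.
        self.J_hat = 0.01 * rng.standard_normal((n_features, n_joints))
        self.K = gain * np.eye(n_features)   # feedback gain
        self.alpha = alpha                   # estimator step size

    def update_estimate(self, dfeat, dq):
        """Broyden-style correction so that J_hat @ dq better explains the
        observed change dfeat in the image features."""
        denom = float(dq @ dq) + 1e-9
        self.J_hat += self.alpha * np.outer(dfeat - self.J_hat @ dq, dq) / denom

    def control(self, feat, feat_des, dfeat_des):
        """Joint-velocity command: feedforward on the desired feature velocity
        plus feedback on the feature error, mapped through the estimate."""
        error = feat_des - feat
        return np.linalg.pinv(self.J_hat) @ (dfeat_des + self.K @ error)
```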
Author(s)
長谷川 昂宏 山内 悠嗣 山下 隆義 藤吉 弘亘 秋月 秀一 橋本 学 堂前 幸康 川西 亮輔
Publisher
The Robotics Society of Japan
Journal
日本ロボット学会誌 (ISSN:02891824)
Volume/issue/pages, publication date
vol.36, no.5, pp.349-359, 2018 (Released:2018-07-15)
Number of references
27

Automating the picking and placing of a variety of objects stored on shelves is a challenging problem for robotic picking systems in distribution warehouses. Object recognition based on image processing is especially effective for picking and placing such a variety of objects. In this study, we propose an efficient object recognition method based on object grasping positions for picking robots. We use a convolutional neural network (CNN), which can achieve highly accurate object recognition. Typical CNN-based methods recognize objects in an image containing the picking targets and detect object regions suitable for grasping from it. However, these methods are computationally expensive because a large number of weight filters are convolved with the whole image. The proposed method first detects all graspable positions in the image, and then classifies the optimal grasping position by feeding the image of the local region around each grasping point to the CNN. Because the grasping positions are recognized first, the CNN performs fewer convolutions and the computational cost is reduced. Experimental results confirmed that the method achieves highly accurate object recognition while decreasing the computational cost.
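A minimal sketch of the two-stage idea in the abstract: candidate grasp positions are assumed to come from a separate first-stage detector, and a compact CNN scores only the local crop around each candidate instead of convolving the whole image. The architecture, crop size, and scoring head below are assumptions for illustration, not the authors' network.

```python
# Illustrative sketch: score small crops around candidate grasp positions with
# a compact CNN and keep the highest-scoring position.
import torch
import torch.nn as nn

crop_size = 32  # side length of the local region fed to the classifier (assumed)

scorer = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 1),                      # graspability score of the crop
)

def best_grasp(image, candidates):
    """image: (3, H, W) tensor; candidates: list of (x, y) graspable positions
    from a separate first-stage detector. Returns the best candidate."""
    r = crop_size // 2
    best, best_score = None, float("-inf")
    with torch.no_grad():
        for (x, y) in candidates:
            crop = image[:, y - r:y + r, x - r:x + r].unsqueeze(0)
            if crop.shape[-2:] != (crop_size, crop_size):
                continue                   # skip positions too close to the border
            score = scorer(crop).item()
            if score > best_score:
                best, best_score = (x, y), score
    return best
```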