Authors
高木 健 山崎 透 石井 抱
Publisher
一般社団法人 日本機械学会
Journal
ロボティクス・メカトロニクス講演会講演概要集 2011 (ISSN:24243124)
Volume / pages / publication date
pp._1A1-J04_1-_1A1-J04_4, 2011-05-26 (Released:2017-06-19)

This paper describes an oblique feed screw that can be used as a load-sensitive continuously variable transmission (CVT). The CVT consists of a screw, a spring, and a bearing, and is remarkably simple and compact. Its reduction ratio changes automatically in response to the load. We have developed a CVT weighing 13.8 [g] and experimentally verified that it can exert a force of more than 100 [N] while increasing its reduction ratio from 20 to 45.
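For context on how a rising reduction ratio translates into axial force, the textbook feed-screw work balance can serve as a rough check; the torque T, effective lead ℓ, and efficiency η below are generic symbols and illustrative arithmetic, not a model or values taken from the paper.

```latex
% Ideal feed-screw relation (textbook work balance, assumed here for illustration):
% per revolution, input work 2\pi T is converted into axial work F\ell with efficiency \eta.
F \;=\; \frac{2\pi \eta T}{\ell}
% A load-sensitive CVT that shortens the effective lead \ell (i.e. raises the reduction
% ratio) therefore raises the attainable axial force in proportion; scaling the ratio
% from 20 to 45 multiplies the force by about 45/20 \approx 2.25 at the same input torque.
```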
Authors
飛田 和輝 金森 哉吏 大久保 祐人 小川 博教 杉田 澄雄
Publisher
一般社団法人 日本機械学会
Journal
ロボティクス・メカトロニクス講演会講演概要集 2011 (ISSN:24243124)
Volume / pages / publication date
pp._1P1-E08_1-_1P1-E08_4, 2011-05-26 (Released:2017-06-19)

We have proposed a new three-dimensional environment recognition method that uses conical scanning distance measurement for a mobile robot that can walk up and down stairs. This report describes how the conical scanning method is applied to recognize stairs. The procedure is as follows: 1) recognizing the positions and postures of multiple regions in the 3D TOF camera image; 2) linking neighboring regions by image processing; 3) extracting steps based on height information; and 4) calculating the position, direction, and width of the stairs. We devised the algorithm, implemented it, and carried out experiments.
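The four-step procedure above is amenable to a compact sketch. The following Python code is a hypothetical illustration of such a pipeline on a 3D point cloud; the function names, the grid-based region model, and the thresholds are assumptions for illustration and do not reproduce the paper's conical-scanning implementation.

```python
# Hypothetical stair-recognition pipeline sketch (not the paper's implementation).
import numpy as np

def segment_regions(points, cell=0.05):
    """1) Group 3D points (N x 3, metres, robot frame) into horizontal grid cells and
    record each cell's mean height (a simplified stand-in for region position/posture)."""
    keys = np.floor(points[:, :2] / cell).astype(int)
    regions = {}
    for key, z in zip(map(tuple, keys), points[:, 2]):
        regions.setdefault(key, []).append(z)
    return {k: float(np.mean(v)) for k, v in regions.items()}

def link_neighbors(regions, height_tol=0.02):
    """2) Link 4-connected neighbour cells whose heights agree, producing planar patches."""
    patches, seen = [], set()
    for start in regions:
        if start in seen:
            continue
        stack, patch = [start], []
        while stack:
            c = stack.pop()
            if c in seen:
                continue
            seen.add(c)
            patch.append(c)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                n = (c[0] + dx, c[1] + dy)
                if n in regions and n not in seen and abs(regions[n] - regions[c]) < height_tol:
                    stack.append(n)
        patches.append(patch)
    return patches

def extract_steps(regions, patches, min_rise=0.10, max_rise=0.25):
    """3) Keep patches whose height above the lowest patch looks like a stair riser."""
    floor = min(np.mean([regions[c] for c in p]) for p in patches)
    return [p for p in patches
            if min_rise <= np.mean([regions[c] for c in p]) - floor <= max_rise]

def stair_pose(steps, cell=0.05):
    """4) Estimate stair position, edge direction, and width from the step cells."""
    cells = cell * np.array([c for p in steps for c in p], dtype=float)
    center = cells.mean(axis=0)
    _, _, vt = np.linalg.svd(cells - center)   # principal axis ~ along the step edge
    direction = vt[0]
    width = np.ptp((cells - center) @ direction)
    return center, direction, width
```

The riser-height bounds (min_rise, max_rise) stand in for the height-based step extraction and would need tuning to the actual sensor and stair geometry.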
Authors
西川 輝彦 港 隆史 荻野 正樹 浅田 稔
Publisher
一般社団法人 日本機械学会
Journal
ロボティクス・メカトロニクス講演会講演概要集 2011 (ISSN:24243124)
Volume / pages / publication date
pp._2P2-M02_1-_2P2-M02_4, 2011-05-26 (Released:2017-06-19)

This paper proposes a hierarchical model composed of a slow feature analysis (SFA) network that extracts a multi-modal representation for a humanoid robot. An experiment with the humanoid robot shows that the network can integrate multi-modal information and detect semantic features by extracting slowly varying features from the high-dimensional sensory input. It also shows that the resulting multi-modal representation is more useful as a state representation for reinforcement learning than a state representation built without integrating the multi-modal information.
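For readers unfamiliar with SFA, a minimal linear version can be sketched in a few lines; the code below assumes the standard formulation (whiten the signal, then pick the directions in which it varies most slowly) and does not reproduce the paper's hierarchical, multi-modal network.

```python
# Minimal linear SFA sketch (standard textbook formulation, assumed for illustration).
import numpy as np

def linear_sfa(x, n_components=2):
    """x: (T, D) sensory signal over time. Returns W (D, n_components) and the mean,
    so that the slow features are (x - mean) @ W."""
    x = np.asarray(x, dtype=float)
    mean = x.mean(axis=0)
    xc = x - mean
    # Whitening: decorrelate the input and normalize it to unit variance.
    cov = xc.T @ xc / len(xc)
    eigval, eigvec = np.linalg.eigh(cov)
    keep = eigval > 1e-10
    S = eigvec[:, keep] / np.sqrt(eigval[keep])
    z = xc @ S
    # Slowness objective: minimize the variance of the temporal derivative.
    dz = np.diff(z, axis=0)
    dcov = dz.T @ dz / len(dz)
    _, slow_vec = np.linalg.eigh(dcov)          # ascending eigenvalues = slowest first
    W = S @ slow_vec[:, :n_components]
    return W, mean

# Toy usage: a slowly drifting component mixed into fast oscillations should be
# recovered as the slowest feature (illustrative only).
t = np.linspace(0, 10, 2000)
slow, fast = np.sin(0.5 * t), np.sin(40 * t)
x = np.stack([slow + 0.1 * fast, fast + 0.1 * slow, 0.5 * slow - fast], axis=1)
W, mean = linear_sfa(x, n_components=1)
features = (x - mean) @ W
```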