Authors
加藤 徹也 青木 滉一郎 菅原 徹 村上 智加 宮崎 正己
Publisher
日本感性工学会
Journal
日本感性工学会論文誌 (ISSN:18840833)
Volume, issue, pages, and date
vol.14, no.3, pp.419-424, 2015 (Released:2015-08-28)
References
15
Cited by
2

We produced an average face by combining forty-one facial images of female college students in order to investigate how facial impressions change depending on the distance between the eyes and eyebrows. We then produced ten experimental stimuli by shortening or widening this distance on the average face in five steps in each direction. These images were presented individually to ninety college students, who were asked to evaluate them using twenty adjective pairs. The average face received high ratings on likability-related impressions, while the faces with a shorter eye-eyebrow distance received high ratings on activity-related impressions. Moreover, principal component analysis of the evaluation scores extracted two components, "degree of refinement" and "femininity". Degree of refinement appeared to be affected by the perceived size of the eyes, whereas femininity was determined by the eye-eyebrow distance.
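The analysis described above (raters scoring face stimuli on twenty adjective pairs, reduced by principal component analysis) can be sketched as follows. This is not the paper's code or data; it is a minimal NumPy illustration using an invented rating matrix, with PCA computed via the singular value decomposition of the centered scores.

```python
import numpy as np

# Synthetic stand-in for the paper's data: 90 raters x 20 adjective-pair
# scores on a 7-point semantic-differential scale (values are invented).
rng = np.random.default_rng(0)
ratings = rng.integers(1, 8, size=(90, 20)).astype(float)

# Center each adjective pair, then take principal components via SVD.
X = ratings - ratings.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)

explained = S**2 / (S**2).sum()   # variance ratio per component
scores = X @ Vt[:2].T             # each rater projected onto the first two PCs
```

In the paper the first two components were interpreted as "degree of refinement" and "femininity"; with real data that interpretation comes from inspecting which adjective pairs load heavily on each row of `Vt`.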
Authors
加藤 徹也 (Kato Tetsuya) 本田 敦也 (Honda Atsuya)
Publisher
千葉大学教育学部
Journal
千葉大学教育学部研究紀要 (ISSN:13482084)
Volume, issue, pages, and date
vol.58, pp.387-395, 2010-03
Cited by
1

With commercially available high-speed video cameras, opportunities have increased in physics education to film moving objects and verify the laws of mechanics. Movie files are generally large, which creates technical problems in storing, transferring, and displaying the data, so the resolution is kept low; this limits how precisely the position of a moving object can be determined. In addition, high-speed footage consists of a large number of frames, so efficient image-processing programs are required. To demonstrate the law of conservation of mechanical energy, we recorded the motion of pendulums at a resolution of 512×384 pixels and a frame rate of 300 frames per second, and obtained the positions of the bobs with our own software built on the OpenCV computer-vision libraries. For a pendulum whose bob was 81 pixels in diameter on the image and swept an arc with a horizontal amplitude of ±58 pixels and a vertical range of only 1.1 pixels, the position was read with an accuracy of about ±0.03 pixel, and the time dependence of the kinetic and potential energies showed the expected compensating behavior within acceptable errors.
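The tracking-and-energy pipeline described above could be sketched as follows. This is not the authors' program: it is a minimal NumPy illustration of a sub-pixel intensity-weighted centroid (the quantity OpenCV's `cv2.moments` exposes as `m10/m00`, `m01/m00`) and of converting a tracked pixel trajectory into kinetic and potential energies. The mass, pixel-to-metre calibration, and pendulum parameters below are assumed values for the demonstration, not the paper's.

```python
import numpy as np

def weighted_centroid(img):
    """Sub-pixel position of a bright object as the intensity-weighted
    centroid (what cv2.moments gives as m10/m00, m01/m00)."""
    ys, xs = np.indices(img.shape)
    total = img.sum()
    return (xs * img).sum() / total, (ys * img).sum() / total

def energies(x_px, y_px, fps, mass, m_per_px, g=9.81):
    """Kinetic and potential energy from a tracked pixel trajectory.
    Image y grows downward, so height is measured up from the lowest point."""
    x = np.asarray(x_px, float) * m_per_px
    y = np.asarray(y_px, float) * m_per_px
    vx = np.gradient(x) * fps          # finite-difference velocities
    vy = np.gradient(y) * fps
    ke = 0.5 * mass * (vx**2 + vy**2)
    pe = mass * g * (y.max() - y)      # height above the lowest tracked point
    return ke, pe

# Centroid check: a synthetic Gaussian blob at a known sub-pixel position.
yy, xx = np.indices((64, 64))
blob = np.exp(-((xx - 30.27)**2 + (yy - 21.63)**2) / (2 * 5.0**2))
cx, cy = weighted_centroid(blob)

# Energy check on an ideal small-amplitude pendulum trajectory
# (assumed: 300 fps, 1 m arm, 0.1 kg bob, 1 mm per pixel).
fps, L, A, m_per_px = 300.0, 1.0, 0.058, 0.001
t = np.arange(0, 2.0, 1 / fps)
x_m = A * np.cos(np.sqrt(9.81 / L) * t)   # horizontal position, metres
h_m = x_m**2 / (2 * L)                    # height above the lowest point
ke, pe = energies(x_m / m_per_px, -h_m / m_per_px, fps,
                  mass=0.1, m_per_px=m_per_px)
total = ke + pe
```

On the synthetic trajectory, `total` stays nearly constant, which is the compensating behavior of kinetic and potential energy the demonstration aims to show; with real footage, frame-by-frame centroids from thresholded images would replace the analytic trajectory.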