Authors
原 彰良 渡邉 理翔 古川 正紘 前田 太郎
Publisher
特定非営利活動法人 日本バーチャルリアリティ学会
Journal
日本バーチャルリアリティ学会論文誌 (ISSN:1344011X)
Volume, issue, pages, and publication date
vol.27, no.3, pp.210-217, 2022-09-30 (Released:2022-09-30)
Number of references
19

In previous studies, GVS has achieved vestibular sensory presentation by applying a current that penetrates to the vestibular organs in the direction in which the vestibular sensation is to be presented. However, currents in the head take only limited pathways because of impedance gradients, and as a result, the directions in which vestibular sensations can be presented have been limited. In this paper, we propose a new method of vestibular sensory presentation based on current pathways: time-division multiplexing of multiple basis vectors. The method extends the range of presentable directions beyond the limits imposed by the current pathways. Because it avoids energizing multiple current sources and current pathways simultaneously, it is also expected to prevent mutual interference between them. In addition, we succeeded in separating the anisotropy of body sway from the anisotropy of vestibular sensitivity to GVS, which had not been distinguished in previous GVS-based vestibular displays.
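The following is a minimal sketch, not the authors' implementation, of the idea behind time-division multiplexing of basis vectors: a target vestibular direction is decomposed into nonnegative weights over a few basis current patterns, and each pattern is then driven in its own time slot within a short frame so that only one current source is energized at a time. The basis directions and the frame period are hypothetical placeholders.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical basis directions (unit vectors of the sensation evoked by each
# electrode current pattern); the real basis depends on the electrode layout.
BASIS = np.array([
    [1.0, 0.0, 0.0],   # pattern 0: rightward
    [-1.0, 0.0, 0.0],  # pattern 1: leftward
    [0.0, 1.0, 0.0],   # pattern 2: forward
    [0.0, -1.0, 0.0],  # pattern 3: backward
]).T  # shape (3, 4): columns are basis directions

def tdm_schedule(target, frame_ms=20.0):
    """Return (pattern index, slot duration in ms) pairs for one frame."""
    target = np.asarray(target, dtype=float)
    target = target / np.linalg.norm(target)
    weights, _ = nnls(BASIS, target)            # nonnegative mixing weights
    if weights.sum() == 0:
        return []
    slots = frame_ms * weights / weights.sum()  # weights -> time slots
    return [(i, t) for i, t in enumerate(slots) if t > 0]

if __name__ == "__main__":
    # e.g. a diagonal (right-forward) sensation built from two time slots
    print(tdm_schedule([1.0, 1.0, 0.0]))
```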
Authors
小松 英海 村田 佳代子 妹尾 武治
Publisher
特定非営利活動法人 日本バーチャルリアリティ学会
Journal
日本バーチャルリアリティ学会論文誌 (ISSN:1344011X)
Volume, issue, pages, and publication date
vol.22, no.3, pp.431-434, 2017

Vection has been studied with various methods. Experimental psychological methods, for example, have a long history, and brain science has provided knowledge of vection-related brain areas. In this study, we introduce a phenomenological approach to vection research and aim to show the usefulness and importance of this method for the field. The phenomenological approach can tell us a great deal about vection. Details and examples demonstrating its power are presented in the body of this article. We hope the readers will enjoy these phenomenological analyses.
Authors
森平 良 金子 寛彦
Publisher
特定非営利活動法人 日本バーチャルリアリティ学会
Journal
日本バーチャルリアリティ学会論文誌 (ISSN:1344011X)
Volume, issue, pages, and publication date
vol.23, no.4, pp.253-261, 2018

We investigated whether sensory information that is not originally related to self-motion can be recruited as a new cue for self-motion. In the learning phase of an experimental trial, the stimulus color changed depending on the acceleration of body rotation about the yaw axis: it turned red when subjects rotated with clockwise acceleration and green when they rotated with counterclockwise acceleration, or vice versa. In the measurement phases before and after the learning phase, subjects viewed the rotating stimulus with or without the new self-motion (color) cue and reported the occurrence and magnitude of vection. The results showed that color information accompanying self-motion affected the latency of vection, suggesting that color can serve as a new self-motion cue that contributes to generating vection.
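A minimal illustrative sketch of the learning-phase contingency described above; the mapping shown is one of the two counterbalanced assignments, and the threshold is an assumed placeholder rather than a detail reported in the abstract.

```python
def stimulus_color(yaw_acceleration, threshold=0.0):
    """Map the sign of yaw acceleration to the stimulus color."""
    if yaw_acceleration > threshold:    # clockwise acceleration
        return "red"
    if yaw_acceleration < -threshold:   # counterclockwise acceleration
        return "green"
    return "neutral"                    # no acceleration: no color change

print(stimulus_color(0.8), stimulus_color(-0.8))
```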
Authors
中村 文彦 Adrien Verhulst 櫻田 国治 福岡 正彬 杉本 麻樹
Publisher
特定非営利活動法人 日本バーチャルリアリティ学会
Journal
日本バーチャルリアリティ学会論文誌 (ISSN:1344011X)
Volume, issue, pages, and publication date
vol.26, no.4, pp.298-309, 2021-12-24 (Released:2021-12-24)
Number of references
19

In this paper, we propose Virtual Whiskers, a spatial directional guidance technique that stimulates the cheek with tiny robot arms attached to a head-mounted display (HMD). We performed a directional guidance experiment to evaluate how accurately our method presents the target direction. The results show that our method achieves an average absolute directional error of 2.76 degrees in the azimuthal plane and 7.32 degrees in the elevation plane. We also investigated how effective the cheek stimulation is in a target search task. The average task completion time was M=12.45 s in the visual condition and M=6.91 s in the visual+haptic condition, and a statistical test revealed a significant difference between the two conditions.
Authors
金 ジョンヒョン 橋田 朋子 苗村 健
Publisher
特定非営利活動法人 日本バーチャルリアリティ学会
Journal
日本バーチャルリアリティ学会論文誌 (ISSN:1344011X)
Volume, issue, pages, and publication date
vol.18, no.3, pp.393-399, 2013-09-30 (Released:2017-02-01)

Focusing on the drawing sound as auditory feedback in the act of writing with ordinary paper and pen, we have studied the effect of emphasizing the drawing sound. In this paper, we examine the usefulness of emphasized auditory feedback of the drawing sound in a professional animation studio. Specifically, we introduced our proposed system into the animation production process and performed a six-week user study to confirm its usefulness. The results showed that animators used the proposed system for 93.0% of their total working time, averaging five hours a day. Moreover, we obtained positive feedback in interviews, such as that listening to the drawing sound helped them draw dark, uniformly thick lines.
Authors
加藤 優貴 長町 和弥 杉本 麻樹 稲見 昌彦 北崎 充晃
Publisher
特定非営利活動法人 日本バーチャルリアリティ学会
Journal
日本バーチャルリアリティ学会論文誌 (ISSN:1344011X)
Volume, issue, pages, and publication date
vol.26, no.1, pp.22-31, 2021

Pseudo physical contact is used for communication in virtual environments such as VRChat. We hypothesized that pseudo physical contact could affect interpersonal impressions and communication, and that the interpersonal impression would be modulated by body appearance, i.e., avatar type. To test these hypotheses, we conducted a questionnaire survey of VRChat users (N=341). The results showed that interpersonal impressions and communication difficulty improved after pseudo physical contact, but avatar type did not modulate the interpersonal impression. These results suggest that pseudo physical contact can improve interpersonal impressions and communication in virtual environments.
Authors
松丸 隆文
Publisher
特定非営利活動法人 日本バーチャルリアリティ学会
Journal
日本バーチャルリアリティ学会論文誌 (ISSN:1344011X)
Volume, issue, pages, and publication date
vol.11, no.2, pp.283-292, 2006-06-30 (Released:2017-02-01)
Number of references
103
Number of citations
1

This paper discusses how to design the bodily shape and motion of a humanoid robot so as to raise not only its emotional but also its informative interpersonal affinity. Concrete knowledge and opinions are classified into movement prediction from the robot's configuration and movement prediction from its continuous or preliminary motion, and they are discussed with reference to applications and usage. Specifically, we consider bodily shapes and motions that make it easier for the surrounding people watching the robot to predict and understand its capability and performance as well as its subsequent action and intention.
Authors
横小路 泰義 菅原 嘉彦 吉川 恒夫
Publisher
特定非営利活動法人 日本バーチャルリアリティ学会
Journal
日本バーチャルリアリティ学会論文誌 (ISSN:1344011X)
Volume, issue, pages, and publication date
vol.4, no.4, pp.589-598, 1999
Number of citations
14

In this paper, we propose a method for accurate image overlay on head-mounted displays (HMDs) using vision and accelerometers. The proposed method is suitable for video see-through HMDs in augmented reality applications, but is not limited to them. Acceleration information is used to predict the head motion in order to compensate for the end-to-end system delay and to make the vision-based tracking robust. Experimental results showed that the proposed method can keep alignment errors within 6 pixels on average and 11 pixels at maximum, even if the user moves his/her head quickly (with up to 10 m/s^2 and 49 rad/s^2).
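As a rough illustration of how acceleration can be used to compensate for end-to-end delay, the sketch below extrapolates head position over the known system latency under a constant-acceleration assumption. This is a generic prediction step, not the authors' algorithm, and the numbers in the example are hypothetical.

```python
import numpy as np

def predict_position(position, velocity, acceleration, latency_s):
    """Constant-acceleration extrapolation of head position over the system delay."""
    p = np.asarray(position, dtype=float)
    v = np.asarray(velocity, dtype=float)
    a = np.asarray(acceleration, dtype=float)
    return p + v * latency_s + 0.5 * a * latency_s ** 2

if __name__ == "__main__":
    # e.g. 60 ms end-to-end delay with 10 m/s^2 head acceleration along x
    print(predict_position([0, 0, 0], [0.2, 0, 0], [10.0, 0, 0], 0.060))
```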
Authors
竹田 信子 加藤 博一 西田 正吾
Publisher
特定非営利活動法人 日本バーチャルリアリティ学会
Journal
日本バーチャルリアリティ学会論文誌 (ISSN:1344011X)
Volume, issue, pages, and publication date
vol.12, no.4, pp.595-601, 2007
Number of references
13

In this paper, we introduce a virtual pop-up book system using augmented reality technology. The system displays 3D virtual objects on a real book based on estimation of the camera's pose and position. Although many marker-based methods have been proposed, a picture book with markers looks unattractive, so our system does not use any markers. We describe four advantages of the virtual pop-up book. First, 3D rendering helps readers understand the scenes. Second, the characters look lively thanks to their motions. Third, the author of the picture book can use a new representation that mixes 2D and 3D rendering. Finally, the book can express temporal changes using animation.
Authors
加藤 由訓 苗村 健
Publisher
特定非営利活動法人 日本バーチャルリアリティ学会
Journal
日本バーチャルリアリティ学会論文誌 (ISSN:1344011X)
Volume, issue, pages, and publication date
vol.18, no.3, pp.345-356, 2013-09-30 (Released:2017-02-01)

This paper proposes a method for sharing impressions, sentiments, and opinions among people listening to an audio program together by means of voice sound effects. We call the system "Radi-Hey." In contrast to the conventional laugh tracks inserted by program staff in TV and radio programs, Radi-Hey reflects input from the audience members themselves. Audience members input their opinions by pushing buttons corresponding to several short words (e.g. "oh!", "why?") and can hear the other audience members' opinions as voice sound effects. Recently, text-based systems (e.g. Twitter) have been used for this purpose, but they require the audience to concentrate on typing their own messages and reading the others'. The aim of this paper is to achieve a level of simplicity that allows much more prompt and effortless sharing of others' opinions through auditory feedback. We conducted two experimental demonstrations: radio broadcasting programs and presentations at an academic conference. This paper describes the results, which show the potential applicability of the system, and discusses its pros and cons for future development.
Authors
金 海永 鈴木 陽一 高根 昭一 小澤 賢司 曽根 敏夫
Publisher
特定非営利活動法人 日本バーチャルリアリティ学会
Journal
日本バーチャルリアリティ学会論文誌 (ISSN:1344011X)
Volume, issue, pages, and publication date
vol.4, no.2, pp.455-460, 1999-06-30 (Released:2017-02-01)
Number of references
17

It has been shown that HRTFs (head-related transfer functions) are important cues for judging the absolute auditory distance of a single sound image when the sound source is close to the listener. To investigate the role of HRTFs in auditory distance perception more generally, not only the absolute distance of a single sound image but also the relative distances among multiple sound images should be considered. From this point of view, two psychoacoustical experiments on absolute and relative distance perception were conducted with the same source signals as stimuli. Comparison of the results showed that while the absolutely judged distance of a sound image increases with the actual source distance only up to around 1.2 m, the relatively perceived distance increases significantly up to 2-3 m. This difference may be attributable to perceptual information stored in short-term memory through the comparison of auditory distances between multiple sound images; such information could offer an additional cue in relative distance perception and may improve the resolution of distance perception at distances beyond the limit of absolute distance perception.
Authors
落合 陽一
Publisher
特定非営利活動法人 日本バーチャルリアリティ学会
Journal
日本バーチャルリアリティ学会論文誌 (ISSN:1344011X)
Volume, issue, pages, and publication date
vol.15, no.3, pp.463-466, 2010-09-30 (Released:2017-02-01)
Number of references
6

Visible Breadboard is a breadboard-like interface that shows the voltage of each and every hole with a full-color LED and enables wiring by tracing with the fingertips. Users can insert electronic components into the holes to build a circuit on the device, understand what is happening in the circuit, and correct the connections by finger tracing. The device can thus support an essential understanding of how to build a circuit and of what happens inside it. Visible Breadboard may be applied to many areas such as prototyping, education, design, media art, and entertainment.
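A minimal sketch, under assumed hardware and an assumed color scheme, of the core idea that each hole's measured voltage is mapped to a full-color LED value so the user can literally see what is happening in the circuit; the actual device's mapping and voltage range are not specified in the abstract.

```python
def voltage_to_rgb(voltage, v_max=5.0):
    """Map 0..v_max volts to a blue (low) -> red (high) LED color."""
    t = max(0.0, min(voltage / v_max, 1.0))
    return (int(255 * t), 0, int(255 * (1.0 - t)))  # (R, G, B)

# e.g. 0 V -> pure blue, 2.5 V -> purple, 5 V -> pure red
for v in (0.0, 2.5, 5.0):
    print(v, voltage_to_rgb(v))
```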
Authors
松尾 政輝 坂尻 正次 三浦 貴大 大西 淳児 小野 束
Publisher
特定非営利活動法人 日本バーチャルリアリティ学会
Journal
日本バーチャルリアリティ学会論文誌 (ISSN:1344011X)
Volume, issue, pages, and publication date
vol.21, no.2, pp.303-312, 2016

Although computer games have become highly diversified in recent years, much effort and ingenuity is needed to produce games that persons with total visual impairment can enjoy. Some games for visually impaired persons have been developed, but games that use only auditory information present challenges for sighted persons. Unfortunately, no games exist that sighted and visually impaired persons can enjoy together: it is difficult for visually impaired persons to play the same game as sighted persons and for the two groups to share a common topic. In this paper, we aim to develop an accessible action role-playing game (RPG) that both sighted and visually impaired persons can play using their dominant senses, including vision, hearing, and touch. To develop the game, we also built a field creation tool for game developers with visual impairments, providing them with an integrated game development environment. This paper describes the development of the accessible action RPG and our game development environment, together with our reflections on them.
Authors
野田 真一 伴 好弘 佐藤 宏介 千原 國宏
Publisher
特定非営利活動法人 日本バーチャルリアリティ学会
Journal
日本バーチャルリアリティ学会論文誌 (ISSN:1344011X)
Volume, issue, pages, and publication date
vol.4, no.4, pp.665-670, 1999-12-31 (Released:2017-02-01)
Number of references
13
Number of citations
3

Mixed reality (MR) connects the real world with a virtual world created by a computer. Optical superimposing has been a promising MR display technology because it does not degrade the quality of the user's view. Unfortunately, optical see-through displays cannot represent the occlusion phenomenon correctly: a CG object cannot occlude a real object, but merely overlaps it translucently. In this paper, an architecture for an optical see-through MR display that solves this problem is proposed. It can represent correct occlusion between real and virtual objects. Two new prerequisite elements, a real-time range finder and an active pattern light source using a video projector, are incorporated into the display. In a darkroom, the dynamic active pattern light projection illuminates only the non-occluded portions of the real object, according to the interference between the 3D model of the virtual object and that of the real object acquired by the range finder. The occluded portions of the real object therefore remain invisible because they receive no illumination, so the CG object approximately occludes the real object. A prototype display based on this architecture demonstrates interesting interactions between the virtual world and the real world, including operations with the user's hand.
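A minimal sketch, with assumed data layouts rather than the authors' implementation, of the occlusion step described above: the range finder's depth map of the real scene is compared per pixel with the depth of the virtual object, and the projector illuminates only the real-scene points that are not hidden behind the virtual object, leaving the occluded portions unlit in the darkroom.

```python
import numpy as np

def illumination_mask(real_depth, virtual_depth):
    """True where the projector should illuminate the real object.

    virtual_depth uses np.inf where no virtual surface covers that pixel,
    so those pixels are always illuminated.
    """
    return real_depth < virtual_depth

if __name__ == "__main__":
    real = np.full((4, 4), 1.0)     # real object 1 m away everywhere
    virt = np.full((4, 4), np.inf)  # no virtual surface ...
    virt[1:3, 1:3] = 0.5            # ... except a CG patch at 0.5 m
    print(illumination_mask(real, virt).astype(int))  # center stays dark
```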
Authors
中島 武三志 植井 康介 飯田 隆太郎
Publisher
特定非営利活動法人 日本バーチャルリアリティ学会
Journal
日本バーチャルリアリティ学会論文誌 (ISSN:1344011X)
Volume, issue, pages, and publication date
vol.25, no.2, pp.127-137, 2020 (Released:2020-06-30)
Number of references
26

Cross-modality is applied as a haptic feedback method in virtual/mixed reality (VR/MR) environments using a head-mounted display (HMD). Haptic feedback based on cross-modal illusions is regarded as a way to reduce the burden on the user. In particular, there are reports of a haptic sensory illusion that occurs when a virtual object is touched in an MR environment, which can be applied to haptic feedback; however, the illusion obtained with this method is weak. Aiming at a more reliable haptic feedback method, we focused on hearing, a modality that co-occurs with the sense of touch. In this paper, we aim to understand the influence of auditory stimuli on the haptic sensory illusion that arises when touching virtual objects in an MR environment. Specifically, we examined how adding a simulated contact sound when a user touches a virtual object with the palm affects the magnitude and impression of the haptic sensory illusion. Regarding magnitude, the illusion was strengthened by adding the auditory stimuli. Regarding impression, a soft impression changed to a hard impression when the auditory stimuli were added, in one of the trials.
Authors
新島 有信 小川 剛史
Publisher
特定非営利活動法人 日本バーチャルリアリティ学会
Journal
日本バーチャルリアリティ学会論文誌 (ISSN:1344011X)
Volume, issue, pages, and publication date
vol.21, no.1, pp.93-100, 2016-03-31 (Released:2017-02-01)

In this paper, we investigated how to control a phantom sensation with visual stimuli. A phantom sensation is a tactile illusion caused by vibration stimuli. Some previous works employed vibration motors for a tactile display and utilized the phantom sensation to present tactile stimuli over a large area with only a few vibration motors. Our research aims to control tactile perception with visual stimuli; our previous work showed that visual stimuli influence tactile perception, which suggested that it should also be possible to control a phantom sensation with visual stimuli. We built a primitive visual-tactile display and conducted several experiments. The results showed that visual stimuli influenced the phantom sensation and that it appears possible to evoke or suppress a phantom sensation with visual stimuli.
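For readers unfamiliar with the illusion, here is a minimal sketch of how a phantom sensation is typically produced with two vibration motors: driving them with complementary amplitudes yields a single illusory vibration felt between them. The linear panning law below is an assumption for illustration, not the display described in this paper.

```python
def phantom_amplitudes(t, max_amp=1.0):
    """t in [0, 1]: 0 = at motor A, 1 = at motor B. Linear amplitude panning."""
    t = max(0.0, min(t, 1.0))
    return (max_amp * (1.0 - t), max_amp * t)  # (amplitude A, amplitude B)

# e.g. t = 0.5 drives both motors equally, placing the phantom midway
for t in (0.0, 0.25, 0.5, 1.0):
    print(t, phantom_amplitudes(t))
```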