- Authors
- 中臺 一博 (Kazuhiro Nakadai), 日台 健一 (Ken-ichi Hidai), 溝口 博 (Hiroshi Mizoguchi), 奥乃 博 (Hiroshi G. Okuno), 北野 宏明 (Hiroaki Kitano)
- Publisher
- 一般社団法人 日本ロボット学会 (The Robotics Society of Japan)
- Journal
- 日本ロボット学会誌 (Journal of the Robotics Society of Japan) (ISSN:02891824)
- Volume, issue, pages, and publication date
- vol.21, no.5, pp.517-525, 2003-07-15
- Number of references
- 11
- Number of citations
- 6, 3
This paper describes a real-time human tracking system based on audio-visual integration for the humanoid *SIG*. The essential idea for real-time, robust tracking is the hierarchical integration of multi-modal information. The system creates three kinds of streams: auditory, visual, and associated streams. An auditory stream with a sound source direction is formed as a temporal series of events from the audition module, which localizes multiple sound sources and cancels motor noise using a pair of microphones. A visual stream with a face ID and its 3D position is formed as a temporal series of events from the vision module by combining face detection, face identification, and face localization by stereo vision. Auditory and visual streams are associated into an associated stream, a higher-level representation, according to their proximity. Because the associated stream disambiguates partially missing information in the auditory or visual streams, the "focus-of-attention" control of *SIG* works well enough to achieve robust human tracking. These processes are executed in real time with a delay of 200 msec on off-the-shelf PCs connected via TCP/IP. As a result, robust human tracking is attained even when a person is visually occluded or simultaneous speeches occur.
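The core association step, matching an auditory stream's sound direction against a visual stream's face direction and fusing them when they are sufficiently close, can be sketched as follows. This is a minimal illustration under assumptions, not the paper's implementation: the stream classes, the azimuth computation, and the 10-degree threshold are all hypothetical choices for the example.

```python
import math
from dataclasses import dataclass

@dataclass
class AuditoryStream:
    """Temporal series of sound-source azimuths (degrees) from the audition module."""
    directions: list

@dataclass
class VisualStream:
    """Temporal series of 3D face positions (x, y, z) with a face ID from the vision module."""
    face_id: str
    positions: list

    def azimuth(self, t):
        # Project the 3D face position onto the horizontal plane and take its azimuth.
        x, y, _ = self.positions[t]
        return math.degrees(math.atan2(y, x))

def associate(auditory, visual, t, threshold_deg=10.0):
    """Form an associated pair at time t when the two directions are proximate.

    Returns the (auditory, visual) pair on success, or None otherwise.
    The 10-degree threshold is an assumption, not a value from the paper.
    """
    diff = abs(auditory.directions[t] - visual.azimuth(t))
    diff = min(diff, 360.0 - diff)  # handle azimuth wrap-around
    return (auditory, visual) if diff <= threshold_deg else None

# Example: a voice at ~12 degrees and a face at ~11.3 degrees get associated.
aud = AuditoryStream(directions=[12.0])
vis = VisualStream(face_id="person-1", positions=[(1.0, 0.2, 0.0)])
print(associate(aud, vis, t=0) is not None)  # True
```

An associated stream formed this way carries both the face ID and the sound direction, which is what lets the focus-of-attention control fall back on the remaining modality when the other is occluded or masked.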