Authors
Sorn Jarukasemratana Tsuyoshi Murata
Publisher
Information and Media Technologies Editorial Board
Journal
Information and Media Technologies (ISSN:18810896)
Volume, issue, pages, and date
vol.8, no.4, pp.944-960, 2013 (Released:2013-12-15)
Number of references
50

Large graph visualization tools are important instruments for researchers who need to understand large graph data sets. Many such tools are currently available, some for download and use under free licenses and others presented in research papers or journals, each with its own functionality and capabilities. This review introduces these large graph visualization tools and emphasizes the advantages of each over the others. The criteria for selecting the tools reviewed here are that the tool was published recently (2009 or later) or that a new version was released within the last two years. The tools reviewed in this paper are igraph, Gephi, Cytoscape, Tulip, WiGis, CGV, VisANT, Pajek, In Situ Framework, Honeycomb, and two visualization toolkits, the JavaScript InfoVis Toolkit and GraphGL. The last part of the review presents our suggestions for building a large graph visualization platform based on the advantages of the tools and toolkits reviewed.
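To make the workflow of these tools concrete, here is a minimal sketch (ours, not from the review) using the Python bindings of igraph, one of the reviewed tools, to generate, lay out, and render a large synthetic graph; the graph generator, the DrL layout, and the drawing parameters are illustrative choices, not recommendations from the paper.

```python
# Hypothetical example: drawing a large synthetic graph with python-igraph.
import igraph as ig

g = ig.Graph.Barabasi(n=10000, m=3)        # scale-free graph with 10,000 vertices
layout = g.layout("drl")                   # DrL: a force-directed layout aimed at large graphs
ig.plot(g, "graph.png", layout=layout,
        vertex_size=2, edge_width=0.1)     # small marks keep the global structure visible
```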
Authors
Kenji Imamura Kuniko Saito Kugatsu Sadamitsu Hitoshi Nishikawa
Publisher
Information and Media Technologies Editorial Board
Journal
Information and Media Technologies (ISSN:18810896)
Volume, issue, pages, and date
vol.9, no.4, pp.834-856, 2014 (Released:2014-12-15)
Number of references
20

This paper shows how to correct grammatical errors involving Japanese particles made by learners of Japanese. Our method is based on discriminative sequence conversion, which converts one word sequence into another and corrects particle errors by substitution, insertion, or deletion. However, it is difficult to collect large learners' corpora. We address this problem with a discriminative learning framework that uses the following two methods. First, language model probabilities obtained from large raw text corpora are combined with n-gram binary features obtained from learners' corpora; the combination is used to measure the accuracy of Japanese sentences. Second, automatically generated pseudo-error sentences are added to the learners' corpora to enrich them directly. Furthermore, we apply domain adaptation, in which the pseudo-error sentences (the source domain) are adapted to the real error sentences (the target domain). Experiments show that the recall rate improves when language model probabilities and n-gram binary features are used together, and that pseudo-error sentences with domain adaptation yield stable improvement.
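As a rough illustration of the first method, the sketch below (ours, with hypothetical names throughout) scores a candidate corrected sentence by combining a language-model log-probability from a large raw corpus with the weights of bigram binary features learned from a learners' corpus.

```python
# Minimal sketch: combine an LM probability feature with n-gram binary features.
def score(words, lm_logprob, bigram_weights, w_lm=1.0):
    """Linear model: w_lm * log P_LM(sentence) + sum of fired bigram feature weights."""
    s = w_lm * lm_logprob(words)                              # real-valued LM feature
    for i in range(len(words) - 1):
        s += bigram_weights.get((words[i], words[i + 1]), 0.0)  # binary feature fires
    return s
```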
Authors
Tsuyoshi Tasaki Shohei Matsumoto Hayato Ohba Shunichi Yamamoto Mitsuhiko Toda Kazunori Komatani Tetsuya Ogata Hiroshi G. Okuno
Publisher
Information and Media Technologies Editorial Board
Journal
Information and Media Technologies (ISSN:18810896)
Volume, issue, pages, and date
vol.1, no.1, pp.285-295, 2006 (Released:2006-06-15)
Number of references
21

Research on human-robot interaction is attracting increasing attention. Because most research has dealt with communication between one robot and one person, few researchers have studied communication between a robot and multiple people. This paper presents a method that enables robots to communicate with multiple people using a “selection priority of the interactive partner” based on the concept of proxemics. In this method, a robot changes its active sensory-motor modalities according to the interaction distance between itself and each person. We implemented our method in the humanoid robot SIG2, which has various sensory-motor modalities for interacting with humans. A demonstration with SIG2 showed that our method selects an appropriate interaction partner during interaction with multiple people.
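The following sketch (ours, not SIG2's actual configuration) illustrates the idea of distance-dependent modality selection; the thresholds follow Hall's proxemic zones, and the modality names and partner selection rule are hypothetical.

```python
# Hypothetical proxemics-based selection of active sensory-motor modalities.
def active_modalities(distance_m):
    if distance_m < 0.45:      # intimate distance: touch is usable
        return ["touch", "speech"]
    elif distance_m < 1.2:     # personal distance: face-to-face dialogue
        return ["speech", "face_tracking"]
    elif distance_m < 3.6:     # social distance: audio-visual localization
        return ["sound_localization", "face_tracking"]
    else:                      # public distance: vision only
        return ["person_detection"]

def select_partner(people):    # people: list of (person_id, distance_m)
    """Prefer the closest person within conversational range."""
    candidates = [p for p in people if p[1] < 3.6]
    return min(candidates, key=lambda p: p[1])[0] if candidates else None
```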
Authors
Daniel Sangorrín Shinya Honda Hiroaki Takada
Publisher
Information and Media Technologies Editorial Board
Journal
Information and Media Technologies (ISSN:18810896)
Volume, issue, pages, and date
vol.8, no.1, pp.1-17, 2013 (Released:2013-03-15)
Number of references
32

Dual-OS communications allow a real-time operating system (RTOS) and a general-purpose operating system (GPOS), sharing the same processor through virtualization, to collaborate in complex distributed applications. However, they also introduce new threats to the reliability (e.g., memory and time isolation) of the RTOS that need to be considered. Traditional dual-OS communication architectures follow essentially the same conservative approach, which consists of extending the virtualization layer with new communication primitives. Although this approach can address the aforementioned reliability threats, it imposes a rather large overhead on communications due to unnecessary data copies and context switches. In this paper, we propose a new dual-OS communications approach that accomplishes efficient communications without compromising the reliability of the RTOS. We implemented our architecture on a physical platform using a highly reliable dual-OS system (SafeG), which leverages ARM TrustZone hardware to guarantee the reliability of the RTOS. The evaluation results show that our approach minimizes communication overhead while satisfying the strict reliability requirements of the RTOS.
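To show the flavor of a copy-avoiding channel, the schematic sketch below (ours, not SafeG's actual mechanism) models a single-producer/single-consumer ring buffer placed in a memory region visible to both operating systems, so that a message is written once and read in place rather than copied through the virtualization layer; all names and sizes are illustrative.

```python
# Schematic single-producer/single-consumer ring buffer over "shared" memory.
class SPSCRing:
    def __init__(self, slots, slot_size):
        self.buf = bytearray(slots * slot_size)   # stands in for a shared memory region
        self.slots, self.slot_size = slots, slot_size
        self.head = 0   # advanced only by the consumer
        self.tail = 0   # advanced only by the producer

    def try_send(self, payload: bytes) -> bool:
        nxt = (self.tail + 1) % self.slots
        if nxt == self.head:                      # full: fail fast, never block the RTOS
            return False
        off = self.tail * self.slot_size
        self.buf[off:off + len(payload)] = payload
        self.tail = nxt                           # publish only after the write completes
        return True

    def try_recv(self):
        if self.head == self.tail:                # empty
            return None
        off = self.head * self.slot_size
        msg = bytes(self.buf[off:off + self.slot_size])
        self.head = (self.head + 1) % self.slots
        return msg
```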
Authors
Yuta Harada Hirotaka Ono Kunihiko Sadakane Masafumi Yamashita
Publisher
Information and Media Technologies Editorial Board
Journal
Information and Media Technologies (ISSN:18810896)
Volume, issue, pages, and date
vol.2, no.4, pp.1103-1112, 2007 (Released:2007-12-15)
Number of references
9

The matching of a bipartite graph is a structure that appears in various assignment problems and has long been studied. The semi-matching is an extension of the matching for a bipartite graph G = (U ∪ V, E). It is defined as a set of edges M ⊆ E such that each vertex in U is an endpoint of exactly one edge in M. The load-balancing problem is the problem of finding a semi-matching in which the degrees of the vertices in V are balanced. This problem has been studied in the context of task scheduling, where the goal is to find a “balanced” assignment of tasks to machines, and an O(|E||U|) time algorithm has been proposed. On the other hand, in some practical problems balanced assignments alone are not sufficient, e.g., when assigning wireless stations (users) to access points (APs) in wireless networks: the quality of transmission depends on the distance between a user and its AP, and shorter distances are more desirable. In this paper, we formulate the min-weight load-balancing problem, which asks for a balanced semi-matching of a weighted bipartite graph that minimizes the total weight. We then give an optimality condition for weighted semi-matchings and propose an O(|E||U||V|) time algorithm.
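For intuition, the sketch below (ours, not the paper's algorithm) builds a semi-matching greedily by always assigning the next vertex of U to its least-loaded neighbor in V; the paper's O(|E||U||V|) algorithm goes further, producing a balanced semi-matching of minimum total weight.

```python
# Illustrative greedy semi-matching: every u in U gets exactly one edge.
def greedy_semi_matching(adj):
    """adj: dict mapping each u in U to its list of neighbors in V."""
    load, matching = {}, {}
    for u, neighbors in adj.items():
        v = min(neighbors, key=lambda v: load.get(v, 0))  # least-loaded neighbor
        matching[u] = v
        load[v] = load.get(v, 0) + 1
    return matching

# Example: tasks U = {t1, t2, t3}, machines V = {m1, m2}
print(greedy_semi_matching({"t1": ["m1", "m2"], "t2": ["m1"], "t3": ["m1", "m2"]}))
# -> {'t1': 'm1', 't2': 'm1', 't3': 'm2'}
```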
Authors
Dan Han Pascual Martínez-Gómez Yusuke Miyao Katsuhito Sudoh Masaaki Nagata
Publisher
Information and Media Technologies Editorial Board
Journal
Information and Media Technologies (ISSN:18810896)
Volume, issue, pages, and date
vol.9, no.3, pp.272-301, 2014 (Released:2014-09-15)
Number of references
44

In statistical machine translation, Chinese and Japanese form a well-known long-distance language pair that causes difficulties for word alignment techniques. Pre-reordering methods have proven efficient and effective; however, they need reliable parsers to extract the syntactic structure of the source sentences. On the one hand, we propose a framework in which only part-of-speech (POS) tags and unlabeled dependency parse trees are used to minimize the influence of parse errors, and linguistic knowledge about structural differences is encoded in the form of reordering rules. We show significant improvements in the translation quality of sentences in the news domain over state-of-the-art reordering methods. On the other hand, we explore the relationship between dependency parsing and our pre-reordering method from two aspects: POS tags and dependencies. We observe the effects of different parse errors on reordering performance by combining empirical and descriptive approaches. In the empirical approach, we quantify the distribution of general parse errors along with reordering quality. In the descriptive approach, we extract seven influential error patterns and examine their correlations with reordering errors.
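A hedged sketch of rule-based pre-reordering on an unlabeled dependency tree with POS tags, in the spirit of the framework described above: the single rule below (move a verb's right-side noun dependents before the verb, approximating Chinese SVO word order to Japanese SOV) is our illustrative example, not one of the paper's actual rules.

```python
# Illustrative pre-reordering with one hypothetical rule.
def reorder(tokens, pos, heads):
    """tokens: words; pos: POS tag per token; heads: head index per token (-1 = root)."""
    order = list(range(len(tokens)))
    for v, tag in enumerate(pos):
        if tag != "V":
            continue
        right_nouns = [d for d in range(len(tokens))
                       if heads[d] == v and d > v and pos[d] == "N"]
        for d in right_nouns:                  # move each object before its verb head
            order.remove(d)
            order.insert(order.index(v), d)
    return [tokens[i] for i in order]

# "wo xihuan pingguo" (I like apples) -> "wo pingguo xihuan" (I apples like)
print(reorder(["wo", "xihuan", "pingguo"], ["N", "V", "N"], [1, -1, 1]))
```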
Authors
Munehiro Takimoto
Publisher
Information and Media Technologies Editorial Board
Journal
Information and Media Technologies (ISSN:18810896)
Volume, issue, pages, and date
vol.7, no.2, pp.659-666, 2012 (Released:2012-06-15)
Number of references
15

Partial dead code elimination (PDE) is a powerful code optimization technique that extends dead code elimination with code motion. PDE eliminates assignments that are dead on some execution paths and live on others. Hence, it can not only eliminate partially dead assignments but also move loop-invariant assignments out of loops. These effects are achieved by interleaving dead code elimination and code sinking, so it is important to capture the second-order effects between the two, which can be exposed by repeated application; repetition, however, is costly. This paper proposes a technique that applies PDE to each assignment on demand. Our technique checks the safety of each code motion so that no execution path becomes longer. Because checking occurs on a demand-driven basis, the checking range can be restricted. In addition, because a demand-driven analysis can determine whether an assignment should be inserted at the blocking point of a code motion, PDE analysis can be localized to a restricted region. Furthermore, owing to the demand-driven property, our technique can be applied to each statement in reverse postorder on the reverse control flow graph, allowing it to capture many second-order effects. We have implemented our technique as a code optimization phase and compared it with previous studies in terms of the optimization cost and the execution cost of the target code. The results show that our technique is as efficient as a single application of PDE and as effective as multiple applications of PDE.
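The classic motivating case is easy to see in code. In the sketch below (our illustration, not from the paper), the assignment to x in f_before is partially dead: it is computed on every path but used only when cond holds, so PDE sinks it into the branch where it is live.

```python
# Before PDE: x = a + b is partially dead.
def f_before(a, b, cond):
    x = a + b          # executed even on the path where the result is never used
    if cond:
        return x
    return 0

# After PDE: the assignment is sunk to the only path on which x is live,
# shortening the other path.
def f_after(a, b, cond):
    if cond:
        x = a + b
        return x
    return 0
```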
Authors
Satoshi Iwata Kenji Kono
Publisher
Information and Media Technologies Editorial Board
Journal
Information and Media Technologies (ISSN:18810896)
Volume, issue, pages, and date
vol.7, no.1, pp.141-152, 2012

Performance anomalies in web applications are becoming a serious problem, and the increasing complexity of modern web applications has made it much more difficult to identify their root causes. The first step toward hunting for root causes is to narrow down the suspicious components that cause performance anomalies. However, even this is difficult when several performance anomalies occur simultaneously in a web application, because we have to determine whether or not their root causes are the same. We propose a novel method, called performance anomaly clustering, that helps us narrow down suspicious components by clustering anomalies based on their root causes. If two anomalies are clustered together, they are affected by the same root cause; otherwise, they are affected by different root causes. The key insight behind our method is that anomalous measurements that are negatively affected by the same root cause deviate similarly from standard measurements. We compute the similarity of deviations from the non-anomalous distribution of measurements and cluster anomalies based on this similarity. The results of case studies conducted with RUBiS, an auction prototype modeled after eBay.com, are encouraging. Our clustering method output clusters that were crucial in the search for root causes. Guided by the clustering results, we searched for components used exclusively by each cluster and successfully determined suspicious components, such as the Apache web server, Enterprise Beans, and methods in Enterprise Beans. The root causes we found included shortages of network connections, inadequate indices in the database, and the incorrect issuing of SQL statements.
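The core computation can be sketched in a few lines (ours, with illustrative metrics, values, and thresholds): each anomaly is represented by how far its measurements deviate from the non-anomalous baseline, and anomalies with similar deviation patterns are clustered together.

```python
# Minimal sketch of deviation-based anomaly clustering.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

baseline_mean = np.array([120.0, 0.30, 45.0])   # e.g. latency_ms, cpu_util, db_time_ms
baseline_std = np.array([15.0, 0.05, 8.0])

anomalies = np.array([
    [480.0, 0.31, 44.0],   # anomaly A: latency blows up, database normal
    [510.0, 0.33, 46.0],   # anomaly B: deviates like A -> likely the same root cause
    [130.0, 0.32, 210.0],  # anomaly C: database time blows up -> a different cause
])

deviation = (anomalies - baseline_mean) / baseline_std    # z-score deviation vectors
Z = linkage(deviation, method="average", metric="cosine") # similar shapes cluster together
print(fcluster(Z, t=0.5, criterion="distance"))           # -> [1 1 2]
```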
Authors
Yasuto Arakaki Hayaru Shouno Kazuyuki Takahashi Takashi Morie
Publisher
Information and Media Technologies Editorial Board
Journal
Information and Media Technologies (ISSN:18810896)
Volume, issue, pages, and date
vol.7, no.4, pp.1480-1488, 2012 (Released:2012-12-15)
Number of references
15

For the detection of generic objects in image processing, histograms of oriented gradients (HOG) have been widely discussed in recent years. Classification systems using HOG show good results; however, the performance of the HOG descriptor is influenced by the size of the object to be detected. To overcome this problem, we introduce a kind of hierarchy inspired by the convolutional network, a model of the visual processing system in the brain. The hierarchical HOG (H-HOG) integrates several scales of HOG descriptors in its architecture and represents the input image as a combination of features more complex than plain orientation gradients. We investigate the performance of H-HOG and compare it with that of conventional HOG. The results show better performance than conventional HOG; in particular, the dimensionality of the representation is much smaller than that of conventional HOG without reducing detection performance.
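The multi-scale idea can be sketched as follows (our illustration of the concept, not the paper's exact architecture): compute HOG descriptors at several cell sizes and concatenate them, so that both coarse and fine orientation structure are represented.

```python
# Illustrative multi-scale HOG feature: concatenate descriptors at several cell sizes.
import numpy as np
from skimage import data, transform
from skimage.feature import hog

image = transform.resize(data.camera(), (128, 128))  # any grayscale image

features = np.concatenate([
    hog(image, orientations=8, pixels_per_cell=(c, c), cells_per_block=(2, 2))
    for c in (8, 16, 32)                              # fine -> coarse scales
])
print(features.shape)  # one combined descriptor for the whole image
```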