Authors
Yoshifumi Manabe Tatsuaki Okamoto
Publisher
Information and Media Technologies Editorial Board
Journal
Information and Media Technologies (ISSN:18810896)
Volume/Issue/Pages/Date
vol.7, no.3, pp.992-999, 2012 (Released:2012-09-15)
Number of references
18

This paper discusses cake-cutting protocols when the cake is a heterogeneous good, represented by an interval on the real line. We propose a new desirable property, the meta-envy-freeness of cake-cutting, which has not been formally considered before. Meta-envy-freeness means that there is no envy over role assignments, that is, no party wants to exchange his/her role in the protocol with that of any other party. If there is envy over role assignments, the protocol cannot actually be executed, because there is no agreement on which party plays which role in the protocol. A similar definition, envy-freeness, is widely discussed. Envy-freeness means that no player wants to exchange his/her part of the cake with that of any other player. Though envy-freeness has been considered one of the most important desirable properties, it does not prevent envy about role assignment in protocols. We define meta-envy-freeness to formalize this kind of envy, and we propose that simultaneously achieving meta-envy-freeness and envy-freeness is desirable in cake-cutting. We show that current envy-free cake-cutting protocols do not satisfy meta-envy-freeness. Formerly proposed properties such as strong envy-freeness, exactness, and equitability do not directly consider this type of envy, and they are very difficult to realize. This paper then shows cake-cutting protocols for the two- and three-party cases that simultaneously achieve envy-freeness and meta-envy-freeness. Lastly, we show meta-envy-free pie-cutting protocols.
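
To make the role-assignment issue concrete, the following sketch (not from the paper; the valuation densities are hypothetical) simulates the classic two-party cut-and-choose protocol, which is envy-free but not meta-envy-free: with these valuations, each party obtains more by playing the chooser, so the parties cannot agree on who should cut.

```python
# Cut-and-choose: the divider cuts the cake [0, 1] into two pieces of
# equal value to himself; the chooser takes the piece she values more.
# Valuations are hypothetical piecewise-constant densities integrating to 1.

def measure(density, a, b, steps=10000):
    """Numerically integrate a valuation density over [a, b]."""
    h = (b - a) / steps
    return sum(density(a + (i + 0.5) * h) for i in range(steps)) * h

def cut_and_choose(divider, chooser):
    lo, hi = 0.0, 1.0
    for _ in range(50):  # bisect for the divider's halving point
        mid = (lo + hi) / 2
        if measure(divider, 0, mid) < 0.5:
            lo = mid
        else:
            hi = mid
    x = (lo + hi) / 2
    return 0.5, max(measure(chooser, 0, x), measure(chooser, x, 1))

p1 = lambda t: 2.0 if t < 0.5 else 0.0  # only values the left half
p2 = lambda t: 0.5 if t < 0.5 else 1.5  # prefers the right half

print(cut_and_choose(p1, p2))  # p1 divides: (0.5, 0.875)
print(cut_and_choose(p2, p1))  # p2 divides: (0.5, 1.0) -- both prefer choosing
```
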
Authors
Sorn Jarukasemratana Tsuyoshi Murata
Publisher
Information and Media Technologies Editorial Board
Journal
Information and Media Technologies (ISSN:18810896)
Volume/Issue/Pages/Date
vol.8, no.4, pp.944-960, 2013 (Released:2013-12-15)
Number of references
50

Large graph visualization tools are important instruments for researchers to understand large graph data sets. Many tools are currently available, some for download and use under free licenses and others presented in research papers or journals, each with its own functionalities and capabilities. This review gives an introduction to these large graph visualization tools and emphasizes their advantages over other tools. The criteria for selecting the tools reviewed here are that the tool was recently published (2009 or later) or that a new version was released during the last two years. The tools reviewed in this paper are igraph, Gephi, Cytoscape, Tulip, WiGis, CGV, VisANT, Pajek, In Situ Framework, Honeycomb, and two visualization toolkits, the JavaScript InfoVis Toolkit and GraphGL. The last part of the review presents our suggestions for building a large graph visualization platform based on the advantages of the tools and toolkits reviewed.
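
As a taste of one reviewed tool, here is a minimal sketch using igraph's Python binding (package python-igraph; writing the image additionally requires pycairo), on a hypothetical random graph standing in for a large dataset:

```python
import igraph as ig

# Generate a hypothetical random graph in place of a real large dataset.
g = ig.Graph.Erdos_Renyi(n=500, p=0.01)

# A force-directed layout such as Fruchterman-Reingold ("fr") is a
# common default for graphs of this size.
layout = g.layout("fr")
ig.plot(g, "graph.png", layout=layout, vertex_size=4)
```
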
Authors
Ippei Torii Kaoruko Ohtani Takahito Niwa Naohiro Ishii
Publisher
Information and Media Technologies Editorial Board
Journal
Information and Media Technologies (ISSN:18810896)
Volume/Issue/Pages/Date
vol.7, no.3, pp.1173-1179, 2012 (Released:2012-09-15)
Number of references
13

This paper attempts to revitalize a large-scale shopping district (shotengai) using new Internet techniques. The decline of shotengai has recently become a serious problem due to the development of large shopping centers. We took a new approach, using Internet techniques, to revitalizing the shotengai, a typical Japanese form of shopping district. The Osu Shotengai, which includes about 400 stores, is one of the most famous shotengai in Nagoya, Japan. We developed the official Osu shotengai web site, called "At Osu." First, we collected information on the 400 stores in the Osu shotengai, which comprises 9 streets. Then we created an interactive "Information Visualization System" to put fresh shotengai information on the web site in real time. It includes a "Comment Upload System," through which store owners can upload comments and news directly to the web site. Furthermore, we developed a new approach to stimulating store owners' motivation to participate in the web site, and we also describe an attractive, interactive web design that uses Twitter to gather users' opinions. Since the new web site was launched, the number of visitors to "At Osu" has increased rapidly. Many articles about this new approach to revitalizing a shotengai with a web site have been published in newspapers and magazines, and we have received many inquiries.
Authors
Kenji Imamura Kuniko Saito Kugatsu Sadamitsu Hitoshi Nishikawa
Publisher
Information and Media Technologies Editorial Board
Journal
Information and Media Technologies (ISSN:18810896)
Volume/Issue/Pages/Date
vol.9, no.4, pp.834-856, 2014 (Released:2014-12-15)
Number of references
20

This paper shows how to correct grammatical errors involving Japanese particles made by learners of Japanese. Our method is based on discriminative sequence conversion, which converts one sequence of words into another and corrects particle errors by substitution, insertion, or deletion. However, it is difficult to collect large learners' corpora. We solve this problem with a discriminative learning framework that uses the following two methods. First, language model probabilities obtained from large, raw text corpora are combined with n-gram binary features obtained from learners' corpora. This method is used to measure the accuracy of Japanese sentences. Second, automatically generated pseudo-error sentences are added to the learners' corpora to enrich them directly. Furthermore, we apply domain adaptation, in which the pseudo-error sentences (the source domain) are adapted to the real error sentences (the target domain). Experiments show that the recall rate is improved by using both language model probabilities and n-gram binary features, and that stable improvement is achieved by using pseudo-error sentences with domain adaptation.
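
The pseudo-error generation step can be sketched as follows; the particle list, corruption rate, and restriction to substitution/deletion are illustrative assumptions, not the paper's exact generator:

```python
# Generate pseudo-error sentences by corrupting particles in correct,
# tokenized Japanese text (the paper also considers insertion errors).
import random

PARTICLES = ["は", "が", "を", "に", "で", "と", "へ", "から"]

def make_pseudo_errors(tokens, p=0.3, seed=1):
    """Randomly substitute or delete particles in a tokenized sentence."""
    rng = random.Random(seed)
    out = []
    for tok in tokens:
        if tok in PARTICLES and rng.random() < p:
            if rng.random() < 0.5:  # substitute with a different particle
                out.append(rng.choice([x for x in PARTICLES if x != tok]))
            # else: delete the particle entirely
        else:
            out.append(tok)
    return out

correct = ["私", "は", "学校", "に", "行く"]
print(make_pseudo_errors(correct))  # e.g. ['私', '学校', 'に', '行く']
```
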
Authors
Tsuyoshi Tasaki Shohei Matsumoto Hayato Ohba Shunichi Yamamoto Mitsuhiko Toda Kazunori Komatani Tetsuya Ogata Hiroshi G. Okuno
Publisher
Information and Media Technologies Editorial Board
Journal
Information and Media Technologies (ISSN:18810896)
Volume/Issue/Pages/Date
vol.1, no.1, pp.285-295, 2006 (Released:2006-06-15)
Number of references
21

Research on human-robot interaction is attracting an increasing amount of attention. Since most research has dealt with communication between one robot and one person, few researchers have studied communication between a robot and multiple people. This paper presents a method that enables robots to communicate with multiple people by deciding the "selection priority of the interactive partner" based on the concept of proxemics. In this method, a robot changes its active sensory-motor modalities based on the interaction distance between itself and a person. Our method was implemented in a humanoid robot, SIG2, which has various sensory-motor modalities for interacting with humans. A demonstration showed that SIG2 selected appropriate interaction partners while interacting with multiple people.
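
A minimal sketch of distance-driven modality selection; the thresholds (taken loosely from proxemics' personal/social/public zones) and the modality sets are illustrative assumptions, not SIG2's actual configuration:

```python
def active_modalities(distance_m):
    """Select sensory-motor modalities by interaction distance."""
    if distance_m < 1.2:    # personal distance: rich interaction
        return {"speech", "face_tracking", "gesture"}
    elif distance_m < 3.6:  # social distance
        return {"speech", "sound_localization"}
    else:                   # public distance: attract attention only
        return {"sound_localization"}

def select_partner(people):
    """Give selection priority to the nearest person."""
    return min(people, key=lambda p: p["distance"])

people = [{"name": "A", "distance": 0.8}, {"name": "B", "distance": 2.5}]
partner = select_partner(people)
print(partner["name"], active_modalities(partner["distance"]))
```
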
Authors
Daniel Sangorrín Shinya Honda Hiroaki Takada
Publisher
Information and Media Technologies Editorial Board
Journal
Information and Media Technologies (ISSN:18810896)
Volume/Issue/Pages/Date
vol.8, no.1, pp.1-17, 2013 (Released:2013-03-15)
Number of references
32

Dual-OS communications allow a real-time operating system (RTOS) and a general-purpose operating system (GPOS), sharing the same processor through virtualization, to collaborate in complex distributed applications. However, they also introduce new threats to the reliability (e.g., memory and time isolation) of the RTOS that need to be considered. Traditional dual-OS communication architectures follow essentially the same conservative approach, which consists of extending the virtualization layer with new communication primitives. Although this approach may be able to address the aforementioned reliability threats, it imposes considerable overhead on communications due to unnecessary data copies and context switches. In this paper, we propose a new dual-OS communications approach that accomplishes efficient communications without compromising the reliability of the RTOS. We implemented our architecture on a physical platform using a highly reliable dual-OS system (SafeG), which leverages ARM TrustZone hardware to guarantee the reliability of the RTOS. The evaluation results show that our approach is effective at minimizing communication overhead while satisfying the strict reliability requirements of the RTOS.
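
Conceptually, avoiding data copies means producer and consumer operate on one shared buffer rather than copying through the virtualization layer. The single-process Python ring buffer below only illustrates that principle; SafeG itself would use physically shared memory and TrustZone-mediated notifications:

```python
class SharedRing:
    """Single-producer/single-consumer ring over one shared buffer."""
    def __init__(self, slots, slot_size):
        self.buf = bytearray(slots * slot_size)
        self.slots, self.slot_size = slots, slot_size
        self.head = 0  # advanced by the producer (GPOS side)
        self.tail = 0  # advanced by the consumer (RTOS side)

    def produce(self, data):
        if (self.head + 1) % self.slots == self.tail:
            return False  # full: never block the RTOS side
        off = self.head * self.slot_size
        self.buf[off:off + len(data)] = data  # write in place
        self.head = (self.head + 1) % self.slots
        return True

    def consume(self):
        if self.tail == self.head:
            return None  # empty
        off = self.tail * self.slot_size
        view = memoryview(self.buf)[off:off + self.slot_size]  # zero-copy view
        self.tail = (self.tail + 1) % self.slots
        return view

ring = SharedRing(slots=8, slot_size=16)
ring.produce(b"sensor-reading-1")
print(bytes(ring.consume()))
```
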
Authors
Yuta Harada Hirotaka Ono Kunihiko Sadakane Masafumi Yamashita
Publisher
Information and Media Technologies Editorial Board
Journal
Information and Media Technologies (ISSN:18810896)
Volume/Issue/Pages/Date
vol.2, no.4, pp.1103-1112, 2007 (Released:2007-12-15)
Number of references
9

The matching of a bipartite graph is a structure that appears in various assignment problems and has long been studied. The semi-matching is an extension of matching for a bipartite graph G = (U ∪ V, E). It is defined as a set of edges, M ⊆ E, such that each vertex in U is an endpoint of exactly one edge in M. The load-balancing problem is the problem of finding a semi-matching in which the degrees of the vertices in V are balanced. This problem has been studied in the context of task scheduling, to find a "balanced" assignment of tasks to machines, and an O(|E||U|) time algorithm has been proposed. On the other hand, in some practical problems, balance alone is not sufficient, e.g., when assigning wireless stations (users) to access points (APs) in wireless networks. In wireless networks, the quality of transmission depends on the distance between a user and its AP; shorter distances are more desirable. In this paper, we formulate the min-weight load-balancing problem of finding a balanced semi-matching that minimizes the total weight of a weighted bipartite graph. We then give an optimality condition for weighted semi-matchings and propose an O(|E||U||V|) time algorithm.
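
The structure itself is easy to state in code. The greedy rule below (assign each vertex in U to its lightest-loaded neighbor) is only a heuristic illustration of load balancing, not the paper's optimal algorithm:

```python
def greedy_semi_matching(U, adj):
    """adj[u] lists u's neighbors in V; returns {u: v} with each u
    matched exactly once, greedily balancing the loads on V."""
    load = {}
    matching = {}
    for u in U:
        v = min(adj[u], key=lambda x: load.get(x, 0))
        matching[u] = v
        load[v] = load.get(v, 0) + 1
    return matching

# Hypothetical instance: 4 tasks (U) and 2 machines (V).
adj = {"u1": ["v1"], "u2": ["v1", "v2"], "u3": ["v1", "v2"], "u4": ["v2"]}
print(greedy_semi_matching(["u1", "u2", "u3", "u4"], adj))
# {'u1': 'v1', 'u2': 'v2', 'u3': 'v1', 'u4': 'v2'} -- loads 2 and 2
```
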
Authors
Dan Han Pascual Martínez-Gómez Yusuke Miyao Katsuhito Sudoh Masaaki Nagata
Publisher
Information and Media Technologies Editorial Board
Journal
Information and Media Technologies (ISSN:18810896)
Volume/Issue/Pages/Date
vol.9, no.3, pp.272-301, 2014 (Released:2014-09-15)
Number of references
44

In statistical machine translation, Chinese-Japanese is a well-known long-distance language pair that causes difficulties for word alignment techniques. Pre-reordering methods have proven efficient and effective; however, they need reliable parsers to extract the syntactic structure of the source sentences. On one hand, we propose a framework in which only part-of-speech (POS) tags and unlabeled dependency parse trees are used to minimize the influence of parse errors, and linguistic knowledge about structural differences is encoded in the form of reordering rules. We show significant improvements in the translation quality of sentences in the news domain over state-of-the-art reordering methods. On the other hand, we explore the relationship between dependency parsing and our pre-reordering method from two aspects: POS tags and dependencies. We observe the effects of different parse errors on reordering performance by combining empirical and descriptive approaches. In the empirical approach, we quantify the distribution of general parse errors along with reordering quality. In the descriptive approach, we extract seven influential error patterns and examine their correlations with reordering errors.
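
An illustrative reordering rule in the spirit of this framework, using only POS tags and unlabeled head indices; the specific rule (move a verb after its rightmost right-side dependent, turning Chinese SVO into Japanese-like SOV) is a simplification, not one of the paper's actual rules:

```python
def reorder_svo_to_sov(tokens, pos, heads):
    """tokens/pos are parallel lists; heads[i] is the index of token i's
    head (-1 for the root). Verbs are moved after their dependents."""
    order = list(range(len(tokens)))
    for v, tag in enumerate(pos):
        if tag == "VV":  # a verb head
            deps = [d for d, h in enumerate(heads) if h == v and d > v]
            if deps:
                order.remove(v)
                order.insert(order.index(max(deps)) + 1, v)
    return [tokens[i] for i in order]

# Hypothetical parsed Chinese sentence: "我 吃 苹果" (I eat apples).
print(reorder_svo_to_sov(["我", "吃", "苹果"], ["PN", "VV", "NN"], [1, -1, 1]))
# ['我', '苹果', '吃'] -- verb-final, matching Japanese word order
```
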
Authors
Munehiro Takimoto
Publisher
Information and Media Technologies Editorial Board
Journal
Information and Media Technologies (ISSN:18810896)
Volume/Issue/Pages/Date
vol.7, no.2, pp.659-666, 2012 (Released:2012-06-15)
Number of references
15

Partial dead code elimination (PDE) is a powerful code optimization technique that extends dead code elimination with code motion. PDE eliminates assignments that are dead on some execution paths and alive on others; hence, it can not only eliminate partially dead assignments but also move loop-invariant assignments out of loops. These effects are achieved by interleaving dead code elimination and code sinking, so it is important to capture the second-order effects between them, which can be exposed by repeated application. However, repetition is costly. This paper proposes a technique that applies PDE to each assignment on demand. Our technique checks the safety of each code motion so that no execution path becomes longer. Because checking occurs on a demand-driven basis, the checking range can be restricted. In addition, because a demand-driven analysis can check whether an assignment should be inserted at the blocking point of the code motion, PDE analysis can be localized to a restricted region. Furthermore, using the demand-driven property, our technique can be applied to each statement in reverse postorder on the reverse control flow graph, allowing it to capture many second-order effects. We have implemented our technique as a code optimization phase and compared it with previous studies in terms of the optimization and execution costs of the target code. As a result, our technique is as efficient as a single application of PDE and as effective as multiple applications of PDE.
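
To make the target of the optimization concrete, here is a small before/after illustration (written in Python only for readability; PDE operates on compiler intermediate code). The assignment to a is dead on the path where cond is false, so sinking it into the branch that uses it shortens that path:

```python
def before(cond, x):
    a = x * x          # partially dead: unused when cond is False
    if cond:
        return a
    return 0

def after(cond, x):
    if cond:
        a = x * x      # sunk to the only path where 'a' is live
        return a
    return 0
```
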
Authors
Tomoki Watanabe Satoshi Ito Kentaro Yokoi
Publisher
Information and Media Technologies Editorial Board
Journal
Information and Media Technologies (ISSN:18810896)
Volume/Issue/Pages/Date
vol.5, no.2, pp.659-667, 2010 (Released:2010-06-15)
Number of references
23

The purpose of the work reported in this paper is to detect humans in images. This paper proposes a method for extracting feature descriptors consisting of co-occurrence histograms of oriented gradients (CoHOG). By including co-occurrences with various positional offsets, the feature descriptors can express complex shapes of objects with local and global distributions of gradient orientations. Our method is evaluated with a simple linear classifier on two well-known human detection benchmarks: the "DaimlerChrysler pedestrian classification benchmark dataset" and the "INRIA person dataset". The results show that our method reduces the miss rate by half compared with HOG and outperforms the state-of-the-art methods on both datasets. Furthermore, as an example of a practical application, we applied our method to a surveillance video eight hours in length. The result shows that our method reduces false positives by half compared with HOG. In addition, CoHOG can be calculated 40% faster than HOG.
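
The following numpy sketch computes a co-occurrence histogram for a single positional offset; the full CoHOG descriptor concatenates such histograms over many offsets and small regions, and the parameters here are illustrative:

```python
import numpy as np

def cohog_single_offset(img, offset=(1, 1), n_bins=8):
    """Co-occurrence histogram of quantized gradient orientations
    between each pixel and the pixel at the given offset."""
    gy, gx = np.gradient(img.astype(float))
    ori = np.arctan2(gy, gx)  # gradient orientation in (-pi, pi]
    bins = ((ori + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    dy, dx = offset
    h, w = bins.shape
    a = bins[:h - dy, :w - dx]  # orientation at each pixel
    b = bins[dy:, dx:]          # orientation at the offset pixel
    hist = np.zeros((n_bins, n_bins))
    np.add.at(hist, (a.ravel(), b.ravel()), 1)  # co-occurrence counts
    return hist.ravel()

img = np.random.rand(32, 32)  # stand-in for a grayscale image patch
print(cohog_single_offset(img).shape)  # (64,) = 8x8 orientation pairs
```
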
Authors
Piyoros Tungthamthiti Kiyoaki Shirai Masnizah Mohd
Publisher
Information and Media Technologies Editorial Board
Journal
Information and Media Technologies (ISSN:18810896)
Volume/Issue/Pages/Date
vol.12, pp.80-102, 2017 (Released:2017-06-15)
Number of references
29

Recognition of sarcasm in microblogging is important for a range of NLP applications, such as opinion mining. However, this is a challenging task, as the real meaning of a sarcastic sentence is the opposite of its literal meaning. Furthermore, microblogging messages are short and usually written in a free style that may include misspellings, grammatical errors, and complex sentence structures. This paper proposes a novel method for identifying sarcasm in tweets. It combines two supervised classifiers: a Support Vector Machine (SVM) using N-gram features and an SVM using our proposed features. Our features represent the intensity and contradiction of sentiment in a tweet, derived by sentiment analysis. The sentiment contradiction feature also considers coherence among multiple sentences in the tweet, which is automatically identified by our proposed method using unsupervised clustering and an adaptive genetic algorithm. Furthermore, a method for identifying the concepts of unknown sentiment words is used to compensate for gaps in the sentiment lexicon. Our method also considers punctuation and the special symbols that are frequently used in Twitter messages. Experiments using two datasets demonstrated that our proposed system outperformed baseline systems on one dataset, while producing comparable results on the other; accuracies of 82% and 76% were achieved in sarcasm identification on the two datasets.
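
A minimal sklearn sketch of the two-classifier combination (toy tweets and made-up sentiment features stand in for the paper's corpora and proposed feature set):

```python
# Two SVMs combined by a soft vote over their decision margins. The
# sentiment features [positive score, negative score, contradiction
# flag, '!' count] are hypothetical stand-ins.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import SVC

tweets = ["i just love being ignored !!!", "great movie , watch it",
          "oh wonderful , another monday", "lovely dinner with friends"]
labels = [1, 0, 1, 0]  # 1 = sarcastic (toy labels)

ngram_vec = CountVectorizer(ngram_range=(1, 2))
X_ngram = ngram_vec.fit_transform(tweets)
X_sent = np.array([[0.9, 0.8, 1, 3], [0.7, 0.0, 0, 0],
                   [0.8, 0.6, 1, 0], [0.9, 0.1, 0, 0]])

svm_ngram = SVC(kernel="linear").fit(X_ngram, labels)
svm_sent = SVC(kernel="linear").fit(X_sent, labels)

def predict(i):
    """Average the two classifiers' margins and threshold at zero."""
    s = (svm_ngram.decision_function(X_ngram[i])
         + svm_sent.decision_function(X_sent[i:i + 1])) / 2
    return int(s[0] > 0)

print([predict(i) for i in range(len(tweets))])
```
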
Authors
Yu Liu Kento Emoto Kiminori Matsuzaki Zhenjiang Hu
Publisher
Information and Media Technologies Editorial Board
Journal
Information and Media Technologies (ISSN:18810896)
Volume/Issue/Pages/Date
vol.9, no.1, pp.73-82, 2014 (Released:2014-03-15)
Number of references
28

The MapReduce programming model has attracted a great deal of enthusiasm in both industry and academia, largely because it simplifies the implementation of many data-parallel applications. In spite of the simplicity of the programming model, many applications are hard to implement with MapReduce because of their inherent computational dependencies. In this paper we propose a new approach that uses the accumulate programming pattern on top of MapReduce to handle a large class of problems that cannot simply be divided into independent sub-computations. With this accumulate pattern, many problems that involve computational dependencies can be expressed easily, and the programs are then transformed into MapReduce programs executed on large clusters. Users without much knowledge of MapReduce can write programs in a sequential manner and still obtain efficient and scalable MapReduce programs. We describe the programming interface of our accumulate framework and explain how to transform a user-specified accumulate computation into an efficient MapReduce program. Our experiments and evaluations illustrate the usefulness and efficiency of the framework.
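
A minimal sketch of the pattern, under the assumption that per-block summaries can be merged with an associative operator (function names and the merge operator are illustrative, not the framework's actual interface):

```python
# An accumulate computation threads an accumulator through the input
# sequentially; when the updates admit an associative merge, the sweep
# splits into independent per-block summaries ("map") plus an ordered
# combine ("reduce"), which is the essence of running it on MapReduce.
from functools import reduce

def accumulate_seq(update, acc0, xs):
    """Sequential specification of accumulate."""
    acc = acc0
    for x in xs:
        acc = update(acc, x)
    return acc

def accumulate_mr(update, merge, acc0, blocks):
    """MapReduce-style evaluation: summarize each block independently,
    then merge the per-block results in block order."""
    partials = [reduce(update, block, acc0) for block in blocks]  # map
    return reduce(merge, partials)                                # reduce

xs = list(range(10))
blocks = [xs[0:5], xs[5:10]]  # as if distributed over two workers
total_seq = accumulate_seq(lambda a, x: a + x, 0, xs)
total_mr = accumulate_mr(lambda a, x: a + x, lambda a, b: a + b, 0, blocks)
assert total_seq == total_mr == 45
```
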
Authors
Sayaka Akioka Yuki Ohno Midori Sugaya Tatsuo Nakajima
Publisher
Information and Media Technologies Editorial Board
Journal
Information and Media Technologies (ISSN:18810896)
Volume/Issue/Pages/Date
vol.6, no.4, pp.1149-1157, 2011 (Released:2011-12-15)
Number of references
24

This paper proposes SPLiT (Scalable Performance Library Tool), a methodology for improving the performance of applications on multicore processors through on-the-fly CPU and cache optimizations. SPLiT is designed to relieve the difficulty of optimizing the performance of parallel applications on multicore processors; all programmers have to do to benefit from SPLiT is add a few library calls indicating which parts of the application should be analyzed. This simple but compelling optimization library helps enrich pervasive servers on a multicore processor, which is a strong candidate architecture for information appliances in the near future. SPLiT analyzes and predicts application behavior based on CPU cycle counts and cache misses. According to these analyses and predictions, SPLiT tries to allocate processes and threads that share data onto the same physical cores in order to enhance cache efficiency. SPLiT also tries to separate cache-effective code from code with more cache misses in order to avoid cache pollution, which degrades performance. Empirical experiments assuming web applications validated the efficiency of SPLiT: the performance of the web application was improved by 26%.
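
The paper's interface consists of a few library calls marking the regions to analyze. The sketch below is purely hypothetical Python (SPLiT is not a Python library); it only shows the shape of such region annotations and the per-region counters an analyzer could collect:

```python
import time
from collections import defaultdict

region_stats = defaultdict(lambda: {"calls": 0, "seconds": 0.0})

class analyzed_region:
    """Hypothetical stand-in for a SPLiT-style 'analyze this part' call."""
    def __init__(self, name):
        self.name = name
    def __enter__(self):
        self.t0 = time.perf_counter()
    def __exit__(self, *exc):
        s = region_stats[self.name]
        s["calls"] += 1
        s["seconds"] += time.perf_counter() - self.t0
        # A real tool would also read CPU-cycle and cache-miss counters
        # here, then use them to co-locate threads that share data.

with analyzed_region("request_handler"):
    sum(i * i for i in range(100000))  # stand-in for application work

print(dict(region_stats))
```
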
Authors
Satoshi Iwata Kenji Kono
Publisher
Information and Media Technologies Editorial Board
Journal
Information and Media Technologies (ISSN:18810896)
Volume/Issue/Pages/Date
vol.7, no.1, pp.141-152, 2012

Performance anomalies in web applications are becoming a huge problem, and the increasing complexity of modern web applications has made it much more difficult to identify their root causes. The first step toward hunting for root causes is to narrow down the suspicious components that cause performance anomalies. However, even this is difficult when several performance anomalies occur simultaneously in a web application; we have to determine whether or not their root causes are the same. We propose a novel method that helps us narrow down suspicious components, called "performance anomaly clustering," which clusters anomalies based on their root causes. If two anomalies are clustered together, they are affected by the same root cause; otherwise, they are affected by different root causes. The key insight behind our method is that anomaly measurements that are negatively affected by the same root cause deviate similarly from standard measurements. We compute the similarity of deviations from the non-anomalous distribution of measurements, and cluster anomalies based on this similarity. The results from case studies, conducted using RUBiS, an auction prototype modeled after eBay.com, are encouraging. Our clustering method output clusters that were crucial in the search for root causes. Guided by the clustering results, we searched for components used exclusively by each cluster and successfully determined suspicious components, such as the Apache web server, Enterprise Beans, and methods in Enterprise Beans. The root causes we found included shortages of network connections, inadequate indices in the database, and problems with SQL statements.
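
The core of the method can be sketched as follows; the metrics, thresholds, and use of hierarchical clustering here are illustrative assumptions rather than the paper's exact measurement set:

```python
# Standardize each anomalous measurement against the non-anomalous
# distribution, then cluster anomalies whose deviation profiles point
# in the same direction (cosine distance ignores magnitude).
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# Rows: samples of per-component metrics under normal operation.
normal = np.random.default_rng(0).normal(loc=[10, 5, 2], scale=1.0,
                                         size=(100, 3))
mu, sigma = normal.mean(axis=0), normal.std(axis=0)

# Four anomalies: two deviate on metric 0, two on metric 2.
anomalies = np.array([[18, 5, 2], [17, 5, 2], [10, 5, 9], [10, 5, 8]], float)
dev = (anomalies - mu) / sigma  # deviation from the normal distribution

Z = linkage(dev, method="average", metric="cosine")
print(fcluster(Z, t=0.1, criterion="distance"))  # e.g. [1 1 2 2]
```
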
Authors
Yuuichi Nakano Mitsuo Iwadate Hideaki Umeyama Y-h. Taguchi
Publisher
Information and Media Technologies Editorial Board
Journal
Information and Media Technologies (ISSN:18810896)
Volume/Issue/Pages/Date
vol.9, no.1, pp.141-154, 2014 (Released:2014-03-15)
Number of references
15

The Type III secretion system (T3SS) effector protein is a part of bacterial secretion systems. T3SS exists in pathogenic and symbiotic bacteria, and how the T3SS effector proteins in these two classes differ from each other is an interesting question. In this paper, we successfully discriminated T3SS effector proteins between plant-pathogenic, animal-pathogenic, and plant-symbiotic bacteria based on feature vectors inferred computationally by Yahara et al. from amino acid sequences alone. This suggests that these three classes of bacteria employ distinct T3SS effector proteins. We also hypothesized that the feature vector proposed by Yahara et al. represents protein structure, possibly the protein folds defined in the Structural Classification of Proteins (SCOP) database.
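
As a hedged illustration of this kind of sequence-based discrimination (using plain amino acid composition as a stand-in for the feature vectors of Yahara et al., and toy sequences in place of real effector proteins):

```python
import numpy as np
from sklearn.svm import SVC

AA = "ACDEFGHIKLMNPQRSTVWY"

def composition(seq):
    """20-dimensional amino acid composition of a sequence."""
    return np.array([seq.count(a) / len(seq) for a in AA])

# Toy sequences standing in for effectors of two bacterial classes.
seqs = ["MKKLLAVAVA", "MKRLLAAVVA", "MDEEDSTNEE", "MDEESSTDEE"]
labels = [0, 0, 1, 1]  # 0 = pathogenic, 1 = symbiotic (toy labels)

X = np.vstack([composition(s) for s in seqs])
clf = SVC(kernel="linear").fit(X, labels)
print(clf.predict(composition("MKKLLAVVAA").reshape(1, -1)))  # -> [0]
```
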
Authors
Satoshi Yoshida Takashi Uemura Takuya Kida Tatsuya Asai Seishi Okamoto
Publisher
Information and Media Technologies Editorial Board
Journal
Information and Media Technologies (ISSN:18810896)
Volume/Issue/Pages/Date
vol.7, no.1, pp.129-140, 2012 (Released:2012-03-15)
Number of references
32

We address the problem of improving variable-length-to-fixed-length codes (VF codes). The VF codes we deal with here are encoding schemes that parse an input text into variable-length substrings and then assign a fixed-length codeword to each parsed substring. VF codes have favourable properties for fast decoding and fast compressed pattern matching, but their compression ratios are worse than those of the latest compression methods. The compression ratio of a VF code depends on the parse tree used as a dictionary. To gain a better compression ratio, we present several improved methods for constructing parse trees. All of them are heuristic solutions, since constructing the optimal parse tree is intractable. We compared our methods with previous VF codes and showed experimentally that their compression ratios reach the level of state-of-the-art compression methods.
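
For concreteness, here is a sketch of a classic VF code, Tunstall coding, whose parse tree is grown by repeatedly expanding the most probable leaf; the paper's improved construction methods refine this kind of tree, and the symbol probabilities below are illustrative:

```python
import heapq

def tunstall_dictionary(probs, n_codewords):
    """probs: symbol -> probability. Returns the parse strings (leaves)."""
    heap = [(-p, s) for s, p in probs.items()]
    heapq.heapify(heap)
    # Expanding one leaf removes it and adds |alphabet| children.
    while len(heap) + len(probs) - 1 <= n_codewords:
        negp, s = heapq.heappop(heap)  # most probable leaf
        for sym, p in probs.items():
            heapq.heappush(heap, (negp * p, s + sym))
    return sorted(s for _, s in heap)

def encode(text, strings):
    """Greedy longest-match parse; codewords are dictionary indices."""
    index = {s: i for i, s in enumerate(strings)}
    longest = max(map(len, strings))
    out, i = [], 0
    while i < len(text):
        for l in range(min(longest, len(text) - i), 0, -1):
            if text[i:i + l] in index:
                out.append(index[text[i:i + l]])
                i += l
                break
        else:
            raise ValueError("text tail not in dictionary")
    return out

strings = tunstall_dictionary({"a": 0.7, "b": 0.3}, n_codewords=4)
print(strings)                  # ['aaa', 'aab', 'ab', 'b']
print(encode("aabab", strings)) # [1, 2] = 'aab' + 'ab'
```
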
Authors
Yasuto Arakaki Hayaru Shouno Kazuyuki Takahashi Takashi Morie
Publisher
Information and Media Technologies Editorial Board
Journal
Information and Media Technologies (ISSN:18810896)
Volume/Issue/Pages/Date
vol.7, no.4, pp.1480-1488, 2012 (Released:2012-12-15)
Number of references
15

For the detection of generic objects in image processing, histograms of oriented gradients (HOG) have been discussed in recent years. The performance of classification systems using HOG is good. However, the performance of the HOG descriptor is influenced by the size of the detected object. In order to overcome this problem, we introduce a kind of hierarchy inspired by convolutional networks, a model of the visual processing system in the brain. The hierarchical HOG (H-HOG) integrates several scales of HOG descriptors in its architecture and represents the input image as a combination of more complex features rather than of raw orientation gradients. We investigate the performance of H-HOG and compare it with conventional HOG. We obtain better performance than conventional HOG; in particular, the representation dimension is much smaller without reducing detection performance.
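
A minimal sketch of the multi-scale idea (cell sizes, bin counts, and normalization are illustrative assumptions, not the paper's architecture):

```python
# Compute orientation histograms at several cell sizes and concatenate
# them, so that coarse and fine structure are both represented.
import numpy as np

def orientation_histograms(img, cell, n_bins=8):
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ori = ((np.arctan2(gy, gx) + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    h, w = img.shape
    feats = []
    for y in range(0, h - cell + 1, cell):
        for x in range(0, w - cell + 1, cell):
            hist = np.zeros(n_bins)
            np.add.at(hist, ori[y:y + cell, x:x + cell].ravel(),
                      mag[y:y + cell, x:x + cell].ravel())
            feats.append(hist / (hist.sum() + 1e-9))  # normalized cell
    return np.concatenate(feats)

def hierarchical_hog(img, cells=(4, 8, 16)):
    """Concatenate orientation histograms computed at several scales."""
    return np.concatenate([orientation_histograms(img, c) for c in cells])

img = np.random.rand(32, 32)  # stand-in for a detection window
print(hierarchical_hog(img).shape)  # (64 + 16 + 4) * 8 = 672 dimensions
```
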