Authors
Yuta Sugimoto, Atusi Maeda
Publisher
Information Processing Society of Japan
Journal
Journal of Information Processing (ISSN:18826652)
Volume/Pages/Date
vol.26, pp.335-344, 2018 (Released:2018-03-15)
References
21

Packrat parsing is a recursive descent parsing method with backtracking and memoization. Parsers based on this method require no separate lexical analyzers, and backtracking enables those parsers to handle a wide range of complex syntactic constructs. Memoization is used to prevent exponential growth of running time, resulting in linear time complexity at the cost of linear space consumption. In this study, we propose CPEG - a library that can be used to write parsers using Packrat parsing in the C language. This library enables programmers to describe syntactic rules in an internal domain-specific language (DSL) which, unlike parser combinators, does not require runtime data structures to represent syntax. Syntax rules are expressed by plain C macros. The runtime routine does not dynamically allocate memory regions for memoization. Instead, statically allocated arrays are used as memoization cache tables. Therefore, programmers can implement practical parsers with CPEG that do not depend on any specific memory management features and require only a fixed amount of memory (apart from the input string). To enhance usability, a translator to CPEG from an external DSL is provided, as well as a tuning mechanism to control memoization parameters. Parsing times for JavaScript Object Notation and Java source files are given in comparison with other systems. The experimental results indicate that the performance of CPEG is competitive with that of other libraries.
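The memoization that gives packrat parsing its linear time bound can be sketched as follows. This is a minimal Python illustration of the general technique, not CPEG's C-macro API; the grammar and function names are illustrative, and where this sketch uses a dictionary, CPEG uses statically allocated arrays.

```python
def packrat_parse(rules, rule, text):
    """Parse text with a packrat parser: each (rule, pos) is evaluated once."""
    memo = {}  # (rule, pos) -> result; CPEG uses fixed-size arrays instead

    def parse(rule, pos):
        key = (rule, pos)
        if key not in memo:          # memoize: the source of linear time
            memo[key] = rules[rule](parse, text, pos)
        return memo[key]

    return parse(rule, 0)

# Toy grammar: sum <- num ('+' num)* ; num <- [0-9]+
# Each rule returns the new position on success, or -1 on failure.
def num(parse, text, pos):
    end = pos
    while end < len(text) and text[end].isdigit():
        end += 1
    return end if end > pos else -1

def sum_(parse, text, pos):
    pos = parse("num", pos)
    while pos != -1 and pos < len(text) and text[pos] == "+":
        nxt = parse("num", pos + 1)
        if nxt == -1:
            break                    # backtrack: keep the last good position
        pos = nxt
    return pos

rules = {"num": num, "sum": sum_}
```

A call such as `packrat_parse(rules, "sum", "1+23+4")` consumes the whole input and returns position 6; on failure the parser returns -1 without exponential re-parsing.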
Authors
Ayako Akiyama Hasegawa, Takuya Watanabe, Eitaro Shioji, Mitsuaki Akiyama, Tatsuya Mori
Publisher
Information Processing Society of Japan
Journal
Journal of Information Processing (ISSN:18826652)
Volume/Pages/Date
vol.28, pp.1030-1046, 2020 (Released:2020-12-15)
References
42

Online service providers exert tremendous effort to protect users' accounts against sensitive data breaches. Although threats from complete outsiders, such as account hijacking for monetization, still occur, recent studies have shed light on threats to privacy from insiders. In this study, we focus on these latter threats. Specifically, we present the first comprehensive study of an insider attack that identifies the existence of a target's account by using the target's email address and the insecure login-related messages that are displayed. Such a threat may violate the privacy of intimates or acquaintances because the kinds of service accounts a user has imply his/her personal preferences or situation. We conducted surveys regarding user expectations and behaviors on online services and an extensive measurement study of login-related messages on online services that are considered sensitive. We found that over 80% of participants answered that they have sensitive services and that almost all services were vulnerable to our attack. Moreover, about half of the participants who have sensitive services were insecurely registered on them and thus could be potential victims. Finally, we recommend ways for online service providers to improve login-related messages and for users to take appropriate defensive actions. We also report our responsible disclosure process.
Authors
Keita Miura, Shota Tokunaga, Yuki Horita, Yasuhiro Oda, Takuya Azumi
Publisher
Information Processing Society of Japan
Journal
Journal of Information Processing (ISSN:18826652)
Volume/Pages/Date
vol.29, pp.227-235, 2021 (Released:2021-03-15)
References
17
Citations
8

In recent years, autonomous vehicles have been developed worldwide. ROS, a middleware well suited to the development of self-driving systems, is rarely used in the automotive industry; instead, MATLAB/Simulink, development software suited to model-based development, is usually utilized. To integrate a program created with MATLAB/Simulink into a ROS-based self-driving system, it is necessary to convert the program into C++ code and adapt it to the network of the ROS-based self-driving system, which makes development inefficient. We used Autoware as the ROS-based self-driving system and provide a framework that realizes co-simulation between Autoware and MATLAB/Simulink (CoSAM). CoSAM enables developers to integrate a program created with MATLAB/Simulink into the ROS-based self-driving system without converting it into C++ code. Therefore, CoSAM makes development of the self-driving system easy and efficient. Furthermore, our evaluations of the proposed framework demonstrated its practical potential.
Authors
Yasuichi Nakayama, Yasushi Kuno, Hiroyasu Kakuda
Publisher
Information Processing Society of Japan
Journal
Journal of Information Processing (ISSN:18826652)
Volume/Pages/Date
vol.28, pp.733-743, 2020 (Released:2020-11-15)
References
23
Citations
1

There is a great need to evaluate and/or test programming performance. For this purpose, two schemes have been used. Constructed response (CR) tests let the examinee write programs on a blank sheet (or with a computer keyboard). This scheme can evaluate programming performance, but it is difficult to apply at a large scale because skilled human graders are required (automatic evaluation has been attempted but is not widely used yet). Multiple choice (MC) tests let the examinee choose the correct answer from a list (often corresponding to a "hidden" portion of a complete program). This scheme can be used at a large scale with computer-based testing or mark-sense cards. However, many teachers and researchers are suspicious of it because a good score does not necessarily indicate the ability to write programs from scratch. We propose a third method, split-paper (SP) testing. Our scheme splits a correct program into its individual lines, shuffles the lines, adds "wrong answer" lines, and prepends choice symbols to them. The examinee answers with the list of choice symbols corresponding to the correct program, which can easily be graded automatically by computer. In particular, we propose the use of edit distance (Levenshtein distance) in the scoring scheme, which seems to have a natural affinity with the SP scheme. The research question is whether SP tests scored with an edit-distance-based scheme measure programming performance as CR tests do. We therefore conducted an experiment in college programming classes with 60 students to compare SP tests against CR tests. SP and CR test scores were correlated across multiple settings, and the results were statistically significant. We might therefore conclude that SP tests with automatic scoring based on edit distance are useful tools for evaluating programming performance.
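The edit-distance scoring described above can be sketched as follows: an answer and the correct program are both sequences of choice symbols, and the score falls with the Levenshtein distance between them. The `sp_score` penalty rule here is a hypothetical illustration, not the paper's exact formula.

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two sequences."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (x != y)))  # substitution
        prev = cur
    return prev[-1]

def sp_score(answer, correct, full_marks=100):
    """Hypothetical scoring rule: full marks minus a penalty per edit."""
    d = levenshtein(answer, correct)
    return max(0, full_marks - d * full_marks // len(correct))
```

For a four-line program, an answer with one misplaced symbol would score 75 under this rule; a perfect answer scores 100.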
Authors
Geeta Yadav, Kolin Paul, Alaa Allakany, Koji Okamura
Publisher
Information Processing Society of Japan
Journal
Journal of Information Processing (ISSN:18826652)
Volume/Pages/Date
vol.28, pp.633-642, 2020 (Released:2020-09-15)
References
19
Citations
8

The lack of built-in security protocols in cheap, resource-constrained Internet of Things (IoT) devices gives attackers the privilege to exploit these devices' vulnerabilities and break into the target device. Attacks like Mirai, WannaCry, and Stuxnet show that a cyber-attack often comprises a series of exploitations of victim devices' vulnerabilities. Timely detection and patching of these vulnerabilities can avert future attacks. Penetration testing helps to identify such vulnerabilities. However, traditional penetration testing methods are not end-to-end and fail to detect multi-host, multi-stage attacks. Even if an individual system is secure under some threat model, an attacker can use a kill chain to reach the target system. In this paper, we introduce IoT-PEN, a first-of-its-kind penetration testing framework for IoT. The framework follows a client-server architecture wherein all IoT nodes act as clients and "a system with resources" acts as the server. IoT-PEN is an end-to-end, scalable, flexible, and automatic penetration testing framework for discovering all possible ways an attacker can breach the target system, using target graphs. Finally, the paper recommends a patch prioritization order by identifying critical nodes and critical paths for efficient patching. Our analysis shows that IoT-PEN scales easily to large and complex IoT networks.
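The target-graph idea can be illustrated with a toy path enumeration: edges mean "compromising u enables exploiting v," every attacker path from an entry node to the target is a possible kill chain, and a node on every such path is critical to patch. The graph, node names, and the critical-node rule below are hypothetical illustrations, not IoT-PEN's actual model.

```python
def attack_paths(graph, entry, target, path=None):
    """Enumerate all simple paths from entry to target (candidate kill chains)."""
    path = (path or []) + [entry]
    if entry == target:
        yield path
        return
    for nxt in graph.get(entry, []):
        if nxt not in path:              # avoid revisiting compromised nodes
            yield from attack_paths(graph, nxt, target, path)

# Hypothetical IoT network: edges are exploitable reachability.
graph = {
    "internet": ["camera", "router"],
    "camera": ["hub"],
    "router": ["hub"],
    "hub": ["server"],
}
paths = list(attack_paths(graph, "internet", "server"))
# A node appearing in every path is critical: patching it breaks all chains.
critical = set.intersection(*(set(p) for p in paths)) - {"internet", "server"}
```

In this toy network both kill chains pass through the hub, so patching the hub alone would cut off the target.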
Authors
Motoki Amagasaki, Hiroki Oyama, Yuichiro Fujishiro, Masahiro Iida, Hiroaki Yasuda, Hiroto Ito
Publisher
Information Processing Society of Japan
Journal
IPSJ Transactions on System LSI Design Methodology (ISSN:18826687)
Volume/Pages/Date
vol.13, pp.69-71, 2020 (Released:2020-08-13)
References
7

Graph neural networks are a type of deep-learning model for classification over graph domains. To infer arithmetic functions in a netlist, we applied relational graph convolutional networks (R-GCN), which can directly treat the relations between nodes and edges. However, because the original R-GCN supports only node-level labeling, it cannot be used directly to infer a set of functions in a netlist. In this paper, by considering the distribution of labels over the nodes, we present an R-GCN-based function inference method and a data augmentation technique for netlists containing multiple functions. In our experiments, 91.4% accuracy was obtained from 1,000 training samples, demonstrating that R-GCN-based methods can be effective for graphs with multiple functions.
Authors
Kosuke Nakamura, Takashi Nose, Yuya Chiba, Akinori Ito
Publisher
Information Processing Society of Japan
Journal
Journal of Information Processing (ISSN:18826652)
Volume/Pages/Date
vol.28, pp.248-257, 2020 (Released:2020-04-15)
References
33

In this paper, we deal with melody completion, a technique that smoothly completes partially masked melodies. Melody completion can help people compose or arrange pieces of music in several ways, such as editing existing melodies or connecting two other melodies. In recent years, various methods have been proposed for realizing high-quality completion via neural networks. In this research, we therefore examine a method of melody completion based on an image completion network. We represent melodies as images and train a completion network to complete those images. The completion network consists of convolution layers and is trained in the framework of generative adversarial networks. We also use chord progressions from musical pieces as conditions. The experimental results confirmed that the network can generate an original melody as a completion result and that the quality of the generated melodies is not significantly worse than that of a simple example-based melody completion method.
Authors
Christoph M. Wilk, Shigeki Sagayama
Publisher
Information Processing Society of Japan
Journal
Journal of Information Processing (ISSN:18826652)
Volume/Pages/Date
vol.27, pp.693-700, 2019 (Released:2019-11-15)
References
30
Citations
1

This paper proposes automatic music completion - the automatic generation of music pieces from any incomplete fragments of music - as a new class of music composition assistance tasks. This is a generalization of conventional music information problems such as automatic melody generation and harmonization. The goal is to turn a user's musical ideas into music pieces, allowing users to quickly explore new ideas and enabling inexperienced users to create their own music. This principle is applicable to a wide variety of music, and as a first step, we present a system that automatically fills in the missing parts of a four-part chorale, as well as the underlying harmony progression. The user can input any combination of melody fragments and freely constrain the harmony. Our system searches for harmonies and melodies that adhere to music-theoretical principles, which require extensive knowledge and practice for human composers to apply. Accounting for the mutual influence of melodic and harmonic development in music composition, the system is based on a joint model of harmony and voicing. The system was evaluated by analyzing the generated music with respect to music theory, in addition to a subjective evaluation experiment. Readers are invited to experiment with our system at http://160.16.202.131/music_completion.
Authors
Yuya Kono, Hideyuki Kawabata, Tetsuo Hironaka
Publisher
Information Processing Society of Japan
Journal
Journal of Information Processing (ISSN:18826652)
Volume/Pages/Date
vol.27, pp.87-94, 2019 (Released:2019-01-15)
References
12

The type class mechanism, which introduces ad-hoc polymorphism into programming languages, is commonly used to realize overloading. However, it forces programmers to write many type annotations in their programs to resolve ambiguous types. Haskell's type defaulting rules reduce the need for annotations. Furthermore, the widely used Glasgow Haskell Compiler (GHC) has an ExtendedDefaultRules (EDR) extension that facilitates interactive sessions by letting the programmer avoid problems that frequently occur when using values such as [] and Nothing. However, the GHC EDR extension sometimes replaces type variables with inappropriate types: for example, the term show . read, which is determined to have type String -> String under the GHC EDR extension, does not exhibit any meaningful behavior because the function read in the term is considered to have type String -> (). We present a flexible way of resolving ambiguous types that alleviates this problem. Our proposed method does not depend on default types defined elsewhere but rather assigns a type to a type variable only when the candidate is unique. It works with any type and any type class constraints. The type to be assigned is determined by scanning a list of existing type class instances that meet the type class constraints. This decision is lightweight, as it is based on operations over sets without algorithms that require backtracking. Our method is preferable to the GHC EDR extension because it avoids unnatural type variable assignments. In this paper, we describe the details of our method. We also discuss our prototype implementation, which is based on GHC plugins, and the feasibility of modifying GHC to incorporate our method.
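The unique-candidate rule can be sketched language-independently: intersect the sets of instances satisfying every class constraint on an ambiguous type variable, and default only when exactly one candidate survives. The instance table below is a small illustration, not GHC's actual instance database.

```python
# Hypothetical instance table: class name -> types with an instance.
instances = {
    "Show": {"Int", "Double", "String", "Unit"},
    "Read": {"Int", "Double", "String"},
    "Num":  {"Int", "Double"},
    "Integral": {"Int"},
}

def resolve(constraints):
    """Intersect instance sets; assign a type only if the candidate is unique."""
    candidates = set.intersection(*(instances[c] for c in constraints))
    return candidates.pop() if len(candidates) == 1 else None
```

Under this rule a variable constrained by (Show a, Read a), as in show . read, yields several candidates and so is not defaulted at all, whereas (Show a, Read a, Num a, Integral a) resolves to Int; the set operations involve no backtracking.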
Authors
Katsuhiro Ueno, Atsushi Ohori
Publisher
Information Processing Society of Japan
Journal
Journal of Information Processing (ISSN:18826652)
Volume/Pages/Date
vol.24, no.1, pp.141-151, 2016 (Released:2016-01-15)
References
11

This paper presents a scheme comprising a type system and a type-directed compilation method that enables users to integrate high-level key-value store (KVS) operations into statically typed polymorphic functional languages such as Standard ML. KVS has become an important building block for cloud applications because of its scalability. The proposed scheme will enhance the productivity and safety of programs using KVS by eliminating the need for low-level string manipulation. A prototype that demonstrates its feasibility has been implemented in the SML# language and clarifies issues that need to be resolved in further development towards better practical performance.
Authors
Iwata Satoshi, Kono Kenji
Publisher
Information Processing Society of Japan
Journal
IPSJ Online Transactions (ISSN:18826660)
Volume/Pages/Date
vol.5, pp.1-12, 2012

Performance anomalies in web applications are becoming a huge problem, and the increasing complexity of modern web applications has made it much more difficult to identify their root causes. The first step toward hunting for root causes is to narrow down the suspicious components that cause performance anomalies. However, even this is difficult when several performance anomalies occur simultaneously in a web application; we have to determine whether or not their root causes are the same. We propose a novel method, called "performance anomaly clustering," that helps narrow down suspicious components by clustering anomalies based on their root causes. If two anomalies are clustered together, they are affected by the same root cause; otherwise, they are affected by different root causes. The key insight behind our method is that anomalous measurements that are negatively affected by the same root cause deviate similarly from standard measurements. We compute the similarity of deviations from the non-anomalous distribution of measurements and cluster anomalies based on this similarity. The results of case studies conducted using RUBiS, an auction prototype modeled after eBay.com, are encouraging. Our clustering method produced clusters that were crucial in the search for root causes. Guided by the clustering results, we searched for components used exclusively by each cluster and successfully identified suspicious components, such as the Apache web server, Enterprise Beans, and methods in Enterprise Beans. The root causes we found included shortages of network connections, inadequate indices in the database, and incorrectly issued SQL statements.
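The deviation-similarity idea can be sketched as follows: each anomaly becomes a vector of z-score deviations of its measurements from the non-anomalous baseline, and anomalies whose deviation vectors point the same way are grouped as sharing a root cause. The cosine similarity, greedy grouping, and threshold here are assumptions for illustration, not the paper's exact procedure.

```python
from math import sqrt

def zscores(measurement, baseline_mean, baseline_std):
    """Deviation of each metric from the non-anomalous baseline."""
    return [(m - mu) / sd for m, mu, sd in
            zip(measurement, baseline_mean, baseline_std)]

def cosine(u, v):
    """Directional similarity of two deviation vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def cluster(deviations, threshold=0.9):
    """Greedy clustering: join an anomaly to the first similar cluster."""
    clusters = []
    for i, d in enumerate(deviations):
        for c in clusters:
            if cosine(d, deviations[c[0]]) >= threshold:
                c.append(i)
                break
        else:
            clusters.append([i])
    return clusters
```

Two anomalies whose deviations both concentrate on the same metric land in one cluster, while an anomaly deviating on a different metric forms its own, pointing the search at different components.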
Authors
Wisnu Ananta Kusuma, Takashi Ishida, Yutaka Akiyama
Publisher
Information Processing Society of Japan
Journal
IPSJ Transactions on Bioinformatics (ISSN:18826679)
Volume/Pages/Date
vol.4, pp.21-33, 2011 (Released:2011-11-04)
References
27
Citations
2

De novo DNA sequence assembly is very important in genome sequence analysis. In this paper, we investigated two of the major approaches for de novo assembly of very short reads: overlap-layout-consensus (OLC) and the Eulerian path. Based on that investigation, we developed a new assembly technique that combines the OLC and Eulerian path methods in a hierarchical process: the contigs yielded by the two approaches are treated as reads and assembled again to yield longer contigs. We tested our approach using three real very-short-read datasets generated by an Illumina Genome Analyzer and four simulated very-short-read datasets containing sequencing errors modeled on Illumina's sequencing technology. Our combined approach yielded longer contigs than Edena (OLC) and Velvet (Eulerian path) at various coverage depths and was comparable to SOAPdenovo in terms of N50 size and maximum contig length. The assembly results were also validated by comparing the contigs produced by each assembler with their reference sequences from an NCBI database. The results show that our approach produces more accurate results than Velvet, Edena, or SOAPdenovo alone, indicating that it is a viable way to assemble very short reads from next-generation sequencers.
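The OLC-style merging at the heart of the hierarchical approach can be sketched with a greedy suffix-prefix merge: repeatedly join the pair of sequences with the longest overlap, so contigs from a first pass can themselves be fed back in as reads. This toy ignores sequencing errors and reverse complements, which real assemblers must handle.

```python
def overlap(a, b, min_len=3):
    """Length of the longest suffix of a that is a prefix of b."""
    for k in range(min(len(a), len(b)), min_len - 1, -1):
        if a[-k:] == b[:k]:
            return k
    return 0

def assemble(reads, min_len=3):
    """Greedy OLC: repeatedly merge the pair with the longest overlap."""
    reads = list(reads)
    while len(reads) > 1:
        k, a, b = max(((overlap(a, b, min_len), a, b)
                       for a in reads for b in reads if a != b),
                      key=lambda t: t[0])
        if k == 0:
            break                      # no overlaps left: keep separate contigs
        reads.remove(a)
        reads.remove(b)
        reads.append(a + b[k:])        # merge into a longer contig
    return reads
```

Three overlapping reads collapse into one contig; running the function again on contigs from different assemblers mirrors the hierarchical second pass.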
Authors
Christian Damsgaard Jensen, Povilas Pilkauskas, Thomas Lefévre
Publisher
Information Processing Society of Japan
Journal
Journal of Information Processing (ISSN:18826652)
Volume/Pages/Date
vol.19, pp.345-363, 2011 (Released:2011-07-06)
References
24
Citations
1

Wikipedia is a web-based encyclopedia, written and edited collaboratively by Internet users. Wikipedia has an extremely open editorial policy that allows anybody to create or modify articles. This has promoted broad and detailed coverage of subjects but has also introduced problems relating to the quality of articles. The Wikipedia Recommender System (WRS) was developed to help users determine the credibility of articles based on feedback from other Wikipedia users. The WRS implements a collaborative filtering system with trust metrics, i.e., it provides ratings of articles that emphasize feedback from recommenders the user has agreed with in the past. This exposes the problem that most recommenders are not equally competent in all subject areas. The first WRS prototype did not evaluate the areas of expertise of recommenders, so the trust metric used in the article ratings reflected the average competence of recommenders across all subject areas. We have now developed a new version of the WRS, which evaluates the expertise of recommenders within different subject areas. To do this, we need a way to classify the subject area of every article in Wikipedia. In this paper, we examine different ways to classify the subject area of Wikipedia articles according to well-established knowledge classification schemes. We identify a number of requirements that a classification scheme must meet to be useful in the context of the WRS and present an evaluation of four existing knowledge classification schemes with respect to these requirements. This evaluation helped us identify a classification scheme that we have implemented in the current version of the Wikipedia Recommender System.
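The per-area trust weighting the new WRS adds can be sketched as follows: instead of one global trust score per recommender, each rating is weighted by how often the user has agreed with that recommender within the article's subject area. The agreement log, neutral prior, and weighting rule below are hypothetical illustrations of the general idea, not the WRS's actual metric.

```python
def area_trust(agreements, recommender, area):
    """Fraction of past agreements with this recommender in this area."""
    history = agreements.get((recommender, area), [])  # 1 = agreed, 0 = not
    return sum(history) / len(history) if history else 0.5  # neutral prior

def predict_rating(agreements, feedback, area):
    """Trust-weighted average of recommenders' ratings for one article."""
    weights = [area_trust(agreements, r, area) for r, _ in feedback]
    return (sum(w * rating for w, (_, rating) in zip(weights, feedback))
            / sum(weights))
```

A recommender the user has always agreed with in an article's subject area dominates the predicted rating, while one the user has mostly disagreed with there contributes little, even if that recommender is trusted in other areas.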
Authors
Yoshihiro Tsuboki, Tomoya Kawakami, Satoru Matsumoto, Tomoki Yoshihisa, Yuuichi Teranishi
Publisher
Information Processing Society of Japan
Journal
Journal of Information Processing (ISSN:18826652)
Volume/Pages/Date
vol.31, pp.758-765, 2023 (Released:2023-11-15)
References
18

Recent technological advances in Virtual Reality (VR) and Augmented Reality (AR) enable users to experience a high-quality virtual world. AR technology is attracting attention in various fields and is also used in entertainment settings such as museums. However, existing AR technology generally requires specialized sensors such as Light Detection And Ranging (LiDAR) sensors and feature points, which incur costs in terms of time and money. The authors have proposed a real-time background removal method and an AR system based on the estimated depth of the captured image to provide a virtual space experience using mobile devices such as smartphones. This paper describes an AR virtual space system that dynamically changes the replaced background based on motion information transmitted from the user's device.
Authors
Kazuki Nomoto, Takuya Watanabe, Eitaro Shioji, Mitsuaki Akiyama, Tatsuya Mori
Publisher
Information Processing Society of Japan
Journal
Journal of Information Processing (ISSN:18826652)
Volume/Pages/Date
vol.31, pp.620-642, 2023 (Released:2023-09-15)
References
80

Modern Web services provide advanced features by utilizing hardware resources on the user's device. Web browsers implement a user consent-based permission model to protect user privacy. In this study, we developed PERMIUM, a web browser analysis framework that automatically analyzes the behavior of the permission mechanisms implemented by various browsers. We systematically studied the behavior of the permission mechanisms of 22 major browser implementations running on five different operating systems and found fragmented implementations: implementations between browsers running on different operating systems are not always identical. We determined that these implementation inconsistencies could lead to privacy risks. Through a user study corresponding to the PERMIUM analyses, we identified gaps between browser permission implementations and user perceptions. Based on the implementation inconsistencies, we developed two proof-of-concept attacks and evaluated their feasibility. The first attack uses permission information to secretly track the user. The second attack aims to create a situation in which the user cannot correctly determine the origin of a permission request and mistakenly grants permission. Finally, we clarify the technical issues that must be standardized in privacy mechanisms and provide recommendations to OS/browser vendors to mitigate the threats identified in this study.
Authors
Tsutomu Matsumoto, Junichi Sakamoto, Manami Suzuki, Dai Watanabe, Naoki Yoshida
Publisher
Information Processing Society of Japan
Journal
Journal of Information Processing (ISSN:18826652)
Volume/Pages/Date
vol.31, pp.700-707, 2023 (Released:2023-09-15)
References
25

RAM encryption encrypts data in memory to prevent data leakage to an adversary who eavesdrops on the memory space of the target program. A well-known implementation is Intel SGX, whose RAM encryption mechanism is strictly hardware dependent. In contrast, Watanabe et al. proposed a fully software-based RAM encryption scheme (SBRES). In this paper, we developed tools for embedding the SBRES in C source code for practical application. We applied the tools to the source code of some cryptographic implementations in Mbed TLS and confirmed that the tools successfully embedded the SBRES functionality in those implementations.
Authors
Hayato Kimura, Keita Emura, Takanori Isobe, Ryoma Ito, Kazuto Ogawa, Toshihiro Ohigashi
Publisher
Information Processing Society of Japan
Journal
Journal of Information Processing (ISSN:18826652)
Volume/Pages/Date
vol.31, pp.550-561, 2023 (Released:2023-09-15)
References
40

Cryptanalysis in a black-box setting using deep learning is powerful because it does not require the attacker to have knowledge of the internal structure of the cryptographic algorithm. Thus, it is necessary to design symmetric-key ciphers that are secure against cryptanalysis using deep learning. Kimura et al. (AIoTS 2022) investigated deep learning-based attacks on the small PRESENT-[4] block cipher with limited component changes, identifying characteristics specific to these attacks that remain unaffected by linear/differential cryptanalysis. Finding such characteristics is important because exploiting them can make the target cipher vulnerable to deep learning-based attacks. This paper therefore extends their method to explore clues for designing symmetric-key cryptographic algorithms that are secure against deep learning-based attacks. We employ small PRESENT-[4] with two weak S-boxes, which are known to be weak against differential/linear attacks, to clarify the relationship between classical and deep learning-based attacks. As a result, we demonstrated that the success probability of our deep learning-based white-box analysis tends to be affected by the success probability of classical cryptanalysis methods. We also showed that our white-box analysis achieves the same attack capability as traditional methods even when the S-box of the target cipher is changed to a weak one.
Authors
Shohei Mori, Satoshi Hashiguchi, Fumihisa Shibata, Asako Kimura
Publisher
Information Processing Society of Japan
Journal
Journal of Information Processing (ISSN:18826652)
Volume/Pages/Date
vol.31, pp.392-403, 2023 (Released:2023-06-15)
References
27

Point & teleport (P&T) is an artificial locomotion technique that enables users to travel through unlimited space in virtual reality. While recent P&T techniques assign orientation control to an additional axis, these techniques suffer from increased control complexity and limited performance. Researchers have concluded that teleportation followed by a self-orientation adjustment, made by physically turning around, is preferable, and that P&T with orientation specification can be optional. However, P&T has not been tested under a seated condition, where orientation control may be advantageous. Therefore, in this paper, we reevaluate P&T with orientation specification while users are seated. Nonetheless, for consistent alignment with the results of preceding research, we evaluate accuracy while users are standing. Knowing that additional cognitive load may adversely affect performance, we present a new P&T design, points to teleport (P2T), with minimal complexity in mind (i.e., point twice sequentially to determine the future location and then the orientation, mimicking classic P&T, which requires users to turn around for orientation specification). Thus, we revisit P&T with orientation specification.