- Authors: 石井 友, 松岡 昌志, 牧 紀男, 堀江 啓, 田中 聡
- Publisher: 日本建築学会
- Journal: 日本建築学会構造系論文集 (ISSN:13404202)
- Volume, pages, publication date: no.751, pp.1391-1400, 2018-09
- Cited by: 9
When a disaster such as an earthquake occurs, buildings, including residential houses and public facilities, suffer damage. Surveying damaged buildings in the affected area is very important because such data inform decisions on disaster management and restoration plans. In a large-scale disaster, however, a detailed survey faces several problems: the number of buildings to be covered increases, manpower is insufficient, the burden on workers grows, and restoration takes time and is delayed. A quick and accurate method of investigating building damage is therefore needed.

In this study, we trained a CNN (convolutional neural network) on field and aerial photographs of the 1995 Kobe earthquake and verified, based on the learning curve and discrimination accuracy, whether building damage can be assessed with the CNN. The Nishinomiya Built Environment Database, which contains damage certificate data, aerial and field photographs, and their shooting points, was used for the analysis. In Nishinomiya City's damage certificate data, damaged buildings are classified into four classes: “severe,” “moderate,” “slight,” and “undamaged.” In the present study, however, the three classes moderate, slight, and undamaged were merged into a single class for simplicity, yielding a two-class classification problem, namely “severe” versus “others.”

First, when we created a data set from the damage certificate data and the aerial and field photographs and trained the CNN on it, over-fitting occurred and hindered normal learning. After applying data augmentation as a countermeasure, we obtained an estimation accuracy of approximately 63.6% for the aerial photographs and 73.6% for the field photographs. Because the drop in accuracy may be due to internal building damage that cannot be assessed from the exterior alone, and to images that capture damaged buildings other than the target building, we re-investigated and re-verified the damaged buildings based on the “images of damaged buildings evaluated by visual interpretation.” It then became clear that damaged buildings can be identified with an accuracy of 86.0% in the aerial photographs and 83.0% in the field photographs. Furthermore, in the field photographs, collapsed buildings can be distinguished with a high accuracy of 98.5%.

These results show that the condition of damaged buildings can be assessed by deep learning using field and aerial photographs taken in the affected area after an earthquake; however, the damage that can be identified with the highest accuracy is limited to photographs of collapsed buildings. In future research, we plan to correctly distinguish between “moderate” and “slight” damage.
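As a rough sketch of the workflow described above (a CNN trained for two-class “severe” vs. “others” classification, with data augmentation used as a countermeasure against over-fitting), the following Python/Keras example shows one way such a pipeline could be set up. The directory layout (`photos/severe`, `photos/others`), image size, network architecture, and hyperparameters are illustrative assumptions and do not reproduce the authors' actual model or data set.

```python
# Minimal sketch of a two-class ("severe" vs. "others") CNN classifier with
# data augmentation against over-fitting. All paths, sizes, and hyperparameters
# are assumptions for illustration, not the authors' configuration.
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (128, 128)   # assumed input resolution
BATCH_SIZE = 32

# Assumed directory layout: photos/{severe,others}/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "photos", validation_split=0.2, subset="training", seed=1,
    image_size=IMG_SIZE, batch_size=BATCH_SIZE, label_mode="binary")
val_ds = tf.keras.utils.image_dataset_from_directory(
    "photos", validation_split=0.2, subset="validation", seed=1,
    image_size=IMG_SIZE, batch_size=BATCH_SIZE, label_mode="binary")

# Data augmentation: random flips and rotations enlarge the effective
# training set and reduce over-fitting; active only during training.
augment = models.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),
])

model = models.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
    augment,
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # two-class output: severe / others
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
history = model.fit(train_ds, validation_data=val_ds, epochs=20)
```

The `history` object returned by `fit()` records training and validation accuracy per epoch, from which a learning curve of the kind examined in the paper could be plotted to check for over-fitting.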