Unmanned aerial vehicles (UAVs) can efficiently collect image data representing various situations at construction sites; however, analyzing these data manually to retrieve information useful for on-site management is time-consuming and costly. This paper proposes a methodology for generating time-spatial and visual context-based information from UAV-acquired data. The methodology uses image captioning to generate textual information on the position, status, movement, color, and quantity of construction resources from site images. Construction site images, the text generated from them, and UAV flight data containing latitude, longitude, date, and time of day are then systemized into a database. The proposed methodology was evaluated on data obtained by UAV at actual construction sites; it predicted textual information with a mean average precision of 43.52%, outperforming existing methods.
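The database step described in the abstract pairs each image caption with the corresponding flight metadata. A minimal sketch of such a record store, using SQLite and hypothetical field and table names (the paper does not specify its schema), could look like this:

```python
from dataclasses import dataclass, asdict
import sqlite3

@dataclass
class SiteRecord:
    """One UAV capture: flight metadata plus the generated caption."""
    latitude: float
    longitude: float
    date: str          # e.g. "2019-05-01"
    time_of_day: str   # e.g. "10:30"
    image_path: str
    caption: str       # text produced by the image-captioning model

def build_database(records, db_path=":memory:"):
    """Systemize records into a queryable SQLite table."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS site_records (
               latitude REAL, longitude REAL,
               date TEXT, time_of_day TEXT,
               image_path TEXT, caption TEXT)"""
    )
    conn.executemany(
        "INSERT INTO site_records VALUES (?, ?, ?, ?, ?, ?)",
        [tuple(asdict(r).values()) for r in records],
    )
    conn.commit()
    return conn
```

Such a table can then be queried by location, date, or caption keywords, which is what makes the captioned imagery usable for on-site management.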
Journal: Automation in Construction
Publication status: Published - 2020 Apr
Bibliographical note
Funding Information: This work was supported by the Graduate School of YONSEI University Research Scholarship Grants in 2019 and a National Research Foundation of Korea (NRF) grant funded by the Ministry of Science and ICT (No. 2018R1A2B2008600) and the Ministry of Education (No. 2018R1A6A1A08025348).
© 2020 Elsevier B.V.
All Science Journal Classification (ASJC) codes
- Control and Systems Engineering
- Civil and Structural Engineering
- Building and Construction