Abstract
This study proposes a vision-based method for flood depth estimation from flooded-vehicle images taken at ground level. The method comprises three main processes: segmentation of vehicle objects, cross-domain image retrieval, and flood depth estimation. First, a Mask region-based convolutional neural network (Mask R-CNN) detects flooded vehicles in flooding images. Second, on the basis of feature maps from VGGNets, dynamic feature space selection is employed to retrieve the three-dimensional (3D) rendered car image most similar to the flooded object, using cosine distance as the similarity metric. Finally, the flood depth is calculated by comparing the flooded object with the matched 3D rendered image. The method is validated on 500 flooding images; the feature maps from pooling layer 4 of VGG19, with a cosine distance below 0.55, produce an average error of 7.51 pixels, corresponding to 9.40% of the tire height.
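The abstract's retrieval step (pooling layer 4 of VGG19, cosine distance below 0.55) can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes PyTorch/torchvision with an ImageNet-pretrained VGG19, standard ImageNet preprocessing, and illustrative helper names (`pool4_features`, `retrieve_best_render`) and file paths; the paper's Mask R-CNN segmentation, 3D rendered image set, and depth-estimation geometry are not shown.

```python
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

# ImageNet-pretrained VGG19, truncated after pooling layer 4
# (index 27 of vgg19.features in torchvision's layer ordering).
vgg19 = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).eval()
pool4_extractor = torch.nn.Sequential(*list(vgg19.features.children())[:28])

# Standard ImageNet preprocessing (assumed; the paper's exact preprocessing may differ).
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def pool4_features(image_path: str) -> torch.Tensor:
    """Flattened pool4 feature map of an image (flooded-vehicle crop or 3D rendered car view)."""
    img = Image.open(image_path).convert("RGB")
    x = preprocess(img).unsqueeze(0)          # shape: (1, 3, 224, 224)
    with torch.no_grad():
        fmap = pool4_extractor(x)             # shape: (1, 512, 14, 14)
    return fmap.flatten(start_dim=1)          # shape: (1, 512 * 14 * 14)

def retrieve_best_render(flooded_crop: str, render_paths: list[str], max_dist: float = 0.55):
    """Return the rendered-car image with the smallest cosine distance to the flooded crop,
    or None if no candidate falls below the 0.55 threshold reported in the abstract."""
    query = pool4_features(flooded_crop)
    best_path, best_dist = None, float("inf")
    for path in render_paths:
        dist = 1.0 - F.cosine_similarity(query, pool4_features(path)).item()
        if dist < best_dist:
            best_path, best_dist = path, dist
    return (best_path, best_dist) if best_dist < max_dist else (None, best_dist)
```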
Original language | English |
---|---|
Article number | 04020072 |
Journal | Journal of Computing in Civil Engineering |
Volume | 35 |
Issue number | 2 |
DOIs | |
Publication status | Published - 2021 Mar 1 |
Bibliographical note
Funding Information: This work was supported by National Research Foundation of Korea (NRF) grants from the Ministry of Science and ICT (Grant No. 2018R1A2B2008600) and the Ministry of Education (Grant No. 2018R1A6A1A08025348).
Publisher Copyright:
© 2020 American Society of Civil Engineers.
All Science Journal Classification (ASJC) codes
- Civil and Structural Engineering
- Computer Science Applications