Various devices used indoors require information about the user's position and orientation, which enables them to offer customized and more relevant information. This study presents a new image-based indoor localization method using building information modeling (BIM) and convolutional neural networks (CNNs). The method constructs a dataset of rendered BIM images and searches it for the images most similar to an indoor photograph, thereby estimating the position and orientation from which the photograph was taken. A pretrained CNN (the VGG network) is used to extract image features for the similarity evaluation between the two different image types (BIM-rendered and real images). Experiments were performed in real buildings to verify the method, and the matching accuracy was 91.61% over a total of 143 images. The results also confirm that pooling layer 4 of the VGG network is best suited for feature selection.
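The retrieval step described above can be sketched as a nearest-neighbor search over feature vectors. The snippet below is a minimal illustration, assuming each image (BIM-rendered or photographed) has already been reduced to a flattened feature vector, such as the pool4 activations of a VGG network; the vector dimensionality and the cosine-similarity scoring here are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

def cosine_similarity(a, b):
    # Standard cosine similarity between two feature vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve_best_match(query_feat, dataset_feats):
    """Return the index of the BIM-rendered image whose features are most
    similar to the query photograph's features, plus the similarity score."""
    sims = [cosine_similarity(query_feat, f) for f in dataset_feats]
    best = int(np.argmax(sims))
    return best, sims[best]

# Toy vectors standing in for CNN features (512 dimensions is hypothetical).
rng = np.random.default_rng(0)
dataset = [rng.normal(size=512) for _ in range(5)]
# A "photograph" resembling BIM image 2, with small perturbation noise.
query = dataset[2] + rng.normal(scale=0.05, size=512)

idx, sim = retrieve_best_match(query, dataset)
```

In the paper's setting, the index returned would map back to a known camera pose in the BIM model, yielding the estimated indoor position and orientation.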
Bibliographical note
Funding Information:
This work was supported by a grant from the National Research Foundation of Korea funded by the Korean government (MSIP) (No. 2018R1A2B2008600).
© 2018 Elsevier Ltd
All Science Journal Classification (ASJC) codes
- Environmental Engineering
- Civil and Structural Engineering
- Geography, Planning and Development
- Building and Construction