Towards a meaningful 3D map using a 3D lidar and a camera

Jongmin Jeong, Tae Sung Yoon, Jin Bae Park

Research output: Contribution to journal › Article

2 Citations (Scopus)

Abstract

Semantic 3D maps are required for various applications, including robot navigation and surveying, and their importance has increased significantly. Most existing studies on semantic mapping are camera-based approaches that cannot operate in large-scale environments owing to their computational burden. Recently, combining a 3D lidar with a camera was introduced to address this problem, and a 3D lidar and a camera have also been utilized for semantic 3D mapping. In this study, we propose an algorithm that consists of two stages: semantic mapping and map refinement. In the semantic mapping stage, GPS and IMU measurements are integrated to estimate the odometry of the system, and the point clouds measured by a 3D lidar are then registered using this estimate. Furthermore, we use a recent CNN-based semantic segmentation network to obtain semantic information about the surrounding environment. To integrate the point clouds with this semantic information, we developed an incremental semantic labeling procedure that comprises coordinate alignment, error minimization, and semantic information fusion. Additionally, to improve the quality of the generated semantic map, map refinement is performed as a batch process; it enhances the spatial distribution of the labels and effectively removes traces produced by moving vehicles. Experiments on challenging sequences demonstrate that our algorithm outperforms state-of-the-art methods in terms of accuracy and intersection over union.
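The labeling step described in the abstract couples lidar geometry with image-space semantics. As a minimal sketch of that general technique (not the authors' implementation), the Python fragment below projects lidar points into a CNN segmentation image using an assumed pinhole model and a known lidar-to-camera extrinsic calibration, then fuses per-point labels across scans by majority vote. All function names, the calibration inputs, and the voting rule are illustrative assumptions.

```python
# Illustrative sketch of lidar-camera semantic label fusion. The pinhole
# projection, extrinsic matrix T_cam_lidar, and majority-vote fusion are
# assumptions for this example, not the paper's actual method.
import numpy as np

def project_points(points_lidar, T_cam_lidar, K):
    """Transform Nx3 lidar points into the camera frame and project them
    onto the image plane with a pinhole model (coordinate alignment)."""
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])  # Nx4
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]                          # Nx3
    in_front = pts_cam[:, 2] > 0.0          # keep points in front of the camera
    uvw = (K @ pts_cam[in_front].T).T
    uv = (uvw[:, :2] / uvw[:, 2:3]).astype(int)   # integer pixel coordinates
    return uv, in_front

def label_scan(points_lidar, seg_image, T_cam_lidar, K):
    """Look up a per-point semantic label from a CNN segmentation image
    (single-scan semantic information fusion)."""
    h, w = seg_image.shape
    uv, in_front = project_points(points_lidar, T_cam_lidar, K)
    labels = np.full(len(points_lidar), -1, dtype=int)    # -1 = unlabeled
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    idx = np.flatnonzero(in_front)[valid]
    labels[idx] = seg_image[uv[valid, 1], uv[valid, 0]]   # row = v, col = u
    return labels

def fuse_incremental(vote_hist, point_ids, labels, num_classes):
    """Accumulate label votes per map point across scans; the map label is
    the running argmax (a simple stand-in for the paper's error-minimization
    and fusion steps)."""
    for pid, lab in zip(point_ids, labels):
        if lab >= 0:
            vote_hist.setdefault(pid, np.zeros(num_classes, dtype=int))[lab] += 1
    return {pid: int(hist.argmax()) for pid, hist in vote_hist.items()}
```

In the paper itself, labeling is governed by the coordinate alignment and error-minimization steps named in the abstract; the majority vote here is only a simple stand-in for that fusion rule.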

Original language: English
Article number: 2571
Journal: Sensors (Switzerland)
Volume: 18
Issue number: 8
DOI: 10.3390/s18082571
Publication status: Published - 2018 Aug 6
Link to publication in Scopus: http://www.scopus.com/inward/record.url?scp=85051412544&partnerID=8YFLogxK
Link to the citations in Scopus: http://www.scopus.com/inward/citedby.url?scp=85051412544&partnerID=8YFLogxK

All Science Journal Classification (ASJC) codes

  • Analytical Chemistry
  • Atomic and Molecular Physics, and Optics
  • Biochemistry
  • Instrumentation
  • Electrical and Electronic Engineering

Cite this

Jeong, J., Yoon, T. S., & Park, J. B. (2018). Towards a meaningful 3D map using a 3D lidar and a camera. Sensors (Switzerland), 18(8), 2571. https://doi.org/10.3390/s18082571
@article{bff671fdfd3b481baffb0e69a8af13bf,
title = "Towards a meaningful 3D map using a 3D lidar and a camera",
author = "Jongmin Jeong and Yoon, {Tae Sung} and Park, {Jin Bae}",
year = "2018",
month = "8",
day = "6",
doi = "10.3390/s18082571",
language = "English",
volume = "18",
journal = "Sensors",
issn = "1424-3210",
publisher = "Multidisciplinary Digital Publishing Institute (MDPI)",
number = "8",
}
