MOSnet: Moving object segmentation with convolutional networks

J. Jeong, T. S. Yoon, Jin Bae Park

Research output: Contribution to journal › Article

2 Citations (Scopus)

Abstract

Identifying moving objects is considered a difficult problem owing to camera motion, motion blur, and appearance changes. To solve these problems, a moving object segmentation method based on a convolutional neural network is presented. The proposed network takes successive image pairs as input and predicts the per-pixel motion status. This process consists of three streams: one that learns appearance features, another that learns motion features, and a third that combines both features. Therefore, a joint model is learned for segmenting a moving object, because appearance and motion features complement each other. Experimental results, based on a challenging dataset, demonstrate that the proposed method has superior performance over state-of-the-art methods with respect to intersection-over-union and F-measure scores.
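The three-stream design described in the abstract (an appearance stream, a motion stream, and a fusion stream predicting per-pixel motion status from an image pair) can be sketched in a few lines. Everything below is an illustrative stand-in, not the authors' published architecture: the 1x1 per-pixel linear maps, the feature widths, and the frame-difference motion cue are all assumptions made for the sake of a self-contained example.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1x1(x, w):
    # x: (H, W, C_in), w: (C_in, C_out). A per-pixel linear map followed by
    # ReLU -- a toy stand-in for a learned convolutional layer.
    return np.maximum(np.einsum("hwc,cd->hwd", x, w), 0.0)

def three_stream_sketch(frame_t, frame_t1):
    """Predict a per-pixel 'moving' probability from a successive image pair."""
    c = frame_t.shape[-1]
    # Hypothetical random weights; a real network would learn these.
    w_app = rng.standard_normal((c, 8))
    w_mot = rng.standard_normal((c, 8))
    w_fuse = rng.standard_normal((16, 1))

    # Stream 1: appearance features from the current frame.
    f_app = conv1x1(frame_t, w_app)
    # Stream 2: motion features, here taken from the temporal difference
    # of the pair (an assumed motion cue, not the paper's).
    f_mot = conv1x1(frame_t1 - frame_t, w_mot)
    # Stream 3: fuse both feature maps and predict per-pixel motion status.
    fused = np.concatenate([f_app, f_mot], axis=-1)
    logits = np.einsum("hwc,cd->hwd", fused, w_fuse)[..., 0]
    return 1.0 / (1.0 + np.exp(-logits))  # sigmoid: P(moving) per pixel

frame_t = rng.random((32, 32, 3))
frame_t1 = rng.random((32, 32, 3))
mask = three_stream_sketch(frame_t, frame_t1)
print(mask.shape)  # one probability per pixel: (32, 32)
```

The point of the sketch is the structure: the appearance and motion streams see complementary inputs, and only the fusion stream maps their concatenated features to the segmentation output.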

Original language: English
Pages (from-to): 136-138
Number of pages: 3
Journal: Electronics Letters
Volume: 54
Issue number: 3
DOIs: 10.1049/el.2017.3982
Publication status: Published - 2018 Feb 8

Fingerprint

Pixels
Cameras
Neural networks

All Science Journal Classification (ASJC) codes

  • Electrical and Electronic Engineering

Cite this

Jeong, J. ; Yoon, T. S. ; Park, Jin Bae. / MOSnet : Moving object segmentation with convolutional networks. In: Electronics Letters. 2018 ; Vol. 54, No. 3. pp. 136-138.
@article{e9255616953d41d29e13222fd6b8db15,
title = "MOSnet: Moving object segmentation with convolutional networks",
abstract = "Identifying moving objects is considered a difficult problem owing to camera motion, motion blur and appearance changes. To solve these problems, a moving object segmentation method based on a convolutional neural network is presented. The proposed network takes successive image pairs as input, and predicts the per-pixel motion status. This process consists of three streams: One that learns appearance features, another that learns motion features and a third that combines both features. Therefore, a joint model is learned for segmenting a moving object, because appearance and motion features complement each other. Experimental results, based on a challenging dataset, demonstrate that the proposed method has superior performance over state-of-the-art methods, with respect to intersection over union and F-measure scores.",
author = "J. Jeong and Yoon, {T. S.} and Park, {Jin Bae}",
year = "2018",
month = "2",
day = "8",
doi = "10.1049/el.2017.3982",
language = "English",
volume = "54",
pages = "136--138",
journal = "Electronics Letters",
issn = "0013-5194",
publisher = "Institution of Engineering and Technology",
number = "3",

}

MOSnet : Moving object segmentation with convolutional networks. / Jeong, J.; Yoon, T. S.; Park, Jin Bae.

In: Electronics Letters, Vol. 54, No. 3, 08.02.2018, pp. 136-138.

Research output: Contribution to journal › Article

TY - JOUR

T1 - MOSnet

T2 - Moving object segmentation with convolutional networks

AU - Jeong, J.

AU - Yoon, T. S.

AU - Park, Jin Bae

PY - 2018/2/8

Y1 - 2018/2/8

N2 - Identifying moving objects is considered a difficult problem owing to camera motion, motion blur and appearance changes. To solve these problems, a moving object segmentation method based on a convolutional neural network is presented. The proposed network takes successive image pairs as input, and predicts the per-pixel motion status. This process consists of three streams: One that learns appearance features, another that learns motion features and a third that combines both features. Therefore, a joint model is learned for segmenting a moving object, because appearance and motion features complement each other. Experimental results, based on a challenging dataset, demonstrate that the proposed method has superior performance over state-of-the-art methods, with respect to intersection over union and F-measure scores.

AB - Identifying moving objects is considered a difficult problem owing to camera motion, motion blur and appearance changes. To solve these problems, a moving object segmentation method based on a convolutional neural network is presented. The proposed network takes successive image pairs as input, and predicts the per-pixel motion status. This process consists of three streams: One that learns appearance features, another that learns motion features and a third that combines both features. Therefore, a joint model is learned for segmenting a moving object, because appearance and motion features complement each other. Experimental results, based on a challenging dataset, demonstrate that the proposed method has superior performance over state-of-the-art methods, with respect to intersection over union and F-measure scores.

UR - http://www.scopus.com/inward/record.url?scp=85041641839&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85041641839&partnerID=8YFLogxK

U2 - 10.1049/el.2017.3982

DO - 10.1049/el.2017.3982

M3 - Article

AN - SCOPUS:85041641839

VL - 54

SP - 136

EP - 138

JO - Electronics Letters

JF - Electronics Letters

SN - 0013-5194

IS - 3

ER -