Semantic-aware neural style transfer

Joo Hyun Park, Song Park, Hyunjung Shim

Research output: Contribution to journal › Article

Abstract

This study proposes a semantic-aware style transfer method that resolves the semantic mismatch problems of existing algorithms. The central idea is that explicit semantic matching improves the quality of artistic style transfer. Both the target photograph and the source painting are partitioned into several semantic regions, and each region of the target is then associated with one of the regions in the source according to its semantic interpretation. Given a pair of target and source regions, style is learned from the source region whereas content is learned from the target region; by integrating the style and content components, we generate the stylized output. Unlike previous approaches, we obtain the best semantic match between regions using word embeddings, which guarantees that a semantic match is always established between the target and source. Moreover, partitioning a painting with existing segmentation algorithms is unreliable because of the statistical gap between real photographs and paintings. To bridge this gap, we apply a domain adaptation technique to the source painting when extracting its semantic regions. We evaluate the effectiveness of the proposed algorithm through a thorough experimental analysis and comparison, and a user study confirms that semantic information considerably influences the perceived quality of style transfer.
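
To make the region-matching step concrete, the sketch below illustrates one way the word-embedding matching described in the abstract could work: every segmented region of the target photograph is assigned to the source-painting region whose label is closest in embedding space. This is a minimal illustration under stated assumptions, not the authors' implementation; the toy embeddings dictionary, the example label sets, and the cosine-similarity criterion are assumptions made for the example (in practice the label embeddings would come from a pretrained model such as word2vec or GloVe).

# Minimal sketch (not the authors' code): match each semantic region of the
# target photograph to the source-painting region whose label is closest in
# a word-embedding space.
import numpy as np

# Hypothetical per-label embeddings; dimensions and values are illustrative only.
embeddings = {
    "sky":      np.array([0.9, 0.1, 0.0]),
    "cloud":    np.array([0.8, 0.2, 0.1]),
    "tree":     np.array([0.1, 0.9, 0.2]),
    "forest":   np.array([0.2, 0.8, 0.3]),
    "building": np.array([0.1, 0.2, 0.9]),
}

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_regions(target_labels, source_labels):
    # Associate every target region with its semantically closest source region.
    matches = {}
    for t in target_labels:
        best = max(source_labels, key=lambda s: cosine(embeddings[t], embeddings[s]))
        matches[t] = best
    return matches

# Example: region labels segmented from the photograph vs. from the painting.
print(match_regions(["sky", "tree", "building"], ["cloud", "forest", "building"]))
# -> {'sky': 'cloud', 'tree': 'forest', 'building': 'building'}

Once each target region has a matched source region, the per-region pairs feed the style and content terms described above: style statistics are taken from the source region and content features from the target region.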

Original language: English
Pages (from-to): 13-23
Number of pages: 11
Journal: Image and Vision Computing
Volume: 87
DOIs: 10.1016/j.imavis.2019.04.001
Publication status: Published - 2019 Jul 1

Fingerprint

  • Semantics
  • Painting

All Science Journal Classification (ASJC) codes

  • Signal Processing
  • Computer Vision and Pattern Recognition

Cite this

Park, Joo Hyun; Park, Song; Shim, Hyunjung. / Semantic-aware neural style transfer. In: Image and Vision Computing. 2019; Vol. 87. pp. 13-23.
@article{eeee2c9e5e7a4c9896f680eeb106c425,
title = "Semantic-aware neural style transfer",
abstract = "This study proposes a semantic-aware style transfer method for resolving semantic mismatch problems in existing algorithms. As the primary focus of this study, the consideration of semantic matching is expected to improve the quality of artistic style transfer. Here, each image is partitioned into several semantic regions for both a target photograph and a source painting. All partitioned regions of the target are then associated with one of the partitioned regions in the source according to their semantic interpretation. Given a pair of target and source regions, style is learned from the source region whereas content is learned from the target region. By integrating both the style and content components, we can successfully generate a stylized output. Unlike previous approaches, we obtain the best semantic match between regions using word embeddings. Thus, we guarantee that semantic matching is always established between the target and source. Moreover, it is unreliable to partition a painting using existing algorithms because of statistical gaps between the real photographs and paintings. To bridge such gaps, we apply a domain adaptation technique on the source painting to extract its semantic regions. We evaluated the effectiveness of the proposed algorithm based on a thorough experimental analysis and comparison. Through a user study, it is confirmed that semantic information considerably influences the quality assessment of style transfer.",
author = "Park, {Joo Hyun} and Song Park and Hyunjung Shim",
year = "2019",
month = "7",
day = "1",
doi = "10.1016/j.imavis.2019.04.001",
language = "English",
volume = "87",
pages = "13--23",
journal = "Image and Vision Computing",
issn = "0262-8856",
publisher = "Elsevier Limited",

}

TY - JOUR

T1 - Semantic-aware neural style transfer

AU - Park, Joo Hyun

AU - Park, Song

AU - Shim, Hyunjung

PY - 2019/7/1

Y1 - 2019/7/1

UR - http://www.scopus.com/inward/record.url?scp=85065538085&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85065538085&partnerID=8YFLogxK

U2 - 10.1016/j.imavis.2019.04.001

DO - 10.1016/j.imavis.2019.04.001

M3 - Article

AN - SCOPUS:85065538085

VL - 87

SP - 13

EP - 23

JO - Image and Vision Computing

JF - Image and Vision Computing

SN - 0262-8856

ER -