Generating summary sentences using Adversarially Regularized Autoencoders with conditional context

Hyesoo Kong, Wooju Kim

Research output: Contribution to journal › Article

Abstract

Abstractive summarization is a challenging problem, especially abstractive summarization based on unsupervised learning, because it must generate whole, unique sentences. In the real world, companies use abstractive summarization to understand customer feedback. In many cases, this work is done by humans and is therefore expensive in terms of time and money, so there is an increasing demand for machine learning-based abstractive summarization systems. However, most previous abstractive summarization studies relied on supervised models. In this paper, we propose novel abstractive summarization methods that can be trained in an unsupervised manner. One of the proposed methods is based on the Adversarially Regularized Autoencoder (ARAE) model and newly introduces a way to generate an abstractive summary for each cluster of similar customer reviews. We further propose the Conditional Adversarially Regularized Autoencoder (CARAE) model, which is similar to the ARAE model but adds condition nodes so that additional information about the cluster can be used during summarization. We first performed summarization experiments on Korean and additionally performed experiments on English. In the experiments, we set up several comparison models and used ROUGE and BLEU to evaluate our proposed models' performance. Overall, our proposed models outperformed the comparison models, and the CARAE model performed better than the ARAE model.
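
The abstract describes the CARAE idea only at a high level: an ARAE-style autoencoder whose latent code is adversarially matched by a generator, with condition nodes carrying cluster information. The PyTorch sketch below is a minimal illustration of that conditioning pattern, assuming the cluster identity is concatenated to the latent code seen by the decoder, generator, and critic; the module names, layer sizes, and toy forward pass are illustrative assumptions, not the paper's actual implementation.

import torch
import torch.nn as nn

VOCAB, EMB, HID, Z, COND = 5000, 128, 256, 64, 16   # illustrative sizes, not from the paper

class Encoder(nn.Module):
    # Maps a review sentence (token ids) to a continuous latent code.
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.GRU(EMB, HID, batch_first=True)
    def forward(self, tokens):                        # tokens: (batch, seq_len)
        _, h = self.rnn(self.emb(tokens))
        return h.squeeze(0)                           # code: (batch, HID)

class Decoder(nn.Module):
    # Reconstructs tokens from the latent code concatenated with the condition.
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.GRU(EMB, HID + COND, batch_first=True)
        self.out = nn.Linear(HID + COND, VOCAB)
    def forward(self, code, cond, tokens):
        h0 = torch.cat([code, cond], dim=-1).unsqueeze(0)
        out, _ = self.rnn(self.emb(tokens), h0)
        return self.out(out)                          # (batch, seq_len, VOCAB) logits

class Generator(nn.Module):
    # Maps noise plus the condition to a "fake" latent code (adversarial branch).
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(Z + COND, HID), nn.ReLU(), nn.Linear(HID, HID))
    def forward(self, noise, cond):
        return self.net(torch.cat([noise, cond], dim=-1))

class Critic(nn.Module):
    # Scores whether a (code, condition) pair looks like a real encoding.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(HID + COND, HID), nn.ReLU(), nn.Linear(HID, 1))
    def forward(self, code, cond):
        return self.net(torch.cat([code, cond], dim=-1))

# One illustrative forward pass on toy data (4 "reviews" from 4 clusters).
enc, dec, gen, critic = Encoder(), Decoder(), Generator(), Critic()
tokens = torch.randint(0, VOCAB, (4, 12))
cond = nn.functional.one_hot(torch.arange(4), COND).float()   # cluster id as the condition vector
code = enc(tokens)
recon_logits = dec(code, cond, tokens)                        # autoencoder (reconstruction) branch
fake_code = gen(torch.randn(4, Z), cond)                      # generator branch
critic_gap = critic(code, cond).mean() - critic(fake_code, cond).mean()  # WGAN-style critic term (sketch only)

Concatenating a condition vector to every component that touches the latent code is a common way to condition adversarial autoencoders; the paper's "condition nodes" may be wired differently. For evaluation, the abstract reports ROUGE and BLEU; assuming the rouge-score and nltk packages, a scoring call might look like the following (the exact ROUGE/BLEU variants and tokenization used in the paper are not specified here).

from rouge_score import rouge_scorer
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = "the battery life is too short"
generated = "battery life is short"

rouge = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
print(rouge.score(reference, generated))                      # ROUGE-1 / ROUGE-L precision, recall, F1
print(sentence_bleu([reference.split()], generated.split(),
                    smoothing_function=SmoothingFunction().method1))  # smoothed sentence-level BLEU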

Original language: English
Pages (from-to): 1-11
Number of pages: 11
Journal: Expert Systems with Applications
Volume: 130
DOI: 10.1016/j.eswa.2019.04.014
Publication status: Published - 2019 Sep 15

Fingerprint

Unsupervised learning
Experiments
Learning systems
Feedback
Industry

All Science Journal Classification (ASJC) codes

  • Engineering (all)
  • Computer Science Applications
  • Artificial Intelligence

Cite this

@article{c9fc0c10479e44668aff0a246af91d07,
title = "Generating summary sentences using Adversarially Regularized Autoencoders with conditional context",
abstract = "Abstractive summarization is challenging problem, especially abstractive summarization based on unsupervised learning because it must generate whole, unique sentences. In the real world, companies use abstractive summarization to understand customer feedbacks. In many cases, this work is done by humans and so is expensive in terms of time and money. Therefore, there is an increasing demand for machine learning-based abstractive summarization systems. However, most previous abstractive summarization studies were of supervised models. In this paper, we proposed novel abstractive summarization methods that can be trained unsupervisedly. One of the proposed methods is based on Adversarially Regularized Autoencoder(ARAE) model, but abstractive summary generation method for each cluster of similar customers’ reviews, is newly proposed. We further proposed Conditional Adversarially Regularized Autoencoder(CARAE) model which is similar to the ARAE model but with the addition of condition nodes so that additional information about the cluster can be used during summarization. We first performed summary experiments based on Korean and additionally performed experiments on English. In the experiments, we set up some comparison models and used ROUGE and BLEU to evaluate our proposed models’ performance. Overall, our proposed models outperformed the comparison models and CARAE model performed better than the ARAE model.",
author = "Hyesoo Kong and Wooju Kim",
year = "2019",
month = "9",
day = "15",
doi = "10.1016/j.eswa.2019.04.014",
language = "English",
volume = "130",
pages = "1--11",
journal = "Expert Systems with Applications",
issn = "0957-4174",
publisher = "Elsevier Limited",

}
