Gradient Regularization with Multivariate Distribution of Previous Knowledge for Continual Learning

Tae Heon Kim, Hyung Jun Moon, Sung Bae Cho

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Continual learning is a learning setup for environments where data arrive sequentially and a model continually learns new tasks. However, the model forgets previously learned knowledge as it learns new classes. One approach keeps a few previous samples, but this causes other problems such as overfitting and class imbalance. In this paper, we propose a method that retrains a network with representations generated from an estimated multivariate Gaussian distribution. The representations are feature vectors from a CNN trained with gradient regularization to prevent distribution shift, allowing the stored means and covariances to produce realistic representations. The generated vectors cover every class seen so far, which helps prevent forgetting. Our 6-fold cross-validation experiments show that the proposed method outperforms existing continual learning methods by 1.14%p and 4.60%p on CIFAR10 and CIFAR100, respectively. Moreover, we visualize the generated vectors with t-SNE to confirm the validity of a multivariate Gaussian mixture for estimating the distribution of the data representations.
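The distribution-based replay idea in the abstract is straightforward to prototype. Below is a minimal NumPy sketch, not the authors' implementation: the names FEATURE_DIM, estimate_class_gaussians, and sample_replay are hypothetical, and the gradient-regularization step that keeps the feature distribution stable across tasks is omitted. It only illustrates storing a per-class mean and covariance of CNN feature vectors and sampling synthetic representations for every class seen so far.

```python
# Minimal sketch (assumed names, not the paper's code): per-class
# multivariate Gaussian replay of feature representations.
import numpy as np

FEATURE_DIM = 64  # assumed dimensionality of the CNN feature vectors

def estimate_class_gaussians(features, labels):
    """Store a mean vector and covariance matrix for each class."""
    stats = {}
    for c in np.unique(labels):
        x = features[labels == c]
        mu = x.mean(axis=0)
        # A small ridge keeps the covariance positive definite for sampling.
        cov = np.cov(x, rowvar=False) + 1e-4 * np.eye(FEATURE_DIM)
        stats[c] = (mu, cov)
    return stats

def sample_replay(stats, n_per_class, rng):
    """Generate synthetic feature vectors for every previously seen class."""
    xs, ys = [], []
    for c, (mu, cov) in stats.items():
        xs.append(rng.multivariate_normal(mu, cov, size=n_per_class))
        ys.append(np.full(n_per_class, c))
    return np.concatenate(xs), np.concatenate(ys)

# Usage: mix the generated old-class vectors with new-task features,
# then retrain the classifier head on the combined set.
rng = np.random.default_rng(0)
old_feats = rng.normal(size=(200, FEATURE_DIM))   # stand-in CNN features
old_labels = rng.integers(0, 5, size=200)         # 5 previously seen classes
stats = estimate_class_gaussians(old_feats, old_labels)
replay_x, replay_y = sample_replay(stats, n_per_class=50, rng=rng)
print(replay_x.shape, replay_y.shape)  # (250, 64) (250,)
```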

Original language: English
Title of host publication: Intelligent Data Engineering and Automated Learning – IDEAL 2022 - 23rd International Conference, IDEAL 2022, Proceedings
Editors: Hujun Yin, David Camacho, Peter Tino
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 359-368
Number of pages: 10
ISBN (Print): 9783031217524
DOIs
Publication status: Published - 2022
Event: 23rd International Conference on Intelligent Data Engineering and Automated Learning, IDEAL 2022 - Manchester, United Kingdom
Duration: 2022 Nov 24 – 2022 Nov 26

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 13756 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 23rd International Conference on Intelligent Data Engineering and Automated Learning, IDEAL 2022
Country/Territory: United Kingdom
City: Manchester
Period: 22/11/24 – 22/11/26

Bibliographical note

Funding Information:
This work was supported by Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korean government (MSIT) (No. 2020-0-01361, Artificial Intelligence Graduate School Program (Yonsei University); No. 2022-0-00113, Developing a Sustainable Collaborative Multi-modal Lifelong Learning Framework).

Publisher Copyright:
© 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.

All Science Journal Classification (ASJC) codes

  • Theoretical Computer Science
  • Computer Science (all)
