Anomaly detection is essential for many real-world applications, such as video surveillance, disease diagnosis, and visual inspection. With the development of deep learning, many neural networks have been applied to anomaly detection by learning the distribution of normal data. However, they struggle to distinguish abnormalities when the normal and abnormal images differ only slightly. To mitigate this problem, we propose a novel loss function for one-class anomaly detection: the decentralization loss. The main goal of the proposed method is to disperse the latent features of the encoder over the manifold space, such that the decoder generates images similar to those of the normal class for any input. To this end, a decentralization term, designed from a dispersion measure on the latent vectors, is added to the existing mean-squared-error loss. To obtain a solution that generalizes across datasets, we restrict the latent space by designing the decentralization loss term from an upper bound of the dispersion measure. As intended, a model trained with the proposed decentralization loss disperses vectors over the manifold space and generates near-constant images. Consequently, the reconstruction error increases when the given test image is unknown. Experiments conducted on various datasets verify that the proposed loss improves detection performance by about 1 % while reducing training time by 48 %, without any structural changes to the conventional autoencoder.
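The abstract describes the loss as a mean-squared reconstruction error combined with a decentralization term derived from a dispersion measure on the latent vectors. The following is a minimal NumPy sketch of one plausible form of such a loss; the specific dispersion measure (squared distance from the batch centroid), the weighting factor `lam`, and the function name are illustrative assumptions, and the paper's actual formulation, which works with an upper bound of the dispersion measure, may differ.

```python
import numpy as np

def decentralization_loss(x, x_hat, z, lam=1.0):
    """Sketch of an MSE-plus-decentralization objective.

    x     : batch of input images,        shape (B, ...)
    x_hat : batch of reconstructions,     shape (B, ...)
    z     : batch of latent vectors,      shape (B, D)
    lam   : hypothetical weight on the dispersion term
    """
    # Standard mean-squared reconstruction error.
    mse = np.mean((x - x_hat) ** 2)

    # One plausible dispersion measure: mean squared distance of each
    # latent vector from the batch centroid. Larger values mean the
    # latent codes are more spread out over the manifold space.
    centroid = z.mean(axis=0)
    dispersion = np.mean(np.sum((z - centroid) ** 2, axis=1))

    # Subtracting the dispersion term rewards spreading the latent
    # vectors, which is the stated goal of the decentralization loss.
    return mse - lam * dispersion
```

Minimizing this objective pushes the encoder to scatter latent codes while the decoder is still penalized for poor reconstruction of normal data, which matches the abstract's intuition that the decoder then produces normal-class-like output for any input, inflating the reconstruction error on unseen anomalies.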
Bibliographical note
Funding Information:
This work was supported by the Cross-Ministry ‘Giga Korea Project’ grant from the Ministry of Science, ICT and Future Planning, South Korea [Development and demonstration of 5G-based fashion manufacturing convergence service] under Grant GK19P1500.
© 2020 Institute of Electrical and Electronics Engineers Inc. All rights reserved.