In recent years, various deep learning models have been developed for the fault diagnosis of rotating machines. In practical fault-diagnosis applications, however, a trained model is difficult to deploy immediately because the source and target domains follow different distributions. Additionally, collecting failure data under various operating conditions is time-consuming and expensive. In this paper, we introduce a new method for transforming the latent space between domains using the source-domain data and the normal data of the target domain, which can be collected easily. Inspired by semantic transformations in embedding spaces from the field of word embedding, discrepancies between the source and target distributions are minimized by transforming the latent representation space while preserving fault attributes. To match feature regions and distributions, spatial attention is applied when learning the latent feature spaces, and a 1D CNN-LSTM architecture is implemented to maximize intra-class classification performance. The proposed model was validated on two types of rotating machines: the CWRU rolling-bearing dataset and a gearbox dataset from heavy machinery. Experimental results show that the proposed method achieves higher cross-domain diagnostic accuracy than competing methods, demonstrating reliable generalization performance for rotating machines operating under various conditions.
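The abstract does not give implementation details, but the spatial-attention step it describes can be illustrated with a minimal NumPy sketch. Everything here is an assumption for illustration: the function name `spatial_attention`, the channel-mean scoring, and the feature-map shapes are hypothetical, not taken from the paper.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def spatial_attention(feat):
    """Hypothetical spatial attention over a 1-D feature map.

    feat: array of shape (channels, length), e.g. the output of a
    1D convolutional layer over a vibration signal.
    Scores each spatial position by pooling across channels, then
    reweights the features so the model focuses on salient regions.
    """
    scores = feat.mean(axis=0)        # (length,) one score per position
    weights = softmax(scores)         # attention weights sum to 1
    return feat * weights, weights    # reweighted features + weights

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 16))   # 8 channels, 16 time steps
out, w = spatial_attention(feat)
```

In this sketch the attention weights form a distribution over time steps, so positions with stronger pooled activations contribute more to the reweighted latent representation.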
Bibliographical note
Funding Information:
This work was partly supported by an IITP grant funded by the Korean government (MSIT) (No. 2020-0-01361, AI Graduate School Program (Yonsei University)) and a grant funded by Doosan Infracore, Inc.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland.
All Science Journal Classification (ASJC) codes
- Analytical Chemistry
- Atomic and Molecular Physics, and Optics
- Electrical and Electronic Engineering