New AI System Enhances Fault Detection with Smarter Optimization Techniques
Table of Links
2. Preliminaries and 2.1. Blind deconvolution
2.2. Quadratic neural networks
3.1. Time domain quadratic convolutional filter
3.2. Superiority of cyclic features extraction by QCNN
3.3. Frequency domain linear filter with envelope spectrum objective function
3.4. Integral optimization with uncertainty-aware weighing scheme
4. Computational experiments
4.1. Experimental configurations
4.3. Case study 2: JNU dataset
4.4. Case study 3: HIT dataset
5. Computational experiments
5.2. Classification results on various noise conditions
5.3. Employing ClassBD to deep learning classifiers
5.4. Employing ClassBD to machine learning classifiers
5.5. Feature extraction ability of quadratic and conventional networks
5.6. Comparison of ClassBD filters
3.4. Integral optimization with uncertainty-aware weighing scheme
The fault-diagnosis task typically requires a deep learning classifier. Our module is more flexible than other denoising models, as it can be readily attached to any 1D classifier, such as a Transformer or a CNN. With the classifier added, the loss function evolves into a joint loss:
where 𝑝 and 𝑞 denote the true label and the predicted label, respectively.
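The joint objective described above can be sketched as the BD (denoising) loss plus a classification loss between the true and predicted labels. The sketch below assumes a cross-entropy classification term and a fixed trade-off weight `lam`; both the function names and the fixed weight are illustrative (the paper balances the terms with the uncertainty-aware scheme described later, not a hand-tuned weight).

```python
import numpy as np

def cross_entropy(p_true, q_pred, eps=1e-12):
    """Cross-entropy between one-hot true labels p and predicted probabilities q.

    p_true, q_pred: arrays of shape (batch, num_classes).
    """
    return float(-np.sum(p_true * np.log(q_pred + eps), axis=-1).mean())

def joint_loss(bd_loss, p_true, q_pred, lam=1.0):
    """Hypothetical joint objective: BD loss plus weighted classification loss.

    `lam` is an illustrative fixed trade-off; in the paper the components
    are balanced automatically by the uncertainty-aware weighing scheme.
    """
    return bd_loss + lam * cross_entropy(p_true, q_pred)

# Toy usage: a batch of two samples, two classes.
p = np.array([[1.0, 0.0], [0.0, 1.0]])        # true labels (one-hot)
q = np.array([[0.9, 0.1], [0.2, 0.8]])        # predicted probabilities
total = joint_loss(bd_loss=0.5, p_true=p, q_pred=q)
```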
Our framework seamlessly integrates the objective functions of BD and downstream classifier, thereby incorporating the classification labels as prior information into the BD optimization. Compared to other forms of prior knowledge, such as cyclic frequency [23], classification labels are more readily obtainable without additional estimation, making them more suitable to guide BD in benefiting the downstream tasks.
In this context, optimizing ClassBD is framed as a multi-task learning problem [68]. A key challenge in multi-task learning lies in balancing the different loss components. To address this problem, we employ the so-called uncertainty-aware weighing loss to automatically balance the importance of each loss component in the learning problem [69]. Assuming that all tasks have task-dependent (homoscedastic) uncertainty and that the loss functions of all tasks are subject to Gaussian noise, the likelihood function can be defined as:
Consequently, the joint loss is formulated as:
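As a rough sketch of this weighing scheme: in the standard homoscedastic-uncertainty formulation of [69], each task loss 𝐿ᵢ is scaled by a learned precision 1/σᵢ², with a log σᵢ term acting as a regularizer that keeps the uncertainties from growing unboundedly. The snippet below illustrates one common form of that combined loss; the exact formulation in the paper may differ (e.g. in the per-task scaling constants), and `log_sigmas` would be learnable parameters in a real training loop, not fixed inputs.

```python
import numpy as np

def uncertainty_weighted_loss(losses, log_sigmas):
    """Homoscedastic uncertainty weighting (one common form, after [69]):

        total = sum_i [ 0.5 * exp(-2 * log_sigma_i) * L_i + log_sigma_i ]

    losses:     per-task loss values L_i.
    log_sigmas: per-task log standard deviations (learnable in practice);
                exp(-2 * log_sigma_i) is the precision 1 / sigma_i^2.
    """
    losses = np.asarray(losses, dtype=float)
    log_sigmas = np.asarray(log_sigmas, dtype=float)
    precision = np.exp(-2.0 * log_sigmas)  # 1 / sigma_i^2
    return float(np.sum(0.5 * precision * losses + log_sigmas))

# Toy usage: two tasks (BD loss and classification loss), both with sigma = 1.
total = uncertainty_weighted_loss(losses=[2.0, 4.0], log_sigmas=[0.0, 0.0])
```

With σᵢ = 1 the scheme reduces to a plain half-sum of the task losses; as training drives a task's σᵢ up, that task's contribution is automatically down-weighted while the log σᵢ penalty prevents the trivial solution of ignoring all tasks.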
Authors:
(1) Jing-Xiao Liao, Department of Industrial and Systems Engineering, The Hong Kong Polytechnic University, Hong Kong, Special Administrative Region of China and School of Instrumentation Science and Engineering, Harbin Institute of Technology, Harbin, China;
(2) Chao He, School of Mechanical, Electronic and Control Engineering, Beijing Jiaotong University, Beijing, China;
(3) Jipu Li, Department of Industrial and Systems Engineering, The Hong Kong Polytechnic University, Hong Kong, Special Administrative Region of China;
(4) Jinwei Sun, School of Instrumentation Science and Engineering, Harbin Institute of Technology, Harbin, China;
(5) Shiping Zhang (Corresponding author), School of Instrumentation Science and Engineering, Harbin Institute of Technology, Harbin, China;
(6) Xiaoge Zhang (Corresponding author), Department of Industrial and Systems Engineering, The Hong Kong Polytechnic University, Hong Kong, Special Administrative Region of China.
This paper is available on arXiv under the CC BY 4.0 Deed (Attribution 4.0 International) license.