Variable learning rate EASI-based adaptive blind source separation in situations of nonstationary sources and linear time-varying systems
Cheng Wang^{1} , Haiyang Huang^{2} , Yiwen Zhang^{3} , Yewang Chen^{4}
^{1, 2, 3, 4}College of Computer Science and Technology, Huaqiao University, 361021, Xiamen, P. R. China
^{1}State Key Laboratory for Strength and Vibration of Mechanical Structures, Xi’an Jiaotong University, 710049, Xi’an, P. R. China
^{1}Corresponding author
Journal of Vibroengineering, Vol. 21, Issue 3, 2019, p. 627-638.
https://doi.org/10.21595/jve.2018.20007
Received 2 June 2018; received in revised form 4 December 2018; accepted 26 December 2018; published 15 May 2019
In the case of multiple nonstationary independent source signals and a linear instantaneous time-varying mixing system, it is difficult to separate the source signals adaptively. Therefore, the adaptive blind source separation (BSS) problem is first formally expressed and compared with the traditional BSS problem. Then, we propose an adaptive blind identification and separation method based on the variable learning rate equivariant adaptive source separation via independence (EASI) algorithm. Furthermore, we analyze the scope and conditions of the variable learning rate EASI algorithm. The adaptive BSS simulation results also show that the variable learning rate EASI algorithm provides a better separation effect than the fixed learning rate EASI and recursive least-squares algorithms.
- Online real-time BSS problems are formally expressed and compared with traditional BSS problems.
- This paper applies a variable learning rate EASI algorithm to the adaptive BSS problem.
- The similarity coefficient and vestigial quadratic mismatch (VQM) are used as quantitative evaluation indicators of the waveform similarity of the separated source signals.
- A simulation is designed to verify the correctness of our approach. The fixed learning rate EASI and RLS algorithms are used for comparison.
Keywords: equivariant adaptive source separation via independence, variable learning rate, nonstationary, adaptive blind source separation, linear time-varying system.
1. Introduction
There is an increasing demand for dynamic systems to be safer and more reliable [1]. In mechanical fault diagnosis, the extraction of fault characteristic information is indispensable [2], although the complexity of mechanical devices means that mechanical vibration signals usually exhibit mixing and multipath effects. Furthermore, there may be some frequency superposition in the signals, so it is challenging to extract feature information. When multiple signals are statistically independent, the technique of blind source separation (BSS) can recover the original signals from sensor measurements alone, without knowledge of the sensor locations, the number of signals, and so on [3].
Recently, BSS has attracted considerable attention. It has been widely used in many practical applications, such as telecommunications [4], biomedical signal processing [5], vibration source enumeration and identification [6], image processing [7], voice signal classification [8], fault diagnosis [9], and structural health monitoring [10, 11]. The earliest application of BSS was in the failure analysis of a gearbox [12]. BSS can be conducted through two popular approaches, namely second-order blind identification [13] and independent component analysis (ICA) [8]. Traditional BSS methods have two main disadvantages [14]: they operate offline, and require the entire signal to perform separation and identification. The use of offline BSS methods for online problems is time-consuming and inefficient.
The first adaptive form of BSS was proposed by Jutten and Herault [15]. Cardoso and Laheld developed a different adaptive approach [16], using the 'relative gradient' adaptive algorithm based on serial updating, whereby the separating matrix is updated in each step when a new sample is received. These adaptive algorithms are known as equivariant adaptive source separation via independence (EASI). However, because of their constant learning rate (step-size), typical EASI algorithms face a trade-off between stability and convergence speed [17].
Methods with an adaptive learning rate have been widely studied, as the variable learning rate makes the algorithm effective in nonstationary environments. Xie et al. [18] adjusted the learning rate using mutual information, and Zhang et al. [19] changed the learning rate by estimating a performance index. For time-varying mixing channels, the variable learning rate sign natural gradient algorithm was proposed by Yuan et al. [20]. Shifeng et al. [21] developed a variable learning rate composed of two adaptive separation systems, while Gao et al. [22] designed a performance-index-based EASI method for direct sequence code division multiple access (DS-CDMA) systems. Chambers et al. presented a method for dealing with abrupt changes in the mixing matrix [23], and an adaptive algorithm that can separate noisy time-varying mixtures has been reported by Enescu et al. [24]. DeYoung et al. [25] applied BSS techniques to mixtures of digital communication signals in which the sources are mobile or the environment is changing, and the mixing matrix varies with time. Their results indicate that the main difficulty in the separation phase is the ill-conditioned nature of the channel matrix. Chen et al. [26] used a retrospective online EASI method to deal with the problem of sudden changes in the time-varying environment. A time-varying mixing matrix with stationary source signals was investigated by Bulek et al. [27]. In an earlier article, we proposed two algorithms (recursive least-squares (RLS) and Recursive-EASI) [28] to solve the problem of time-varying source signals and time-varying systems simultaneously.
However, none of the above studies described the adaptive BSS problem in detail. Furthermore, these previous methods focus on algorithm performance and do not evaluate the separation results. In this paper, we formally describe the adaptive BSS problem and propose the use of a variable learning rate EASI-based method to solve the problem of nonstationary source signals in a slowly time-varying environment. The variable learning rate EASI method described in this paper differs fundamentally from our earlier algorithms [28] (Recursive-EASI selects a reference point, whereas the method proposed here uses a variable learning rate), and we present a detailed theoretical derivation. In the simulation section, we describe the parameter settings, evaluation indexes, and simulation analysis.
The primary contributions of this paper can be summarized as follows:
1) The problem of online real-time BSS under a time-varying system and nonstationary source signals is formally expressed and compared with the traditional BSS problem. The differences between the two problem types are explained in detail.
2) This paper applies a variable learning rate EASI algorithm for the problem of adaptive BSS for a mixing matrix that varies slowly with time and a nonstationary environment. In addition, the scope and conditions of the method are identified.
3) The similarity coefficient and vestigial quadratic mismatch (VQM) are used as quantitative evaluation indexes of the waveform similarity of the separated source signals.
4) We design a simulation to verify the correctness of our method. The fixed learning rate EASI and RLS algorithms are used for comparison.
The remainder of this article is arranged as follows. Section 2 introduces the model and the concept of adaptive BSS. Section 3 describes the process of our EASI method in detail. Section 4 presents the simulation verification procedure and results. Finally, we state the conclusions to this study and ideas for future research in Section 5.
2. Adaptive BSS model for linear instantaneous mixing systems and nonstationary source signals
2.1. Description of adaptive BSS
In traditional BSS, the mixing matrix $\mathbf{A}$ is constant. However, in reality, the way in which signals mix varies with time. We denote the linear instantaneous mixing matrix by ${\mathbf{A}}_{t}$. The model can be described as [25, 27, 28]:

$\mathbf{X}\left(t\right)={\mathbf{A}}_{t}\mathbf{S}\left(t\right)+\mathbf{N}\left(t\right),$

where $\mathbf{X}\left(t\right)={\left[{\overrightarrow{x}}_{1}\left(t\right),{\overrightarrow{x}}_{2}\left(t\right),\dots ,{\overrightarrow{x}}_{m}\left(t\right)\right]}^{T}\in {R}^{m\times L}$ denotes the observed signals, ${\mathbf{A}}_{t}\in {R}^{m\times k}$, and $\mathbf{N}\left(t\right)\in {R}^{m\times L}$ denotes the observation noise. $\mathbf{S}\left(t\right)={\left[{\overrightarrow{s}}_{1}\left(t\right),{\overrightarrow{s}}_{2}\left(t\right),\dots ,{\overrightarrow{s}}_{k}\left(t\right)\right]}^{T}\in {R}^{k\times L}$ represents the nonstationary source signals. Without considering any observation noise:

$\mathbf{X}\left(t\right)={\mathbf{A}}_{t}\mathbf{S}\left(t\right).$

In our setting, the separation matrix ${\mathbf{B}}_{t}\in {R}^{k\times m}$ is unknown and varies with time, and the separated multiple nonstationary independent signals $\mathbf{Y}\left(t\right)={\left[{\overrightarrow{y}}_{1}\left(t\right),{\overrightarrow{y}}_{2}\left(t\right),\dots ,{\overrightarrow{y}}_{k}\left(t\right)\right]}^{T}\in {R}^{k\times L}$ are obtained by:

$\mathbf{Y}\left(t\right)={\mathbf{B}}_{t}\mathbf{X}\left(t\right),$

and the latest value of the separated signals is $\overrightarrow{y}\left(t\right)={\mathbf{B}}_{t}\overrightarrow{x}\left(t\right)={\mathbf{B}}_{t}{\mathbf{A}}_{t}\overrightarrow{s}\left(t\right)={\mathbf{E}}_{t}\overrightarrow{s}\left(t\right)\in {R}^{k\times 1}$. We hope that $\mathbf{Y}\left(t\right)$ and $\mathbf{S}\left(t\right)$ are as similar as possible. Fig. 1 illustrates the adaptive BSS model for linear instantaneous mixing systems and nonstationary source signals.
The separating matrix ${\mathbf{B}}_{t}$ is constructed using either a one-stage or two-stage separation system [16, 29]. In the one-stage method, ${\mathbf{B}}_{t}$ is obtained directly by minimizing/maximizing some contrast function. In this paper, we focus on the two-stage approach, in which the observations are first preprocessed by an $m$×$m$ whitening matrix ${\mathbf{V}}_{t}$, and then an orthogonal matrix ${\mathbf{U}}_{t}\in {R}^{k\times m}$ is used to separate the source signals. Finally, we obtain the total separating matrix ${\mathbf{B}}_{t}={\mathbf{U}}_{t}{\mathbf{V}}_{t}$. Fig. 2 illustrates this process.
Fig. 1. Adaptive BSS model for linear instantaneous time-varying mixing systems and nonstationary source signals
Fig. 2. Two-stage separation for adaptive BSS
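The two-stage structure can be illustrated numerically. The NumPy sketch below (hypothetical sources, mixing matrix, and seed; the mixing is held fixed for brevity) carries out only the whitening stage explicitly and leaves the orthogonal stage as the identity, which is the part EASI adapts online:

```python
import numpy as np

rng = np.random.default_rng(0)
k, m, L = 3, 3, 5000

# Hypothetical zero-mean, mutually independent sources (not the paper's signals).
S = np.vstack([
    rng.laplace(size=L),                    # impulsive source
    np.sign(np.sin(0.1 * np.arange(L))),    # square-wave-like source
    rng.uniform(-1.0, 1.0, size=L),         # uniform source
])
A = rng.uniform(size=(m, k))                # instantaneous mixing matrix
X = A @ S                                   # observations X = A S

# Stage 1: whitening matrix V so that z = V x has identity covariance.
C = np.cov(X)
d, E = np.linalg.eigh(C)
V = E @ np.diag(1.0 / np.sqrt(d)) @ E.T
Z = V @ X
assert np.allclose(np.cov(Z), np.eye(m), atol=1e-6)

# Stage 2: an orthogonal matrix U completes B = U V; EASI adapts U online,
# so it is left as the identity in this sketch.
U = np.eye(k)
B = U @ V
Y = B @ X
print(Y.shape)  # (3, 5000)
```

Whitening removes all second-order dependence, so the remaining task of the orthogonal stage is purely a rotation, which is why the two stages can be updated with separate serial rules.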
2.2. Model assumptions of adaptive BSS
BSS problems have multiple solutions [28]. To identify the optimal solution, five basic assumptions are proposed, which are slightly different from those used in traditional BSS [14]:
1) ${\mathbf{A}}_{t}\in {R}^{m\times k}$ satisfies $rank\left({\mathbf{A}}_{t}\right)=k$.
2) $\overrightarrow{s}\left(t\right)$ is a nonstationary random process.
3) $\overrightarrow{s}\left(t\right)$ is statistically independent at each time, and at most one component in $\mathbf{S}\left(t\right)$ obeys a Gaussian distribution.
4) The number of observed signals $\mathbf{X}\left(t\right)$ is greater than or equal to the number of source signals $\mathbf{S}\left(t\right)$.
5) $\mathbf{S}\left(t\right)$ is linearly mixed and the system is slowly timevarying.
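Assumptions 1) and 4) are straightforward to check numerically for any snapshot of the mixing matrix; a small helper sketch (the function name and the example matrix are ours, for illustration only):

```python
import numpy as np

def check_bss_assumptions(A_t: np.ndarray, k: int) -> None:
    """Check structural assumptions 1) and 4) for one snapshot A_t of the mixing matrix."""
    m = A_t.shape[0]
    assert A_t.shape[1] == k, "A_t must have one column per source"
    assert m >= k, "need at least as many observations as sources (assumption 4)"
    assert np.linalg.matrix_rank(A_t) == k, "A_t must have full column rank (assumption 1)"

# Hypothetical 3x3 snapshot of a slowly time-varying mixing matrix.
rng = np.random.default_rng(1)
check_bss_assumptions(rng.uniform(0.1, 1.0, size=(3, 3)), k=3)
```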
2.3. Comparison of adaptive BSS and traditional BSS
Table 1 summarizes the differences between adaptive BSS and traditional BSS in terms of the source signals and mixing matrix.
Table 1. The differences between adaptive BSS and traditional BSS

| Problem | Source signal | Mixing matrix | Processing requirement |
|---|---|---|---|
| Traditional BSS | Stationary | Time-invariant | Offline and batch processing |
| Adaptive BSS | Nonstationary | Time-varying | Online and real-time |
3. Theoretical derivation of linear time-varying mixed BSS using EASI
3.1. Notion of equivariance
The EASI algorithm was first proposed by Cardoso, who used the notion of equivariance to prove that the performance of EASI is independent of the mixing matrix [29] when the mixing matrix is constant. In the same way, a blind estimate of ${\mathbf{A}}_{t}$ is simply a function of $\mathbf{X}\left(t\right)$. This can be expressed as:

${\widehat{\mathbf{A}}}_{t}=\mathbf{F}\left(\mathbf{X}\left(t\right)\right),$

where ${\widehat{\mathbf{A}}}_{t}$ is the estimator of ${\mathbf{A}}_{t}$ and $\mathbf{F}(\cdot )$ represents this functional relationship. In equivariance theory, when the data conversion is equivalent to some parameter conversion, both ${\widehat{\mathbf{A}}}_{t}$ and $\mathbf{X}\left(t\right)$ are multiplied on the left by a matrix $\mathbf{M}\in {R}^{k\times k}$. In this case, ${\widehat{\mathbf{A}}}_{t}$ is equivariant when it satisfies:

$\mathbf{F}\left(\mathbf{M}\mathbf{X}\left(t\right)\right)=\mathbf{M}\mathbf{F}\left(\mathbf{X}\left(t\right)\right).$

Eq. (6) shows that $\widehat{\mathbf{S}}\left(t\right)$ is given by $\widehat{\mathbf{S}}\left(t\right)={\mathbf{F}\left(\mathbf{S}\left(t\right)\right)}^{-1}\mathbf{S}\left(t\right)$, which means that it is related only to $\mathbf{S}\left(t\right)$ and is independent of ${\mathbf{A}}_{t}$; this is also called uniform performance.
3.2. Serial matrix updating
The core concept of EASI is serial updating [16]. Serial updating involves choosing a $k$×$k$ matrix-valued function $y\to H\left(y\right)$, which is used to update the separating matrix ${\mathbf{B}}_{t+1}$ according to:

${\mathbf{B}}_{t+1}=\left(\mathbf{I}-{\lambda}_{t}H\left(\overrightarrow{y}\left(t\right)\right)\right){\mathbf{B}}_{t},$

where ${\lambda}_{t}$ is the learning rate, which is fixed in the traditional EASI algorithm. Fig. 3 illustrates this update process of the separation matrix. The global system ${\mathbf{E}}_{t+1}$ can be similarly updated by:

${\mathbf{E}}_{t+1}=\left(\mathbf{I}-{\lambda}_{t}H\left(\overrightarrow{y}\left(t\right)\right)\right){\mathbf{E}}_{t}.$

Eq. (8) also shows that the update of the overall system ${\mathbf{E}}_{t}$ does not depend on ${\mathbf{A}}_{t}$.
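In code, the serial update of Eq. (7) is a single matrix product per sample. A minimal sketch follows; the placeholder $H$ used here is only the decorrelation term, not the full EASI $H$ derived in Section 3.3:

```python
import numpy as np

def serial_update(B, y, lam, H):
    """One serial update of Eq. (7): B_{t+1} = (I - lam_t * H(y(t))) B_t."""
    return (np.eye(B.shape[0]) - lam * H(y)) @ B

# Hypothetical placeholder H: the plain decorrelation term y y^T - I.
H = lambda y: np.outer(y, y) - np.eye(y.size)

B = np.eye(3)
y = np.array([1.0, 0.5, -0.2])
B_next = serial_update(B, y, lam=0.01, H=H)
print(B_next.shape)  # (3, 3)
```

Because the same multiplicative factor $(\mathbf{I}-\lambda_t H)$ applies to both $\mathbf{B}_t$ and the global system $\mathbf{E}_t$, the trajectory of $\mathbf{E}_t$ is governed by the separated outputs alone, which is the algebraic root of equivariance.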
Fig. 3. Serial update of a matrix
3.3. EASI algorithm
Using the notion of ‘relative gradient’ [16], the function $H\left(y\right)$ in Eq. (7) can be considered as:
In Eq. (9), $f$ is an arbitrary differentiable function. Therefore, ${\mathbf{B}}_{t+1}$ can be updated according to:
Let $\mathbf{I}\in {R}^{k\times k}$ be the unit matrix. Exactly as in Eq. (10), according to [16], the serial whitening matrix ${\mathbf{V}}_{t+1}$ can be updated by:

${\mathbf{V}}_{t+1}=\left(\mathbf{I}-{\lambda}_{t}\left(\overrightarrow{z}\left(t\right){\overrightarrow{z}}^{T}\left(t\right)-\mathbf{I}\right)\right){\mathbf{V}}_{t},$
where $\overrightarrow{y}\left(t\right)={\mathbf{U}}_{t}\overrightarrow{z}\left(t\right)$ and $\overrightarrow{z}\left(t\right)={\mathbf{V}}_{t}\overrightarrow{x}\left(t\right)$ is the signal after whitening. The orthogonal matrix ${\mathbf{U}}_{t+1}$ can also be updated by:

${\mathbf{U}}_{t+1}=\left(\mathbf{I}-{\lambda}_{t}\left({f}'\left(\overrightarrow{y}\left(t\right)\right){\overrightarrow{y}}^{T}\left(t\right)-\overrightarrow{y}\left(t\right){f}'{\left(\overrightarrow{y}\left(t\right)\right)}^{T}\right)\right){\mathbf{U}}_{t}.$
The purpose of Eq. (11) and Eq. (12) is to obtain the updated ${\mathbf{B}}_{t+1}$. Then, according to Eq. (11), Eq. (12), and ${\mathbf{B}}_{t}={\mathbf{U}}_{t}{\mathbf{V}}_{t}$, we have:
${\mathbf{B}}_{t+1}={\mathbf{U}}_{t+1}{\mathbf{V}}_{t+1}=\left(\mathbf{I}-{\lambda}_{t}\left({f}'\left(\overrightarrow{y}\left(t\right)\right){\overrightarrow{y}}^{T}\left(t\right)-\overrightarrow{y}\left(t\right){f}'{\left(\overrightarrow{y}\left(t\right)\right)}^{T}+\overrightarrow{y}\left(t\right){\overrightarrow{y}}^{T}\left(t\right)-\mathbf{I}\right)+O\left({\lambda}_{t}^{2}\right)\right){\mathbf{B}}_{t}\approx {\mathbf{B}}_{t}-{\lambda}_{t}\left({f}'\left(\overrightarrow{y}\left(t\right)\right){\overrightarrow{y}}^{T}\left(t\right)-\overrightarrow{y}\left(t\right){f}'{\left(\overrightarrow{y}\left(t\right)\right)}^{T}+\overrightarrow{y}\left(t\right){\overrightarrow{y}}^{T}\left(t\right)-\mathbf{I}\right){\mathbf{B}}_{t}.$
At a separating solution, $E\left({f}'\left(\overrightarrow{y}\left(t\right)\right){\overrightarrow{y}}^{T}\left(t\right)-\overrightarrow{y}\left(t\right){f}'{\left(\overrightarrow{y}\left(t\right)\right)}^{T}+\overrightarrow{y}\left(t\right){\overrightarrow{y}}^{T}\left(t\right)-\mathbf{I}\right)=0$, so that for $i\ne j$, ${\overrightarrow{y}}_{i}\left(t\right)$ and ${\overrightarrow{y}}_{j}\left(t\right)$ are independent of each other. For $k$ arbitrary nonlinear functions $g\left(y\right)$, we define:
The EASI algorithm for adaptive source separation is based on Eq. (7) and Eq. (15):

${\mathbf{B}}_{t+1}={\mathbf{B}}_{t}-{\lambda}_{t}\left(\overrightarrow{y}\left(t\right){\overrightarrow{y}}^{T}\left(t\right)-\mathbf{I}+g\left(\overrightarrow{y}\left(t\right)\right){\overrightarrow{y}}^{T}\left(t\right)-\overrightarrow{y}\left(t\right)g{\left(\overrightarrow{y}\left(t\right)\right)}^{T}\right){\mathbf{B}}_{t}.$
To maintain uniform performance, which means that ${\mathbf{B}}_{t}$ can take any value, we adjust $H\left(\overrightarrow{y}\left(t\right)\right)$ to preserve stability. Eq. (17) describes the normalized form [16]:

$H\left(\overrightarrow{y}\left(t\right)\right)=\frac{\overrightarrow{y}\left(t\right){\overrightarrow{y}}^{T}\left(t\right)-\mathbf{I}}{1+{\lambda}_{t}{\overrightarrow{y}}^{T}\left(t\right)\overrightarrow{y}\left(t\right)}+\frac{g\left(\overrightarrow{y}\left(t\right)\right){\overrightarrow{y}}^{T}\left(t\right)-\overrightarrow{y}\left(t\right)g{\left(\overrightarrow{y}\left(t\right)\right)}^{T}}{1+{\lambda}_{t}\left|{\overrightarrow{y}}^{T}\left(t\right)g\left(\overrightarrow{y}\left(t\right)\right)\right|}.$
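A single serial update in the normalized form attributed to [16] can be sketched as follows. The mixing matrix, sources, seed, and step size are hypothetical; the nonlinearity $g(y)=y^{3}$ matches the choice made later in this paper:

```python
import numpy as np

def easi_step(B, x, lam, g=lambda y: y**3):
    """One normalized EASI serial update of the separating matrix B.
    Follows the normalized form of [16]; variable names are ours."""
    y = B @ x                           # current separated sample y(t) = B_t x(t)
    gy = g(y)
    I = np.eye(B.shape[0])
    # Decorrelation term and rotation term, each with a stabilizing denominator.
    H = (np.outer(y, y) - I) / (1.0 + lam * (y @ y)) + \
        (np.outer(gy, y) - np.outer(y, gy)) / (1.0 + lam * abs(y @ gy))
    return B - lam * H @ B

# Hypothetical usage on a stream of mixed samples.
rng = np.random.default_rng(2)
A = rng.uniform(size=(3, 3))            # unknown mixing matrix (fixed here)
B = np.eye(3)
for _ in range(500):
    s = rng.standard_normal(3)          # one sample of independent sources
    B = easi_step(B, A @ s, lam=0.002)
assert np.all(np.isfinite(B))
```

Note that the update touches only the current sample $\vec{x}(t)$, which is what makes the algorithm online and gives it constant cost per sample.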
3.4. Advantages of EASI algorithm
EASI is a gradientbased algorithm that uses nonlinear decorrelation to estimate the independent components. It has several advantages over other ICA algorithms [30].
1) EASI is an adaptive and online algorithm, which makes it suitable for problems in which the underlying distributions of input characteristics vary over time.
2) EASI is equivariant, and the convergence speeds, interference suppression levels, and other properties are only related to the signals’ normalized distributions and are independent of the mixing matrix.
3) EASI offers improved parallelism by combining whitening with separation, whereas other methods whiten the input features in a separate preprocessing step.
4) EASI is computationally efficient, as its basic operations only require addition and multiplication.
3.5. Variable learning rate EASI algorithm
In the EASI algorithm, the learning rate is closely related to the convergence speed and steadystate error. When the learning rate is fixed, values that are too large will make the algorithm unstable, whereas values that are too small will increase the convergence time [17].
Given the limitations of a fixed learning rate, we use a typical time-descending learning rate given by [31, 32]:
where ${\lambda}_{0}$, ${t}_{0}$, and ${t}_{d}$ are constants.
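The exact schedule of Eq. (18) is not reproduced in this rendering. Purely as an illustration of a time-descending rate with constants ${\lambda}_{0}$, ${t}_{0}$, and ${t}_{d}$ (a hypothetical form, not necessarily the paper's), one could hold the rate constant and then decay it:

```python
import math

def time_descending_rate(t, lam0=0.014, t0=200, td=0.025):
    """Illustrative time-descending learning rate (hypothetical form, not Eq. (18)):
    hold lam0 until t0, then decay exponentially at rate td."""
    return lam0 if t < t0 else lam0 * math.exp(-td * (t - t0))

rates = [time_descending_rate(t) for t in (0, 200, 400, 800)]
print(rates[0], rates[-1] < rates[1])  # 0.014 True
```

Any schedule of this family realizes the trade-off discussed above: a large early rate for fast convergence, then a small late rate for low steady-state error.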
3.6. Scope and conditions of variable learning rate EASI algorithm
1) The source signals are statistically independent.
2) The source signals are nonstationary, and the mixing is linear, instantaneous, and slowly time-varying.
3) The number of observed signals is greater than or equal to the number of source signals, and the number of signals does not change over time.
4. Simulation verification
4.1. Simulation dataset and parameter settings
We design simulations to demonstrate the performance of our variable learning rate EASI-based adaptive BSS method. We choose three different signals: a nonstationary wave ${\overrightarrow{s}}_{1}\left(t\right)$, a square wave ${\overrightarrow{s}}_{2}\left(t\right)$, and a triangular wave ${\overrightarrow{s}}_{3}\left(t\right)$. These are specified as follows:
${\overrightarrow{s}}_{1}\left(t\right)$, ${\overrightarrow{s}}_{2}\left(t\right)$, and ${\overrightarrow{s}}_{3}\left(t\right)$ are independent of each other. For ${\overrightarrow{s}}_{1}\left(t\right)$, which is not periodic, we calculate the expectation over the intervals [1, 400 s] and [201, 600 s] to be 0.0118 and 0.0082, and the root mean square to be 1.4982 and 1.0013, respectively. The expectation and root mean square change with time; therefore, ${\overrightarrow{s}}_{1}\left(t\right)$ has the property of nonstationarity. The linear instantaneous mixing matrix ${\mathbf{A}}_{t}\in {R}^{3\times 3}$ is slowly time-varying. We use MATLAB's "rand" function to randomly generate ${\mathbf{A}}_{t}$ as follows:
Then, we allow the first element of ${\mathbf{A}}_{t}$ to vary slowly:
Therefore, we have designed nonstationary sources and a slowly time-varying mixing matrix. The nonlinear functions $\overrightarrow{g}\left(\overrightarrow{y}\left(t\right)\right)$ are related to the signal distributions. When $\overrightarrow{y}\left(t\right)$ is a sub-Gaussian signal, the choice may be $\overrightarrow{g}\left(\overrightarrow{y}\left(t\right)\right)={\overrightarrow{y}\left(t\right)}^{3}$. When $\overrightarrow{y}\left(t\right)$ is a super-Gaussian signal, the usual choice is $\overrightarrow{g}\left(\overrightarrow{y}\left(t\right)\right)=\mathrm{tanh}\left(\overrightarrow{y}\left(t\right)\right)$ [17]. In this paper, $\overrightarrow{g}\left(\overrightarrow{y}\left(t\right)\right)={\overrightarrow{y}\left(t\right)}^{3}$. The sampling length $L=$ 2000 s.
For comparison, we implemented a fixed learning rate EASI, an RLS method, and the variable learning rate EASI. For the fixed learning rate algorithms, after extensive experiments, we set the EASI learning rate to 0.0017 and the RLS learning rate to 0.982. For the variable learning rate parameters, we set ${\lambda}_{0}=$ 0.014, ${t}_{0}=$ 200, and ${t}_{d}=$ 0.025.
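The windowed mean/RMS check used above to establish that ${\overrightarrow{s}}_{1}\left(t\right)$ is nonstationary can be written as a small helper; the signal below is a hypothetical stand-in with a scale change, not the paper's ${\overrightarrow{s}}_{1}\left(t\right)$:

```python
import numpy as np

def windowed_stats(s, windows):
    """Mean and RMS of signal s over each (start, end) sample window."""
    return [(float(s[a:b].mean()), float(np.sqrt(np.mean(s[a:b] ** 2))))
            for a, b in windows]

# Hypothetical nonstationary signal: its scale drops halfway through,
# mimicking the change in RMS reported for s1(t).
rng = np.random.default_rng(3)
s1 = np.concatenate([1.5 * rng.standard_normal(1000),
                     1.0 * rng.standard_normal(1000)])
for mu, rms in windowed_stats(s1, [(0, 1000), (1000, 2000)]):
    print(f"mean={mu:.4f}, rms={rms:.4f}")
```

A clear change in the windowed statistics over time is exactly what qualifies a signal as nonstationary in the sense of assumption 2).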
4.2. Source separation evaluation index
4.2.1. Similarity coefficient
We wish to consider the similarity while completely eliminating the uncertainty in the order and amplitude of the output components. Most existing algorithms use the performance index (PI) as an evaluation metric, representing the closeness between ${\mathbf{B}}_{t}^{-1}$ and the mixing matrix ${\mathbf{A}}_{t}$ [33]. The similarity coefficient between the $i$th separated signal ${\overrightarrow{y}}_{i}\left(t\right)$ after normalization and the $j$th real random fault source ${\overrightarrow{s}}_{j}\left(t\right)$ is given by:

${\varsigma}_{i,j}=\frac{\left|\sum_{t=1}^{L}{\overrightarrow{y}}_{i}\left(t\right){\overrightarrow{s}}_{j}\left(t\right)\right|}{\sqrt{\sum_{t=1}^{L}{\overrightarrow{y}}_{i}^{2}\left(t\right)\sum_{t=1}^{L}{\overrightarrow{s}}_{j}^{2}\left(t\right)}}.$

When the only difference between ${\overrightarrow{y}}_{i}\left(t\right)$ and ${\overrightarrow{s}}_{j}\left(t\right)$ is in their amplitudes, ${\varsigma}_{i,j}=$ 1; when ${\overrightarrow{y}}_{i}\left(t\right)$ is independent of ${\overrightarrow{s}}_{j}\left(t\right)$, ${\varsigma}_{i,j}=$ 0. Therefore, we expect to achieve values close to 1. The similarity coefficient ${\xi}_{i,j}$ between the $i$th separated signal ${\overrightarrow{y}}_{i}\left(t\right)$ and the $j$th separated signal ${\overrightarrow{y}}_{j}\left(t\right)$ after normalization is defined analogously, replacing ${\overrightarrow{s}}_{j}\left(t\right)$ with ${\overrightarrow{y}}_{j}\left(t\right)$. When ${\overrightarrow{y}}_{i}\left(t\right)$ is independent of ${\overrightarrow{y}}_{j}\left(t\right)$, ${\xi}_{i,j}=$ 0; we therefore hope to achieve values close to 0.
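Reading the similarity coefficient as an absolute normalized cross-correlation of zero-meaned sequences (our assumption, consistent with the amplitude- and sign-invariance properties stated above), it can be computed as:

```python
import numpy as np

def similarity(u, v):
    """Absolute normalized cross-correlation of two zero-meaned sequences;
    one common reading of the similarity coefficient (amplitude- and
    sign-invariant)."""
    u = u - u.mean()
    v = v - v.mean()
    return float(abs(u @ v) / np.sqrt((u @ u) * (v @ v)))

t = np.linspace(0.0, 1.0, 1000)
s = np.sin(2 * np.pi * 5 * t)
assert np.isclose(similarity(s, -3.0 * s), 1.0)        # amplitude/sign ignored
assert similarity(s, np.cos(2 * np.pi * 5 * t)) < 0.1  # near-orthogonal pair
```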
4.2.2. Vestigial quadratic mismatch
The Vestigial Quadratic Mismatch (VQM) between the separated signals and the real random fault source can be used as a performance index [34]. This metric is calculated as:
where:
When the value of VQM is less than −23 dB, the effect of adaptive BSS can be considered very good.
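The VQM definition from [34] is not reproduced in this rendering. As an illustration only, the following hypothetical mismatch measure (residual power in dB after least-squares amplitude alignment, our construction rather than the paper's exact formula) reproduces the stated behavior that more negative values indicate better separation:

```python
import numpy as np

def vqm_db(y, s):
    """Illustrative quadratic mismatch in dB (hypothetical definition, not the
    exact formula of [34]): power of the residual after least-squares amplitude
    alignment, relative to the power of y."""
    a = float(y @ s) / float(s @ s)   # optimal scale aligning s to y
    resid = y - a * s
    return float(10.0 * np.log10((resid @ resid) / (y @ y)))

t = np.linspace(0.0, 1.0, 1000)
s = np.sin(2 * np.pi * 3 * t)
y = 0.8 * s + 0.01 * np.cos(2 * np.pi * 7 * t)  # nearly perfect recovery
assert vqm_db(y, s) < -23.0  # comfortably below the -23 dB threshold
```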
4.3. Simulation results
The source signals used in this simulation are shown in Fig. 4. From top to bottom, they are a nonstationary wave ${\overrightarrow{s}}_{1}\left(t\right)$, a square wave ${\overrightarrow{s}}_{2}\left(t\right)$, and a triangular wave ${\overrightarrow{s}}_{3}\left(t\right)$. After linear instantaneous mixing, the observed signals are shown in Fig. 5. Figs. 6-8 show the separation results of the fixed learning rate EASI, the fixed learning rate RLS, and the variable learning rate EASI methods.
Fig. 4. Three different nonstationary source signals
Fig. 5. Signals after linear mixing
Fig. 6. Estimated signals output by the fixed learning rate EASI algorithm
Fig. 7. Estimated signals output by the fixed learning rate RLS algorithm
Fig. 8. Estimated signals output by the variable learning rate EASI algorithm
Table 2 presents the separation results achieved by EASI, RLS, and variable learning rate EASI. Table 3 demonstrates the independence of the source signals. Table 4 lists the similarity between the separated signals given by EASI, RLS, and variable learning rate EASI.
Fig. 9 shows the variation of the first element of the mixing matrix over time.
Table 2. Separation results of the three methods

| Method | Corresponding relationship | Similarity coefficient | VQM |
|---|---|---|---|
| EASI | ${\overrightarrow{y}}_{1}\left(t\right)$ corresponding to ${\overrightarrow{s}}_{1}\left(t\right)$ | 0.8233 | –3.2300 |
| | ${\overrightarrow{y}}_{2}\left(t\right)$ corresponding to ${\overrightarrow{s}}_{3}\left(t\right)$ | 0.9446 | –9.0979 |
| | ${\overrightarrow{y}}_{3}\left(t\right)$ corresponding to ${\overrightarrow{s}}_{2}\left(t\right)$ | 0.9799 | –13.8283 |
| RLS | ${\overrightarrow{y}}_{1}\left(t\right)$ corresponding to ${\overrightarrow{s}}_{1}\left(t\right)$ | 0.8066 | –2.6945 |
| | ${\overrightarrow{y}}_{2}\left(t\right)$ corresponding to ${\overrightarrow{s}}_{3}\left(t\right)$ | 0.9446 | –9.0652 |
| | ${\overrightarrow{y}}_{3}\left(t\right)$ corresponding to ${\overrightarrow{s}}_{2}\left(t\right)$ | 0.9863 | –15.5217 |
| Variable learning rate EASI | ${\overrightarrow{y}}_{1}\left(t\right)$ corresponding to ${\overrightarrow{s}}_{1}\left(t\right)$ | 0.9575 | –10.4256 |
| | ${\overrightarrow{y}}_{2}\left(t\right)$ corresponding to ${\overrightarrow{s}}_{3}\left(t\right)$ | 0.9845 | –14.8274 |
| | ${\overrightarrow{y}}_{3}\left(t\right)$ corresponding to ${\overrightarrow{s}}_{2}\left(t\right)$ | 0.9911 | –21.5032 |

Table 3. Similarity and independence between source signals

| Corresponding relation between source signals | Similarity coefficient | $P$-value of Chi-square test |
|---|---|---|
| ${\overrightarrow{s}}_{1}\left(t\right)$ corresponding to ${\overrightarrow{s}}_{2}\left(t\right)$ | 0.0017 | 0.4895 |
| ${\overrightarrow{s}}_{1}\left(t\right)$ corresponding to ${\overrightarrow{s}}_{3}\left(t\right)$ | –0.0027 | 0.4012 |
| ${\overrightarrow{s}}_{2}\left(t\right)$ corresponding to ${\overrightarrow{s}}_{3}\left(t\right)$ | 0.0013 | 1 |

Table 4. Similarity between separated signals

| Method | Corresponding relationship | Similarity coefficient |
|---|---|---|
| EASI | ${\overrightarrow{y}}_{1}\left(t\right)$ corresponding to ${\overrightarrow{y}}_{2}\left(t\right)$ | 0.2383 |
| | ${\overrightarrow{y}}_{1}\left(t\right)$ corresponding to ${\overrightarrow{y}}_{3}\left(t\right)$ | 0.3525 |
| | ${\overrightarrow{y}}_{2}\left(t\right)$ corresponding to ${\overrightarrow{y}}_{3}\left(t\right)$ | 0.1662 |
| RLS | ${\overrightarrow{y}}_{1}\left(t\right)$ corresponding to ${\overrightarrow{y}}_{2}\left(t\right)$ | 0.0982 |
| | ${\overrightarrow{y}}_{1}\left(t\right)$ corresponding to ${\overrightarrow{y}}_{3}\left(t\right)$ | 0.2144 |
| | ${\overrightarrow{y}}_{2}\left(t\right)$ corresponding to ${\overrightarrow{y}}_{3}\left(t\right)$ | 0.1557 |
| Variable learning rate EASI | ${\overrightarrow{y}}_{1}\left(t\right)$ corresponding to ${\overrightarrow{y}}_{2}\left(t\right)$ | 0.0531 |
| | ${\overrightarrow{y}}_{1}\left(t\right)$ corresponding to ${\overrightarrow{y}}_{3}\left(t\right)$ | 0.0172 |
| | ${\overrightarrow{y}}_{2}\left(t\right)$ corresponding to ${\overrightarrow{y}}_{3}\left(t\right)$ | 0.0447 |
Fig. 9. The variation of ${\mathbf{A}}_{t}\left(\mathrm{1,1}\right)$ over time
4.4. Simulation results analysis
The simulation results can be analyzed as follows:
1) The first separated signal in Fig. 6 corresponds to the first source signal in Fig. 4, the second separated signal ${\overrightarrow{y}}_{2}\left(t\right)$ corresponds to the third source signal ${\overrightarrow{s}}_{3}\left(t\right)$, and ${\overrightarrow{y}}_{3}\left(t\right)$ corresponds to ${\overrightarrow{s}}_{2}\left(t\right)$. The order of the separated signals is uncertain, which is an inherent indeterminacy of adaptive BSS algorithms.
2) From Fig. 4, we can see that ${\overrightarrow{s}}_{1}\left(t\right)$ is nonstationary. In Table 3, a $p$-value close to 0 would indicate that the source signals are not independent; the observed $p$-values are large. Therefore, the sources are nonstationary and independent, and can be used to verify the algorithm.
3) For all methods, the shapes of the identified signals are similar to those of the source signals (see Fig. 4 and Figs. 6-8). This result is confirmed by Table 2. In the case of a nonstationary environment and time-varying mixing, the fixed learning rate EASI and RLS are not as good as the variable learning rate EASI.
4) In Table 4, the similarity coefficients between the signals separated by the variable learning rate EASI are all below 0.06, demonstrating that the correlation between the separated signals is low and the separation is very good.
5) Fig. 9 illustrates the exponential increase of ${\mathbf{A}}_{t}\left(\mathrm{1,1}\right)$. Though ${\mathbf{A}}_{1}$ was chosen at random, the selection of the initial value ${\mathbf{A}}_{1}\left(\mathrm{1,1}\right)$ has a significant influence on the simulation results. The reason for this requires further study.
6) In Tables 2 and 4, the variable learning rate has a significant impact on the results. The variable learning rate EASI achieves better results than the other algorithms, and RLS outperforms EASI. The separation signals obtained by all methods are similar to the source signals, and the correlation between the separated signals is not high.
7) Similarity coefficients and VQM can both be used to evaluate the separation results. In Table 2, when the similarity coefficients are the same, VQM can still distinguish the results.
5. Conclusions
In this paper, we have described a variable learning rate EASI algorithm for adaptive BSS with a nonstationary source and slowly timevarying environment. This algorithm achieves good accuracy in terms of source separation.
However, the accuracy of the algorithm is closely related to the choice of parameters. The current learning rate and initial values are selected in advance based on experience, rather than from the degree of nonstationarity of the system response signal. In future work, we will investigate more complex time-varying situations, such as changes in the number of sources or in the mixing matrix dimensions, explore the effects of parameter values on the results, and present experimental verification in real scenarios.
Acknowledgements
This work was supported by the National Natural Science Foundation of China (Grant Nos. 51305142, 51305143), the General Financial Grant from the China Postdoctoral Science Foundation (Grant No. 2014M552429) and the Project of Young Teacher Education Research of Education Department of Fujian Province of China (Grant No. JAT170038). We thank Stuart Jenkinson, Ph.D., from Liwen Bianji, Edanz Group China (www.liwenbianji.cn/ac), for editing the English text of a draft of this manuscript.
References
 Chen J., Patton R. J. Robust ModelBased Fault Diagnosis for Dynamic Systems. Springer Science and Business Media, 2012. [Search CrossRef]
 Wang H., Ji X., Wang X., et al. Fault feature extraction of fan bearing based on improved mathematical morphological unsampled wavelet. Chinese Automation Congress, 2017, p. 31883192. [Search CrossRef]
 Li Z., Yan X., Tian Z., Yuan C., Peng Z., Li L. Blind vibration component separation and nonlinear feature extraction applied to the nonstationary vibration signals for the gearbox multifault diagnosis. Measurement, Vol. 46, Issue 1, 2013, p. 259271. [Publisher]
 Sayoud A., Djendi M., Medahi S., et al. A dual fast NLMS adaptive filtering algorithm for blind speech quality enhancement. Applied Acoustics, Vol. 135, 2018, p. 101110. [Publisher]
 Takada H., Ogawa T., Matsumoto H. Blind signal separation for heart sound and lung sound from auscultatory sound based on the high order statistics. International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS), 2017, p. 201205. [Publisher]
 Cheng W., He Z. J., Zhang Z. S. A comprehensive study of vibration signals for a thin shell structure using enhanced independent component analysis and experimental validation. Journal of Vibration and Acoustics, Vol. 136, Issue 4, 2014, p. 041011. [Publisher]
 Hasan A. M., Melli A., Wahid K. A., et al. Denoising lowdose CT images using multiframe blind source separation and block matching filter. IEEE Transactions on Radiation and Plasma Medical Sciences, Vol. 2, Issue 4, 2018, p. 279287. [Publisher]
 Katoozian D., Faradji F. Singer’s voice elimination from stereophonic pop music using ICA. 3rd Iranian Conference on Intelligent Systems and Signal Processing (ICSPIS), 2017, p. 174177. [Search CrossRef]
 Cheng W., Jianying W., Bineng Z., et al. Negentropy and gradient iteration based fast independent component analysis for multiple random fault sources blind identification and separation. International Journal of Applied Electromagnetics and Mechanics, Vol. 52, Issues 12, 2016, p. 711719. [Publisher]
 Qiu L., Liu B., Yuan S., et al. Impact imaging of aircraft composite structure based on a modelindependent spatialwavenumber filter. Ultrasonics, Vol. 64, 2016, p. 1024. [Publisher]
 Zhong Y., Yuan S., Qiu L. Multiimpact source localisation on aircraft composite structure using uniform linear PZT sensors array. Structure and Infrastructure Engineering, Vol. 11, Issue 3, 2015, p. 310320. [Publisher]
 Alexander Y. Learning methods for mechanical vibration analysis and health monitoring. Ph.D. Thesis, Delft University of Technology, 1998. [Search CrossRef]
 Popescu T. D., Manolescu M. Blind source separationa tool for multivariate time series forecasting. The 9th IEEE International Conference on Modelling, Identification and Control (ICMIC). 2017, p. 1012. [Search CrossRef]
 Amini F., Ghasemi V. Adaptive modal identification of structures with equivariant adaptive separation via independence approach. Journal of Sound and Vibration, Vol. 413, 2018, p. 6678. [Publisher]
 Jutten C., Herault J. Blind separation of sources, part I: an adaptive algorithm based on neuromimetic architecture. Signal Process, Vol. 24, Issue 1, 1991, p. 110. [Publisher]
 Cardoso J. F., Donoho D. L. Equivariant adaptive source separation. IEEE Transactions on Signal Processing, Vol. 44, Issue 12, 1996, p. 30173030. [Publisher]
Xu J., Shen Y., Su Q., et al. A fast online separation algorithm for convolutive mixture model in WSDM. 5th International Conference on Computer Science and Network Technology (ICCSNT), 2016, p. 720-724.
Xie X., Shi Q., Wu R. A new variable step-size equivariant adaptive source separation algorithm. Asia-Pacific Conference on Communications, 2007, p. 479-482.
Zhang T., Li L., Zhang G., et al. Use estimation of Performance Index to realize adaptive blind source separation. 4th International Congress on Image and Signal Processing (CISP), Vol. 5, 2011, p. 2322-2326.
Yuan L., Wang W., Chambers J. A. Variable step-size sign natural gradient algorithm for sequential blind source separation. IEEE Signal Processing Letters, Vol. 12, Issue 8, 2005, p. 589-592.
Ou S., Gao Y., Jin G., et al. Variable step size algorithm for blind source separation using a combination of two adaptive separation systems. 5th International Conference on Natural Computation, Vol. 3, 2009, p. 649-652.
Gao L., Zhang T., He D., et al. A variable step-size EASI algorithm based on PI for DS-CDMA system blind estimation. 5th International Congress on Image and Signal Processing (CISP), 2012.
Chambers J. A., Jafari M. G., McLaughlin S. Variable step-size EASI algorithm for sequential blind source separation. Electronics Letters, Vol. 40, Issue 6, 2004, p. 393-394.
Enescu M., Koivunen V. Tracking time-varying mixing system in blind separation. Proceedings of the 2000 IEEE Sensor Array and Multichannel Signal Processing Workshop (SAM 2000), 2000.
Deyoung M. R., Evans B. L. Blind source separation with a time-varying mixing matrix. Conference Record of the Forty-First Asilomar Conference on Signals, Systems and Computers, 2007.
Chen H. P., Zhang H., Zhang J. Retrospective online EASI blind source separation algorithm. Journal of Signal Processing, Vol. 4, 2013, p. 24-31.
Bulek S., Erdol N. Block adaptive ICA with a time-varying mixing matrix. Digital Signal Processing Workshop and 5th IEEE Signal Processing Education Workshop, 2009.
Wang C., Hu Y., Zhan W., et al. Multiple random fault sources adaptive blind separation in situation of time-varying source signals and system. Vibroengineering Procedia, Vol. 14, 2017, p. 82-86.
Zhu X. L., Zhang X. D. Adaptive RLS algorithm for blind source separation using a natural gradient. IEEE Signal Processing Letters, Vol. 9, Issue 12, 2002, p. 432-435.
Nazemi M., Nazarian S., Pedram M. High-performance FPGA implementation of equivariant adaptive separation via independence algorithm for independent component analysis. IEEE 28th International Conference on Application-specific Systems, Architectures and Processors (ASAP), 2017.
Zhu X., Zhang X., Ye J. Natural gradient-based recursive least-squares algorithm for adaptive blind source separation. Science in China, Vol. 47, Issue 1, 2004, p. 55-65.
Yang H. H. Serial updating rule for blind separation derived from the method of scoring. IEEE Transactions on Signal Processing, Vol. 47, Issue 8, 1999, p. 2279-2285.
Cichocki A., Orsier B., Back A., et al. Online adaptive algorithms in nonstationary environments using a modified conjugate gradient approach. Neural Networks for Signal Processing VII, Proceedings of the 1997 IEEE Signal Processing Society Workshop, 1997, p. 316-325.
Gelle G., Colas M., Delaunay G. Blind sources separation applied to rotating machines monitoring by acoustical and vibrations analysis. Mechanical Systems and Signal Processing, Vol. 14, Issue 3, 2000, p. 427-442.