Non-orthogonal joint block diagonalization based on the LU or QR factorizations for convolutive blind source separation

Lei Zhang1 , Yueyun Cao2 , Zichun Yang3 , Lei Weng4

1, 2, 3, 4Power Engineering college, Naval University of Engineering, WuHan 430033, P. R. China

1Corresponding author

Journal of Vibroengineering, Vol. 19, Issue 5, 2017, p. 3380-3394. https://doi.org/10.21595/jve.2017.18039
Received 20 November 2016; received in revised form 18 January 2017; accepted 8 February 2017; published 15 August 2017

Copyright © 2017 JVE International Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract.

This article addresses the problem of blind source separation in the common case where the source signals are convolutive mixtures and, moreover, generally do not satisfy the assumption of independent identical distribution. One prevailing and representative class of approaches for overcoming these difficulties is the joint block diagonalization (JBD) method. To improve on existing JBD methods, we present a class of simple Jacobi-type JBD algorithms based on the LU or QR factorizations. Using Jacobi-type matrices, the high-dimensional minimization problem is replaced by a sequence of simple one-dimensional subproblems. The novel methods are more general: the target matrices need not be orthogonal, positive definite or symmetric, a preliminary whitening stage is no longer compulsory, and convergence is guaranteed. The performance of the proposed algorithms, compared with existing state-of-the-art JBD algorithms, is evaluated with computer simulations and vibration experiments. The numerical results demonstrate that the two novel algorithms are robust and effective and provide a significant improvement: less convergence time, higher convergence precision and a better success rate of block diagonalization. The proposed algorithms are also effective in separating vibration signals of the convolutive mixture type.

Keywords: joint block diagonalization (JBD), convolutive blind source separation (CBSS), non-orthogonal matrix, convergence, LU and QR.

1. Introduction

Blind source separation (BSS) deals with the problem of finding both the unknown input sources and the unknown mixing system from observed output mixtures only. BSS has recently become the focus of intensive research due to its high potential in many applications such as antenna processing, speech processing and pattern recognition [1-3]. Recent successes of BSS have also been reported in mechanical engineering [4-7]. Two kinds of BSS models appear in these applications: instantaneous and convolutive mixture models. Instantaneous mixture models, with their simple structure, have been described in many papers and books [8-10]. However, when dealing with convolutive mixture signals, BSS faces a number of difficulties that seriously hinder its feasibility [11, 12]. Research on separating convolutive mixtures is ongoing, yet no fully satisfying algorithm has been proposed so far.

Many approaches have been proposed to solve the convolutive BSS (CBSS) problem in recent years. One prevailing and representative class of approaches is joint block diagonalization (JBD), which can produce a potentially more elegant solution for CBSS in the time domain [13]. In this article, we focus on the JBD problem, first treated in [14] for a set of positive definite symmetric matrices; the same conditions were also considered in [15]. To solve the JBD problem, Belouchrani sketched several Jacobi strategies in [16-18]: the JBD problem was turned into a minimization problem treated by iterative methods, in which, as a product of Givens rotations, each rotation minimized a block-diagonality criterion around a fixed axis. Févotte and Theis [19, 20] pointed out that the behavior of the Jacobi approach depends strongly on the initialization of the orthogonal basis and also on the choice of the successive rotations, and they proposed strategies to improve the efficiency of JBD. But several critical constraints remain: the joint block-diagonalizer is an orthogonal (unitary in the complex case) matrix, and the spatial pre-whitening is likely to introduce a larger error which, moreover, cannot be corrected in the subsequent analysis. In [21] a gradient-based JBD algorithm was used to achieve the same task with non-unitary joint block-diagonalizers. This approach suffers from a slow convergence rate since its iteration uses a fixed step size. To overcome this shortcoming, gradient-based JBD algorithms with optimal step size have been provided and studied in [22-24]. However, these algorithms are apt to converge to a local minimum and have low computational efficiency. To eliminate the degenerate solutions of the non-unitary JBD algorithm, Zhang [25] optimized a penalty-term-based weighted least-squares criterion.
In [26], a novel tri-quadratic cost function was introduced, and an efficient algebraic method based on triple iterations was used to search for the minimum of the cost function. Unfortunately, this method yields redundant solutions and introduces errors when the mixing matrix is inverted. Some new Jacobi-like algorithms for non-orthogonal joint diagonalization have been proposed [27, 28], but they cannot be used to solve the block diagonalization problem.

Our purpose here is twofold: to tackle the approximate JBD problem while discarding the orthogonality constraint on the joint block-diagonalizer, i.e., imposing as few assumptions as possible on the matrix set, and to propose JBD algorithms characterized by simplicity, effectiveness and computational efficiency. We therefore suggest two novel non-orthogonal JBD algorithms based on Jacobi-type schemes. The new methods extend the joint diagonalization (JD) algorithms [29] based on the LU and QR decompositions mentioned in [30, 31] to the block diagonal case.

This article is organized as follows: the JBD problem is stated in Section 2.1. The two proposed algorithms are derived in Section 2.2, and their convergence is proved in Section 2.3. In Section 3, we show how to apply the JBD algorithms to the CBSS problem. Section 4 gives the results of numerical simulations comparing the proposed algorithms with the state-of-the-art gradient-based algorithms introduced in [23]. In Section 5, the novel JBD algorithms are shown to be effective in separating vibration sources of the convolutive type, outperforming the JBDOG and JRJBD algorithms [19].

2. Problem formulation and non-orthogonal JBD algorithms

2.1. The joint block diagonalization problem

Let $\{R_k\}_{k=1}^{K} \subset \mathbb{C}^{M \times M}$ be a set of $K$ square matrices (not necessarily Hermitian or positive definite) that can be approximately block diagonalized as:

(1)
$R_k = A D_k A^H + N_k$,

where $A \in \mathbb{C}^{M \times M}$ is a general mixing matrix, $N_k \in \mathbb{C}^{M \times M}$ denotes residual noise, and $(\cdot)^H$ denotes the complex conjugate transpose (replaced by $(\cdot)^T$ in the real domain). $D_k$ $(k = 1, \ldots, K)$ is an $M \times M$ block diagonal matrix whose diagonal blocks are square matrices of any size and whose off-diagonal blocks are zero matrices, i.e.:

(2)
$D_k = \begin{bmatrix} D_{k,11} & \cdots & 0_{1r} \\ \vdots & \ddots & \vdots \\ 0_{r1} & \cdots & D_{k,rr} \end{bmatrix}$,

where $D_{k,ii}$ denotes the $i$th diagonal block, of size $n_i \times n_i$, with $n_1 + n_2 + \cdots + n_r = M$, and $0_{ij}$ denotes the $n_i \times n_j$ null matrix. In the general case considered here, e.g. in the context of CBSS, the diagonal blocks are assumed to have the same size, i.e. $M = n_i \cdot r$. The JBD problem consists of estimating $A$ and $D_k$ $(k = 1, \ldots, K)$ when the matrices $\{R_k\}_{k=1}^K$ are given. Note that the JBD model remains unchanged if one substitutes $A \Lambda P$ for $A$ and $P^{-1} \Lambda^{-1} D_k (P^{-1} \Lambda^{-1})^H$ for $D_k$, where $\Lambda$ is a nonsingular block diagonal matrix whose blocks have the same dimensions as those of $D_k$, and $P$ is an arbitrary block-wise permutation matrix. The JBD model is essentially unique, subject only to these indeterminacies of amplitude and permutation [24].

To this end, our aim is to present a new algorithm to solve the non-orthogonal JBD problem. The cost function suggested in [23], with the noise term neglected, is considered:

(3)
$C_{JBD}(B) = \sum_{k=1}^{K} \left\| \mathrm{OffBdiag}\big(B R_k B^H\big) \right\|_F^2$.

The above cost function can be regarded as the off-diagonal-block error; our aim is to find a non-singular $B$ such that $C_{JBD}(B)$ is as small as possible. Here $\|\cdot\|_F$ is the Frobenius norm, and $B$ stands for the inverse of the matrix $A$ (in the BSS context, $B$ serves as the separating matrix). Consider the square matrix $R_k = (R_{k,ij}) \in \mathbb{C}^{M \times M}$:

$R_k = \begin{bmatrix} R_{k,11} & \cdots & R_{k,1r} \\ \vdots & \ddots & \vdots \\ R_{k,r1} & \cdots & R_{k,rr} \end{bmatrix}$,

where $R_{k,ij}$, for all $i,j = 1, \ldots, r$, are $n_i \times n_j$ matrices (with $n_1 + n_2 + \cdots + n_r = M$), and the two matrix operators $\mathrm{Bdiag}(\cdot)$ and $\mathrm{OffBdiag}(\cdot)$ are defined respectively as:

$\mathrm{Bdiag}(R_k) = \begin{bmatrix} R_{k,11} & \cdots & 0_{1r} \\ \vdots & \ddots & \vdots \\ 0_{r1} & \cdots & R_{k,rr} \end{bmatrix}, \qquad \mathrm{OffBdiag}(R_k) = R_k - \mathrm{Bdiag}(R_k)$.
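As an illustrative numpy sketch (the helper names `bdiag`, `off_bdiag` and `c_jbd` are ours, and the real case is assumed), the two operators and the cost of Eq. (3) can be written as:

```python
import numpy as np

def bdiag(R, sizes):
    """Keep only the diagonal blocks of R; zero everything else.
    `sizes` lists the diagonal block sizes n_1, ..., n_r (summing to M)."""
    out = np.zeros_like(R)
    start = 0
    for n in sizes:
        out[start:start + n, start:start + n] = R[start:start + n, start:start + n]
        start += n
    return out

def off_bdiag(R, sizes):
    """Off-block-diagonal part: OffBdiag(R) = R - Bdiag(R)."""
    return R - bdiag(R, sizes)

def c_jbd(B, Rs, sizes):
    """Cost of Eq. (3): sum_k || OffBdiag(B R_k B^T) ||_F^2 (real case)."""
    return sum(np.linalg.norm(off_bdiag(B @ Rk @ B.T, sizes), 'fro') ** 2
               for Rk in Rs)
```

For an exactly block diagonal $R_k$ and $B = I$ the cost is zero, as expected.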

2.2. Two novel joint block diagonalization algorithms based on the LU and QR factorizations

Any non-singular matrix B admits the LU factorization [30]:

(4)
$B = P \Lambda L U$,

where $L$ and $U$ are $M \times M$ unit lower and unit upper triangular matrices, respectively; a unit triangular matrix is a triangular matrix whose diagonal elements all equal one. $B$ also admits the QR factorization:

(5)
$B = \Lambda Q U$,

where $Q$ is an $M \times M$ orthogonal matrix. Considering the JBD model's indeterminacies, any non-singular square separating matrix can be represented by either of these two decompositions. Here we implement the JBD in the real domain (the case usually encountered in BSS), i.e., in a real triangular and orthogonal basis. It is therefore reasonable to adopt the decomposition of Eq. (4) or (5) and replace the minimization problem of Eq. (3) by two alternating stages involving the following sub-optimizations:

(6)
$\tilde{U} = \arg\min_U \sum_{k=1}^{K} \left\| \mathrm{OffBdiag}\big(U R_k U^T\big) \right\|_F^2$,
(7a)
$\tilde{L} = \arg\min_L \sum_{k=1}^{K} \left\| \mathrm{OffBdiag}\big(L R_k' L^T\big) \right\|_F^2$,
(7b)
$\tilde{Q} = \arg\min_Q \sum_{k=1}^{K} \left\| \mathrm{OffBdiag}\big(Q R_k' Q^T\big) \right\|_F^2$,

where $R_k' = \tilde{U} R_k \tilde{U}^T$, and $\tilde{L}$, $\tilde{U}$ and $\tilde{Q}$ denote the estimates of $L$, $U$ and $Q$, respectively. We adopt a Jacobi-type scheme to solve Eqs. (6), (7a) and (7b) via a set of elementary rotations.
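The factorizations in Eqs. (4) and (5) are easy to check numerically. A minimal numpy sketch (pivoting is omitted from the Doolittle LU for clarity, so a diagonally dominant test matrix is used; the diagonal of the raw upper factor plays the role of $\Lambda$):

```python
import numpy as np

def lu_unit(B):
    """Doolittle LU without pivoting: B = L @ U, with L unit lower triangular.
    Assumes the leading minors of B are non-singular (no pivoting, for clarity)."""
    M = B.shape[0]
    L, U = np.eye(M), B.astype(float).copy()
    for j in range(M - 1):
        for i in range(j + 1, M):
            L[i, j] = U[i, j] / U[j, j]
            U[i, :] -= L[i, j] * U[j, :]
    return L, U

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4)) + 5 * np.eye(4)   # diagonally dominant: safe pivots

L, U = lu_unit(B)
assert np.allclose(L @ U, B)
Lam = np.diag(np.diag(U))          # the diagonal scaling (Lambda in Eq. (4))
U_unit = np.linalg.inv(Lam) @ U    # unit upper triangular factor
assert np.allclose(np.diag(U_unit), 1.0)

Q, R = np.linalg.qr(B)             # QR factorization, as in Eq. (5)
assert np.allclose(Q @ R, B)
assert np.allclose(Q.T @ Q, np.eye(4))
```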

The unit lower triangular Jacobi matrix is denoted $L_{ij}(a)$, where the parameter $a$ occupies position $(i,j)$ with $i > j$; i.e., $L_{ij}(a)$ equals the identity matrix except that the entry indexed $(i,j)$ is $a$. In a similar fashion, we define the unit upper triangular Jacobi matrix $U_{ij}(a)$ with parameter $a$ at position $(i,j)$ with $i < j$. To solve Eqs. (6) and (7a), we first find the optimal $U_{ij}(a)$ and $L_{ij}(a)$ in each iteration. For a fixed pair $(i,j)$, one iteration of the method consists of:

1. Solving Eq. (6) with respect to the parameter $a$ of $U_{ij}(a)$, and updating $R_k \leftarrow U_{ij}(a) R_k U_{ij}(a)^T$ for all $k$ (U-stage);

2. Solving Eq. (7a) with respect to the parameter $a$ of $L_{ij}(a)$, and updating $R_k \leftarrow L_{ij}(a) R_k L_{ij}(a)^T$ for all $k$ (L-stage).

We note that the two proposed non-orthogonal JBD algorithms are both of the above Jacobi type, differing only in the adopted decomposition (LU or QR) and in implementation details. Next, we give the details of the proposed algorithms. Following Eq. (6), we have:

(8)
$f(a) = \arg\min_a \sum_{k=1}^{K} \left\| \mathrm{OffBdiag}\big(U_{ij}(a) R_k U_{ij}(a)^T\big) \right\|_F^2 = \arg\min_a \begin{cases} a^2 R_1 + 2 a R_2 + \mathrm{const}, & i > L, \\ a^2 R_3 + 2 a R_4 + \mathrm{const}, & i \le L, \end{cases}$

where $L$ is the block dimension, $i,j = 1, \ldots, M$ and $i < j$, and:

$R_1 = \sum_{k=1}^{K} \left\| R_k([L{:}i{-}1, i{+}1{:}M], j) \right\|_F^2 + \left\| R_k(j, [L{:}i{-}1, i{+}1{:}M]) \right\|_F^2$,
$R_2 = \sum_{k=1}^{K} R_k(i, [L{:}i{-}1, i{+}1{:}M]) \, R_k(j, [L{:}i{-}1, i{+}1{:}M])^T + \sum_{k=1}^{K} R_k([L{:}i{-}1, i{+}1{:}M], i)^T \, R_k([L{:}i{-}1, i{+}1{:}M], j)$,
$R_3 = \sum_{k=1}^{K} \left\| R_k([L{+}1{:}M], j) \right\|_F^2 + \left\| R_k(j, [L{+}1{:}M]) \right\|_F^2$,
$R_4 = \sum_{k=1}^{K} R_k(i, [L{+}1{:}M]) \, R_k(j, [L{+}1{:}M])^T + \sum_{k=1}^{K} R_k([L{+}1{:}M], i)^T \, R_k([L{+}1{:}M], j)$.

For a matrix $R$, $R(i, \mathrm{index})$ denotes the row vector formed from the $i$th row of $R$ at the columns listed in $\mathrm{index}$; the column vector $R(\mathrm{index}, j)$ is defined similarly.

The optimal $\tilde{a}$ in Eq. (8) is computed as:

(9)
$\tilde{a} = -\dfrac{R_2}{R_1}, \quad i > L, \qquad \text{or} \qquad \tilde{a} = -\dfrac{R_4}{R_3}, \quad i \le L$.

If $R_1 = 0$ or $R_3 = 0$, set $\tilde{a} = 0$; i.e., $C_{JBD}(U)$ cannot be reduced by this particular $U_{ij}(a)$.

For the lower triangular matrices $L_{ij}(a)$, we have a similar result:

(10)
$g(a) = \arg\min_a \sum_{k=1}^{K} \left\| \mathrm{OffBdiag}\big(L_{ij}(a) R_k' L_{ij}(a)^T\big) \right\|_F^2 = \arg\min_a \begin{cases} a^2 R_1' + 2 a R_2' + \mathrm{const}, & i > L, \\ a^2 R_3' + 2 a R_4' + \mathrm{const}, & i \le L, \end{cases}$

where $L$ is the block dimension, $i,j = 1, \ldots, M$ and $i > j$:

$\mathrm{index} = [1{:}i{-}1, i{+}1{:}M]$,
$\mathrm{index}' = \mathrm{index}([L{:}M{-}1])$, $\quad i \le L$,
$\mathrm{index}'' = \mathrm{index}(l)$, $\quad i > L$.

$l$ is a row vector whose elements $l_1$ satisfy $l_1 \le \lceil i/L \rceil L$ and $l_1 \in [1{:}i{-}1, i{+}1{:}M]$, where $\lceil X \rceil$ rounds the elements of $X$ to the nearest integers towards infinity:

$R_1' = \sum_{k=1}^{K} \left\| R_k'(\mathrm{index}', j) \right\|_F^2 + \left\| R_k'(j, \mathrm{index}') \right\|_F^2$,
$R_2' = \sum_{k=1}^{K} R_k'(j, \mathrm{index}') \, R_k'(i, \mathrm{index}')^T + R_k'(\mathrm{index}', j)^T \, R_k'(\mathrm{index}', i)$,
$R_3' = \sum_{k=1}^{K} \left\| R_k'(\mathrm{index}'', j) \right\|_F^2 + \left\| R_k'(j, \mathrm{index}'') \right\|_F^2$,
$R_4' = \sum_{k=1}^{K} R_k'(j, \mathrm{index}'') \, R_k'(i, \mathrm{index}'')^T + R_k'(\mathrm{index}'', j)^T \, R_k'(\mathrm{index}'', i)$.

The optimal value $\tilde{a}$ in Eq. (10) is computed as:

(11)
$\tilde{a} = -\dfrac{R_2'}{R_1'}, \quad i > L, \qquad \text{or} \qquad \tilde{a} = -\dfrac{R_4'}{R_3'}, \quad i \le L$.

If $R_1' = 0$ or $R_3' = 0$, set $\tilde{a} = 0$; i.e., $C_{JBD}(L)$ cannot be reduced by this particular $L_{ij}(a)$. Computing the optimal parameter $\tilde{a}$ requires solving a polynomial of degree 2 in the real domain, which is more efficient than other JBD methods that need to solve a polynomial of degree 4, such as JBDOG, JBDORG and JBD-NCG [22-24].
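The degree-2 subproblem is trivial to solve in closed form; a minimal sketch (the function name `optimal_shear` is ours, and $R_c$, $R_d$ stand for the appropriate pair $R_1, R_2$ or $R_3, R_4$, possibly primed):

```python
def optimal_shear(Rc, Rd):
    """Minimizer of q(a) = a**2 * Rc + 2*a * Rd + const, as in Eqs. (9) and (11).
    Returns 0 when Rc == 0 (the rotation cannot reduce the cost)."""
    return 0.0 if Rc == 0 else -Rd / Rc

# q'(a) = 2*a*Rc + 2*Rd = 0  =>  a = -Rd/Rc; the achieved decrease is -Rd**2/Rc
a = optimal_shear(2.0, 3.0)            # -> -1.5
decrease = a**2 * 2.0 + 2 * a * 3.0    # -> -4.5 = -Rd**2/Rc
```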

In the QR algorithm, we consider the QR decomposition of $B$; the sub-optimization problem of the Q-stage in Eq. (7b) is thus an orthogonal JBD problem, which can be solved by Févotte's Jacobi-type algorithm [19]. Févotte indicated that the behavior of the Jacobi approach depends strongly on the initialization of the orthogonal basis, as well as on the choice of the successive rotations. Here, the algorithm is initialized with the matrix $Q_{JD}$ provided by joint diagonalization of the $R_k$, and at each iteration the couple $(p,q)$ ensuring a maximum decrease of the criterion $C_{JBD}(Q)$ is chosen.

Now that we have obtained the U-stage and the L-stage (Q-stage) of the proposed algorithms, we loop these two stages until convergence is reached. There are several ways to control the convergence of the JBD algorithms. For example, we could stop the iterations when the parameter values in each iteration of the U-stage or L-stage (Q-stage) are small enough, indicating a minute contribution from the elementary rotations and hence convergence. We may also monitor the sum of off-diagonal squared norms $C_{JBD}(B)$ and stop the loops when its change is smaller than a preset threshold. The value $\|LU - I\|_F^2 < \varepsilon$ between two successive complete runs of the U-stage and L-stage is often used as a termination criterion. Here, we stop the iteration when $I_{off} = \sum_{k=1}^{K} \|\mathrm{OffBdiag}(R_k)\|_F^2 / \sum_{k=1}^{K} \|\mathrm{Bdiag}(R_k)\|_F^2 < \varepsilon$, which reflects the relative weight of the off-block-diagonal part with respect to the block-diagonal part. This criterion can be applied to all iterative JBD methods and allows a more intuitive comparison between them; it is therefore more rational and effective. In the remainder of this manuscript, we use one of the following termination criteria:

(st1) $I_{off} < 10^{-7}$;

(st2) $\left| C_{JBD}^{t+1} - C_{JBD}^{t} \right| / C_{JBD}^{t} < 10^{-7}$;

(st3) $\left| I_{conv}^{t+1} - I_{conv}^{t} \right| < 10^{-5}$ (introduced in Section 4).
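Criterion (st1) is straightforward to evaluate; a small numpy sketch (the name `i_off` and the boolean return value are our conventions; the diagonal block sizes are given by `sizes`):

```python
import numpy as np

def i_off(Rs, sizes, eps=1e-7):
    """Relative off-block energy: sum_k ||OffBdiag(R_k)||_F^2 / sum_k ||Bdiag(R_k)||_F^2.
    Returns (value, converged_flag)."""
    num = den = 0.0
    for R in Rs:
        mask = np.zeros_like(R, dtype=bool)
        start = 0
        for n in sizes:
            mask[start:start + n, start:start + n] = True
            start += n
        num += np.sum(R[~mask] ** 2)   # off-block-diagonal energy
        den += np.sum(R[mask] ** 2)    # block-diagonal energy
    val = num / den
    return val, val < eps
```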

We name the novel JBD approaches based on the LU and QR factorizations LUJBD and QRJBD, respectively, and summarize them as follows:

1. Set $B = I$.

2. U-stage: set $U = I$; for $1 \le i < j \le M$:

Find $\tilde{a}_{ij} = \arg\min_a C_{JBD}(U_{ij}(a))$ from Eq. (9);

Update $R_k \leftarrow U_{ij}(\tilde{a}_{ij}) R_k U_{ij}(\tilde{a}_{ij})^T$ and $U \leftarrow U_{ij}(\tilde{a}_{ij}) U$.

3. L-stage (Q-stage): set $L = I$ ($Q = Q_{JD}$); for $1 \le j < i \le M$:

Find $\tilde{a}_{ij} = \arg\min_a C_{JBD}(L_{ij}(a))$ from Eq. (11) ($\theta_{ij} = \arg\min_\theta C_{JBD}(G_{ij}(\theta))$ from [20]);

Update $R_k \leftarrow L_{ij}(\tilde{a}_{ij}) R_k L_{ij}(\tilde{a}_{ij})^T$ ($R_k \leftarrow G_{ij}(\theta_{ij}) R_k G_{ij}(\theta_{ij})^T$)

and $L \leftarrow L_{ij}(\tilde{a}_{ij}) L$ ($Q \leftarrow G_{ij}(\theta_{ij}) Q$).

4. If none of the termination criteria (st1), (st2), (st3) is satisfied, set $B \leftarrow LUB$ ($B \leftarrow QUB$) and go to step 2; otherwise stop.

Using triangular and orthogonal Jacobi matrices, we replace each $M(M-1)/2$-dimensional minimization problem by a sequence of simple one-dimensional problems. Note that for updating $R_k$, the matrix multiplications can be realized by a few vector scalings and vector additions. Hence this costs less time than other non-orthogonal JBD methods [22-24]. The existence and uniqueness of the joint block diagonalizer of this cost function has been proved in [20].
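To make the Jacobi-type scheme concrete, here is a compact numpy sketch of the U- and L-sweeps. It is our own simplified variant: instead of the index formulas of Eqs. (8)-(11), it exploits the fact that the off-block cost after one elementary shear $T_{ij}(a) = I + a\,e_i e_j^T$ is exactly quadratic in $a$ (only on-block entries pick up quadratic terms), and recovers the quadratic's coefficients by sampling; equal block sizes are assumed:

```python
import numpy as np

def cost(Rs, sizes):
    """sum_k ||OffBdiag(R_k)||_F^2 for diagonal blocks of the given sizes."""
    M = Rs[0].shape[0]
    mask = np.zeros((M, M), dtype=bool)
    s = 0
    for n in sizes:
        mask[s:s + n, s:s + n] = True
        s += n
    return sum(np.sum(R[~mask] ** 2) for R in Rs)

def shear_sweep(Rs, B, sizes, lower):
    """One sweep of elementary shears T_ij(a) = I + a * e_i e_j^T.
    lower=False gives the U-stage (i < j), lower=True the L-stage (i > j).
    The off-block cost is exactly quadratic in a, so its coefficients are
    recovered by sampling at a = -1, 0, 1 and minimized in closed form."""
    M = B.shape[0]
    pairs = [(i, j) for i in range(M) for j in range(M)
             if (i > j if lower else i < j)]
    for i, j in pairs:
        def with_shear(a):
            T = np.eye(M)
            T[i, j] = a
            return cost([T @ R @ T.T for R in Rs], sizes)
        c0, cp, cm = with_shear(0.0), with_shear(1.0), with_shear(-1.0)
        c2 = 0.5 * (cp + cm) - c0          # quadratic coefficient
        c1 = 0.5 * (cp - cm)               # linear coefficient
        a = -c1 / (2 * c2) if c2 > 1e-12 else 0.0
        T = np.eye(M)
        T[i, j] = a
        Rs = [T @ R @ T.T for R in Rs]     # update the matrix set
        B = T @ B                          # accumulate the block-diagonalizer
    return Rs, B
```

Alternating the two sweeps and accumulating $B$ reproduces steps 2-4 of the summary above; the closed-form Eqs. (9) and (11) compute the same per-shear minimizer directly and more cheaply.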

2.3. Convergence of the algorithm

By construction, the algorithm ensures a decrease of the criterion $C_{JBD}(B)$ during the iteration process. According to Eq. (8) or (10), we have:

$C_{JBD}(U^{t+1}) - C_{JBD}(U^{t}) = \tilde{a}^2 R_c + 2 \tilde{a} R_d$,

where $R_c$ stands for $R_1$, $R_3$, $R_1'$ or $R_3'$ and $R_d$ for $R_2$, $R_4$, $R_2'$ or $R_4'$, respectively, with $R_c > 0$.

Substituting $\tilde{a}$ from Eq. (9) or (11):

(12)
$C_{JBD}(U^{t+1}) - C_{JBD}(U^{t}) = \tilde{a}^2 R_c + 2 \tilde{a} R_d = -\dfrac{R_d^2}{R_c} \le 0$.

Equality in Eq. (12) holds only when $R_c = 0$ or $R_d = 0$ (in which case $\tilde{a} = 0$); however, usually $R_c \ne 0$ and $R_d \ne 0$ in the CBSS setting discussed in Section 3.

Similarly, $C_{JBD}(L^{t+1}) - C_{JBD}(L^{t}) \le 0$; the corresponding result $C_{JBD}(Q^{t+1}) - C_{JBD}(Q^{t}) \le 0$ has been proved in [20].

At each iteration of the algorithm, the matrices $R_k^{t+1}$ obtained after the rotations are thus 'at least as block diagonal as' the matrices $R_k^{t}$ of the previous iteration. Since every bounded monotonic real sequence converges, the convergence of our algorithm is guaranteed.

3. Application to CBSS

The problem of convolutive BSS (CBSS) occurs in various applications. One typical application is the blind separation of vibration signals, which is studied in this paper as a testbed for the CBSS problem. CBSS consists of estimating a set of unobserved source signals from their convolutive mixtures without a priori knowledge of the sources or of the corresponding mixing system. The CBSS problem can then be solved by means of JBD of a set of covariance matrices. We consider the following discrete-time MIMO model [25]:

(13)
$X(t) = \sum_{l'=0}^{L'} H(l') S(t - l') + N(t)$,

where $t$ is the discrete time index, $l' = 0, \ldots, L'$, and $L'$ denotes the FIR filter length. $S(t) = [s_1(t), \ldots, s_m(t)]^T$ denotes the source signal vector with $m$ sources, and $X(t) = [x_1(t), \ldots, x_n(t)]^T$ is the mixture signal vector obtained from $n$ observed signals. In the linear time-invariant mixing system, the matrix impulse response $H(l') = [h_1(l'), \ldots, h_m(l')]$ consists of the channel impulse responses $h_{ij}(l')$ $(i = 1, \ldots, n$, $j = 1, \ldots, m)$. For the signal received on the $i$th sensor, we apply a sliding window of length $P+1$ and form the column vector:

$x_i(t) = [x_i(t), x_i(t-1), \ldots, x_i(t-P)]^T, \qquad i = 1, 2, \ldots, n.$

Then, stacking the windowed sensor vectors together, we define the observed signal vector:

$\bar{X}(t) = [x_1(t), x_2(t), \ldots, x_n(t)]^T.$

Hence:

(14)
$\bar{X}(t) = H \bar{S}(t) + N(t)$,

where $\bar{S}(t) = [s_1(t), s_2(t), \ldots, s_m(t)]^T$, and:

$s_j(t) = [s_j(t), s_j(t-1), \ldots, s_j(t - L' - P + 1)]^T,$
$H = \begin{bmatrix} H_{11} & \cdots & H_{1m} \\ H_{21} & \cdots & H_{2m} \\ \vdots & \ddots & \vdots \\ H_{n1} & \cdots & H_{nm} \end{bmatrix}.$

$H_{ij}$ is the block element of $H$, of dimension $(P+1) \times (L'+P)$:

$H_{ij} = \begin{bmatrix} h_{ij}(0) & h_{ij}(1) & \cdots & h_{ij}(L'-1) & 0 & \cdots & 0 \\ 0 & h_{ij}(0) & \cdots & h_{ij}(L'-2) & h_{ij}(L'-1) & \cdots & 0 \\ \vdots & & \ddots & & & \ddots & \vdots \\ 0 & \cdots & 0 & h_{ij}(0) & \cdots & h_{ij}(L'-2) & h_{ij}(L'-1) \end{bmatrix}.$
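The banded (Toeplitz-like) structure of $H_{ij}$ is easy to build; a numpy sketch (the helper name `conv_block` is ours):

```python
import numpy as np

def conv_block(h, P):
    """Block H_ij of the reformulated mixing matrix: (P+1) x (L'+P),
    each row holding the impulse response h = [h(0), ..., h(L'-1)]
    shifted right by one sample relative to the row above."""
    Lp = len(h)                       # filter length L'
    H = np.zeros((P + 1, Lp + P))
    for row in range(P + 1):
        H[row, row:row + Lp] = h
    return H

H = conv_block(np.array([1.0, 2.0, 3.0]), P=2)   # L' = 3, P = 2 -> shape 3 x 5
# rows: [1 2 3 0 0], [0 1 2 3 0], [0 0 1 2 3]
```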

The following assumptions concerning the above mixture model Eq. (14) have to be made to ensure that it is possible to apply the proposed algorithms to CBSS [25].

Assumption 1. The source signals are zero mean, individually correlated in time but mutually uncorrelated; i.e., for all time delays $\tau$, the cross-correlation functions $r_{s_i s_j}(t,\tau) = 0$, $i \ne j$, and the auto-correlation functions $r_{s_i s_i}(t,\tau) \ne 0$, $i = 1, \ldots, m$. Meanwhile, the source signals have different auto-correlation functions.

Assumption 2. The sensor noises $N(t)$ are zero mean, independent and identically distributed with the same power $\sigma_N^2$. The noises are assumed to be independent of the sources.

Assumption 3. The mixing matrix $H$ is assumed to have full column rank. This requires that the length of the sliding window $P$ satisfies $n(P+1) \ge m(P+L')$.

Assumption 1 is the core assumption. As shown in [25], it enables us to separate the sources from their convolutive mixtures by diagonalizing the second-order statistics of the reformulated observed signals (addressed below). Assumption 2 enables us to deal easily with the noise, and Assumption 3 guarantees that the mixing system is invertible; it is therefore a necessary condition for complete separation of the source signals.
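Assumption 3 gives a direct lower bound on the window length: for $n > m$, $n(P+1) \ge m(P+L')$ rearranges to $P \ge (mL' - n)/(n - m)$. A small sketch (the helper name `min_window` is ours):

```python
import math

def min_window(n, m, L_prime):
    """Smallest P with n*(P+1) >= m*(P+L'), i.e. P >= (m*L' - n)/(n - m);
    requires more (windowed) sensors than sources, n > m."""
    assert n > m
    return max(0, math.ceil((m * L_prime - n) / (n - m)))

# the experiment in Section 5 uses n = 5 sensors, m = 3 sources, L' = 13:
P = min_window(5, 3, 13)   # -> 17, matching the sliding window used there
```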

Under these assumptions, the spatial covariance matrices of the observations satisfy:

(15)
$R_{\bar{x}}(t, t-\tau_k) = E\big[\bar{x}(t) \bar{x}^T(t-\tau_k)\big] = H R_{\bar{s}}(t, t-\tau_k) H^T + R_N(t, t-\tau_k)$,

where $\tau_k \in \{0, 1, 2, \ldots, q-1\}$ $(k = 1, \ldots, K)$ are the successive time delays and $R_N(t, t-\tau_k)$ is computed according to [26]. It can be deduced from the above assumptions that the matrices in Eq. (15) take the following forms:

$R_{\bar{x}}(\tau_k) = \begin{bmatrix} R_{x_1 x_1}(\tau_k) & R_{x_1 x_2}(\tau_k) & \cdots & R_{x_1 x_n}(\tau_k) \\ R_{x_2 x_1}(\tau_k) & R_{x_2 x_2}(\tau_k) & \cdots & R_{x_2 x_n}(\tau_k) \\ \vdots & \vdots & \ddots & \vdots \\ R_{x_n x_1}(\tau_k) & R_{x_n x_2}(\tau_k) & \cdots & R_{x_n x_n}(\tau_k) \end{bmatrix} \in \mathbb{R}^{(P+1)n \times (P+1)n},$
$R_{\bar{s}}(\tau_k) = \begin{bmatrix} R_{s_1 s_1}(\tau_k) & 0 & \cdots & 0 \\ 0 & R_{s_2 s_2}(\tau_k) & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & R_{s_m s_m}(\tau_k) \end{bmatrix} \in \mathbb{R}^{(P+L')m \times (P+L')m},$

where the block matrices in Rx-(τk) and Rs-(τk) have the following form:

$R_{x_i x_j}(\tau_k) = E\big[x_i(t) x_j^T(t-\tau_k)\big] = \begin{bmatrix} r(0, -\tau_k) & r(0, -\tau_k-1) & \cdots & r(0, -\tau_k-P) \\ r(-1, -\tau_k) & r(-1, -\tau_k-1) & \cdots & r(-1, -\tau_k-P) \\ \vdots & \vdots & \ddots & \vdots \\ r(-P, -\tau_k) & r(-P, -\tau_k-1) & \cdots & r(-P, -\tau_k-P) \end{bmatrix} \in \mathbb{R}^{(P+1) \times (P+1)},$

where $r(-a,-b) = E[x_i(t-a) x_j(t-b)]$; $R_{s_i s_i}(\tau_k)$ has a similar form and is an $(L'+P) \times (L'+P)$ matrix. According to Eq. (15), we obtain a set of matrices $R = \{R_k\}$, $k = 1, \ldots, K$, that can be jointly block diagonalized and satisfy $R_k = A D_k A^T$, where $D_k$ has block diagonal structure. Hence, the JBD methods of Section 2.2 can be used to solve the CBSS problem. Once the joint block diagonalizer $B$ is determined, the recovered signals are obtained, up to a permutation and a filter, by:

(16)
$\hat{S}(t) = B \bar{X}(t)$.

It is worth mentioning that the amplitude and permutation indeterminacies of the JBD algorithms correspond to the well-known indeterminacies of CBSS. In practice, the correlation matrices $R_{\bar{x}}(\tau_k)$ are replaced by their discrete time-series estimates. To obtain good estimates of the discrete correlation matrices, we may divide the observed sequences (the output of the reformulated model Eq. (14)) into samples of appropriate length.
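A sketch of this estimation step (the helper names `windowed` and `lagged_covariances` are ours; a simple sample average over the available frames is assumed in place of the expectation):

```python
import numpy as np

def windowed(x, P):
    """Stack P+1 delayed copies of a 1-D signal: row p holds x(t - p)."""
    T = len(x) - P
    return np.stack([x[P - p:P - p + T] for p in range(P + 1)])

def lagged_covariances(X, P, taus):
    """Estimate R_xbar(tau_k) = E[ xbar(t) xbar(t - tau_k)^T ] from n observed
    channels X (shape n x T), each expanded by a (P+1)-sample sliding window."""
    Xbar = np.vstack([windowed(x, P) for x in X])   # shape n(P+1) x (T-P)
    T = Xbar.shape[1]
    Rs = []
    for tau in taus:
        Rs.append(Xbar[:, tau:] @ Xbar[:, :T - tau].T / (T - tau))
    return Rs
```

Feeding the resulting set $\{R_k\}$ to the JBD stage and applying Eq. (16) then recovers the windowed sources up to the stated indeterminacies.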

4. Numerical simulations

Simulations are now provided to illustrate the behavior and performance of the proposed JBD algorithms (LUJBD, QRJBD). We also compare the proposed algorithms with JBDOG and JBDORG in terms of robustness and efficiency on randomly generated data. To this end, a set of real block diagonal matrices $D_k$ ($k = 1, \ldots, K$) is built from random entries with a Gaussian distribution of zero mean and unit variance. Then, random noise entries with a Gaussian distribution of zero mean and variance $\sigma_N^2$ are added to the off-diagonal blocks of the matrices $D_k$. A signal-to-noise ratio is defined as $\mathrm{SNR} = 10\log(1/\sigma_N^2)$.

To measure the quality of the different algorithms, the following performance index is used [22]:

$I_{conv}(G) = \dfrac{1}{r(r-1)} \left[ \sum_{i=1}^{r} \left( \dfrac{\sum_{j=1}^{r} \|G_{ij}\|_F^2}{\max_l \|G_{il}\|_F^2} - 1 \right) + \sum_{j=1}^{r} \left( \dfrac{\sum_{i=1}^{r} \|G_{ij}\|_F^2}{\max_l \|G_{lj}\|_F^2} - 1 \right) \right],$

where $G_{ij}$ ($i,j \in \{1, \ldots, r\}$) denotes the $(i,j)$th block of $G = \tilde{B} A$. This index, which accounts for the inherent indeterminacies of the BSS problem, will also be used in the CBSS experiments. Clearly, the smaller the performance index $I_{conv}(G)$, the better the separation quality. In the charts, $I_{conv}(G)$ is given in dB, i.e., $I_{conv}(G)_{dB} = 10\log(I_{conv}(G))$. In all simulations, the true mixing matrix $A$ or $H$ is randomly chosen with zero mean and unit variance.
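For reference, the index can be computed directly from the block Frobenius norms; a numpy sketch (the helper name `i_conv` is ours; equal block sizes $M/r$ are assumed, and $G$ must have no all-zero block row or column):

```python
import numpy as np

def i_conv(G, r):
    """Block-wise performance index for G = B_hat @ A with r equal-size blocks;
    it vanishes when G is a block-scaled block permutation (perfect separation)."""
    M = G.shape[0]
    n = M // r
    # E[i, j] = squared Frobenius norm of block G_ij
    E = np.array([[np.linalg.norm(G[i*n:(i+1)*n, j*n:(j+1)*n], 'fro') ** 2
                   for j in range(r)] for i in range(r)])
    rows = np.sum(E.sum(axis=1) / E.max(axis=1) - 1)   # row-wise term
    cols = np.sum(E.sum(axis=0) / E.max(axis=0) - 1)   # column-wise term
    return (rows + cols) / (r * (r - 1))
```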

In Fig. 1 and Fig. 2, we focus on the exactly determined case M= 9, L= 3, K= 20, SNR= 60 dB; the results have been averaged over 100 runs. Fig. 1 shows the percentage of successful runs, where a run is declared successful when the following criteria are satisfied: $I_{conv} < 10^{-7}$, (st1), (st2). Fig. 2 shows the average running time per successful convergence. Comparing the LUJBD and QRJBD approaches with the state-of-the-art JBDOG and JBDORG, we can confirm that the approaches proposed in Section 2.2 improve on the gradient-based methods: the LUJBD and QRJBD methods converge to the global minimum more frequently (Fig. 1) and faster (Fig. 2) than JBDOG and JBDORG. Under the same termination criteria, the LUJBD and QRJBD methods also outperform the JBDOG and JBDORG methods, with the LUJBD method showing the best performance. We can also conclude that the sensitivity to the convergence termination criterion differs between JBD methods, and the percentage of successful runs and the average running time vary accordingly. In other words, an appropriate termination criterion should be chosen to obtain accurate block diagonalization, a good success rate and fast convergence.

In Fig. 3 we focus on the exactly determined case M= 9, L= 3, K= 20, SNR= 60 dB; the results have been averaged over 100 runs. Because the various approaches have different per-step iteration times, we let all methods run for the same amount of time, which is more reasonable than running them for a fixed number of iteration steps as in [23, 25]. The evolution of the performance index versus convergence time shows that the convergence performance of the LUJBD and QRJBD methods is better than that of the JBDOG and JBDORG methods. The LUJBD and QRJBD algorithms need less time for the performance index $I_{conv}$ to reach a stable convergence level, and attain a smaller $I_{conv}$ when all algorithms are run for the same time. In other words, the BSS methods proposed in this paper offer less convergence time, higher convergence precision and faster convergence speed.

Fig. 1. The percentage of successful runs


Fig. 2. The average running time per successful convergence


Fig. 3. Average performance index $I_{conv}$ versus convergence time for the LUJBD, QRJBD, JBDOG, JBDORG algorithms. SNR= 60 dB and K= 20 matrices


In Fig. 4, we discuss how the number of matrices affects the performance. The results have been averaged over 100 runs. We use the same stopping rule, i.e., one of the termination criteria (st1), (st2) and (st3) is satisfied, and set M= 9, L= 3, SNR= 60 dB. The following observations can be made: the more matrices to be jointly block diagonalized, the better the performance; but the computational cost also increases with the number of matrices. The choice of the number of matrices should therefore balance the accuracy of the JBD algorithms against their complexity. From Fig. 4, a matrix number of 20 is a good choice. The LUJBD algorithm, with its better convergence, turns out to be slightly superior to the other JBD algorithms.

Fig. 4. Average $I_{conv}$ versus the number of matrices for the LUJBD, QRJBD, JBDOG, JBDORG algorithms. SNR= 60 dB, averaged over 100 runs


Fig. 5. Average $I_{conv}$ versus SNR for the LUJBD, QRJBD, JBDOG, JBDORG algorithms. K= 20, averaged over 100 runs


With the same stopping rule and other settings as in Fig. 4, one can observe from Fig. 5 that as the SNR grows, the average performance of each algorithm improves, apart from a few fluctuation points. The noise sensitivity of LUJBD is slightly higher than that of the remaining three methods; however, for a given SNR, the average performance index of LUJBD and QRJBD is always better than that of the two gradient-based methods.

Finally, in Fig. 6 and Fig. 7, we study the separation performance versus the matrix dimension M and the block dimension L for both algorithms (K= 20, SNR= 60 dB); the results have been averaged over 100 runs. For a fixed block dimension L, the larger the matrix dimension M, the lower the estimation accuracy of the mixing matrix; and for a fixed matrix dimension M, the larger the block dimension L, the better the estimation accuracy. Therefore, when the dimension of the target matrices increases, the performance index can be improved by increasing the number of matrices K and the block dimension L.

Fig. 6. Average $I_{conv}$ versus convergence time with different M and L for the LUJBD algorithm. SNR= 60 dB and K= 20 matrices


Fig. 7. Average $I_{conv}$ versus convergence time with different M and L for the QRJBD algorithm. SNR= 60 dB and K= 20 matrices


5. Applying CBSS to the vibration source separation

The experimental model is a double-stiffened cylindrical shell, depicted in Fig. 8, which is used to simulate a cabin model. Underwater vibration tests of the double-stiffened cylindrical shell were carried out in an anechoic water pool with a length of 16 m, a width of 8 m and a height of 8 m. In the double-stiffened cylindrical shell, three exciters were arranged in the front part (No. 1 excitation source), middle part (No. 2 excitation source) and rear part (No. 3 excitation source), respectively, to simulate the vibration sources of the internal equipment. Twenty-nine vibration acceleration sensors were arranged on the inner shell, and four accelerometers capturing abundant vibration information were arranged in the vicinity of each excitation point. The locations of the exciters and acceleration sensors are shown in Fig. 9. Only vertical excitation and response were considered in this test, and the model was located 3.75 m underwater. During the test, the three exciters were controlled from the shore, and each exciter was turned on separately, or multiple exciters were operated simultaneously, according to the test requirements. The three exciters emitted continuous sinusoidal signals of equal energy and different excitation frequencies: 5 kHz, 4 kHz and 3 kHz for exciters No. 1, No. 2 and No. 3, respectively. The vibration data were collected once the exciters were in stable operation. The sampling frequency was 16384 Hz and the sampling time was 10 s.

Fig. 11(d), (e), (f) show the mixture signals obtained by three sensors (17, 20, 23) on the inner shell when all of the exciters act simultaneously. Clearly, the mixed signals, with their mutual spectral aliasing, cannot represent the real vibration characteristics or be used directly. This also illustrates that a mixture of vibrations is most often of the convolutive type, which is hard to handle, and moreover that independence among the vibration sources is often not strictly satisfied. It is therefore difficult to separate mechanical vibration sources using traditional source separation methods. The Jacobi-type JBD algorithms based on the LU or QR factorizations proposed in this article can overcome these shortcomings effectively.

To ensure the stability of the solution, the termination criterion ε = abs(Ioff(t+t1) – Ioff(t+t1+1)) < 10^–6 (t1 = 0, …, 9) is selected. We select n = 5 observed signals, i.e., sensors 17, 20, 23, 26 and 29, and m = 3 source signals. The model parameters L', P and K are selected so that the solution accuracy satisfies Ioff ≤ –35 dB: the filter length L' = 13, the sliding window P = 17, and a set of K = 30 covariance matrices with the time lag τk taking linearly spaced values.
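The set of K target covariance matrices can be assembled roughly as below. This is a generic construction for convolutive BSS (stacking L' delayed copies of each sensor signal, then forming lagged covariances), offered as a sketch under assumed windowing conventions rather than the paper's exact procedure:

```python
import numpy as np

def lagged_covariances(x, Lp, K):
    """Build K lagged covariance matrices (JBD target matrices).

    x  : (n, T) sensor signals.
    Lp : stacking depth (the filter length L').
    K  : number of covariance matrices.
    Returns a (K, n*Lp, n*Lp) array of covariance matrices at
    linearly spaced time lags.
    """
    n, T = x.shape
    # Stack delayed copies: rows hold x(t), x(t-1), ..., x(t-Lp+1)
    X = np.vstack([x[:, Lp - 1 - d : T - d] for d in range(Lp)])
    Teff = X.shape[1]
    lags = np.linspace(0, Teff // 4, K, dtype=int)  # linearly spaced lags (assumed range)
    R = np.empty((K, n * Lp, n * Lp))
    for k, tau in enumerate(lags):
        A = X[:, : Teff - tau]
        B = X[:, tau:]
        R[k] = A @ B.T / (Teff - tau)   # sample covariance at lag tau
    return R

# Demo with random data and the parameter values quoted above
rng = np.random.default_rng(0)
x = rng.standard_normal((5, 2000))
R = lagged_covariances(x, 13, 30)
print(R.shape)  # (30, 65, 65)
```

With n = 5 and L' = 13 the target matrices are 65 × 65, and the sources to be recovered correspond to diagonal blocks after joint block diagonalization.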

In Fig. 10, the evolution of the performance index Ioff versus the convergence time shows that the convergence performance of the LUJBD and QRJBD methods is superior to that of the JBDOG and JRJBD methods [20]. The LUJBD and QRJBD algorithms need less time to reach Ioff < –20 dB, and achieve a smaller performance index Ioff for the same convergence time. In other words, the BSS methods proposed in this paper possess higher convergence precision and faster convergence speed. The lower computational accuracy and efficiency of the latter two algorithms are mainly due to the following reasons: (1) the JBDOG algorithm generally suffers from a slow convergence rate and is apt to converge to a local minimum, and its separation accuracy is often hindered by the inversion of ill-conditioned matrices; (2) the joint block-diagonalizer of the JRJBD algorithm is an orthogonal matrix, so a spatial pre-whitening stage, which is likely to introduce a larger error, must be applied; moreover, this error cannot be corrected in the subsequent analysis.
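The paper's exact definition of Ioff is not restated in this section; a commonly used form of such an off-block-diagonal index (the energy outside the diagonal blocks relative to the total energy, in dB) can be sketched as:

```python
import numpy as np

def i_off(matrices, block_size):
    """Off-block-diagonal performance index in dB: energy outside the
    diagonal blocks relative to the total energy, summed over the set
    of (approximately) block-diagonalized target matrices.
    Note: this is an assumed, generic form of the index, not
    necessarily the paper's exact definition."""
    num = 0.0
    den = 0.0
    for R in matrices:
        N = R.shape[0]
        mask = np.zeros_like(R, dtype=bool)
        for b in range(0, N, block_size):
            mask[b:b + block_size, b:b + block_size] = True
        num += np.sum(R[~mask] ** 2)   # off-block energy
        den += np.sum(R ** 2)          # total energy
    return 10.0 * np.log10(num / den)

# A nearly block-diagonal 6x6 matrix (2x2 blocks) scores far below 0 dB
B = np.kron(np.eye(3), np.ones((2, 2)))
R = B + 1e-3 * (1 - B)               # small leakage outside the blocks
print(i_off([R], 2) < -30.0)  # True
```

A perfectly block-diagonal set drives the index toward –∞ dB, which is why thresholds such as –20 dB or –35 dB serve as convergence targets.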

Fig. 11(a), (b), (c) show the source signals acquired by the acceleration sensors near the excitation points (here we choose acceleration sensors 1, 8 and 14, which have a higher signal-to-noise ratio) when each exciter acts separately. Comparing the recovered sources – see Fig. 11(g), (h), (i) for the LUJBD model and (j), (k), (l) for the QRJBD model – with the true ones shows that the proposed separation algorithms are effective, subject only to the indeterminacies of permutation and amplitude.
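Because blind separation recovers sources only up to permutation and amplitude, comparisons like Fig. 11 require matching each estimate to a reference first. A simple (greedy, correlation-based) matching sketch is given below; `match_sources` is a hypothetical helper, and for guaranteed one-to-one matching an optimal assignment would be preferable:

```python
import numpy as np

def match_sources(true_s, est_s):
    """Greedily match estimated sources to references by absolute
    normalized correlation, then correct sign and amplitude.

    true_s, est_s : (m, T) arrays. Returns reordered, rescaled estimates.
    """
    m = true_s.shape[0]
    # Cross-correlation block between references and estimates
    C = np.corrcoef(np.vstack([true_s, est_s]))[:m, m:]
    order = [int(np.argmax(np.abs(C[i]))) for i in range(m)]
    out = est_s[order].astype(float)
    for i in range(m):
        # Least-squares gain fixes both amplitude and sign
        g = np.dot(true_s[i], out[i]) / np.dot(out[i], out[i])
        out[i] = g * out[i]
    return out

# Demo: estimates are a permuted, scaled, sign-flipped copy of the sources
t = np.arange(500) / 500.0
true_s = np.vstack([np.sin(2 * np.pi * f * t) for f in (5.0, 9.0, 13.0)])
est_s = np.vstack([-2.0 * true_s[2], 0.5 * true_s[0], 3.0 * true_s[1]])
out = match_sources(true_s, est_s)
```

After matching, waveform-level comparison (as in Fig. 11) becomes meaningful despite the inherent indeterminacies.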

Fig. 8. The experimental cabin model


Fig. 9. The location of exciters and acceleration sensors


Fig. 10. Average performance index Ioff versus the convergence time for four algorithms. K= 30, L'= 13, P= 17


Fig. 11. Separation results. Sources: a) s1, b) s2, c) s3. Mixtures: d) x1, e) x2, f) x3. Separated sources for LUJBD: g) s1', h) s2', i) s3'. Separated sources for QRJBD: j) s1'', k) s2'', l) s3''


6. Conclusions

In this article, to solve the convolutive BSS (CBSS) problem, we present a class of simple Jacobi-type JBD algorithms based on the LU or QR factorizations. Using Jacobi-type matrices, high dimensional minimization problems are replaced with a sequence of simple one-dimensional problems. The two novel methods, named LUJBD and QRJBD, no longer require the joint block-diagonalizer to be orthogonal, positive definite or symmetric. In addition, we propose a novel convergence criterion which reflects the relative change between the off-block-diagonal and block-diagonal parts. This criterion can be adapted to all iterative methods for solving the JBD problem, and gives a more intuitive comparison between different methods. The computation of the optimal parameter in the LUJBD and QRJBD methods requires solving a polynomial of degree 2 in the real domain, which is more efficient than other JBD methods that need to solve a polynomial of degree 4. Therefore, the two new algorithms cost less time than other non-orthogonal JBD methods, and moreover, their convergence is also guaranteed.
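As a conceptual illustration of reducing the JBD problem to a sequence of one-dimensional problems, the sketch below performs one Jacobi-type sweep with elementary shear (LU-type) matrices. The closed-form degree-2 parameter solution described above is replaced here by a simple grid search over each shear parameter, so this is a hypothetical stand-in, not the authors' LUJBD implementation:

```python
import numpy as np

def elementary_update(Rs, W, i, j, a):
    """Apply the elementary shear T = I + a * e_i e_j^T (i != j) to
    every target matrix: R <- T R T^T, and accumulate W <- T W."""
    N = W.shape[0]
    T = np.eye(N)
    T[i, j] = a
    return [T @ R @ T.T for R in Rs], T @ W

def off_block_energy(Rs, p):
    """Sum of squared entries outside the p x p diagonal blocks."""
    N = Rs[0].shape[0]
    mask = np.ones((N, N), dtype=bool)
    for b in range(0, N, p):
        mask[b:b + p, b:b + p] = False
    return sum(np.sum(R[mask] ** 2) for R in Rs)

def jbd_sweep(Rs, p, grid=np.linspace(-1.0, 1.0, 201)):
    """One Jacobi-type sweep: for each off-block index pair (i, j),
    pick the shear parameter minimizing the off-block energy over a
    grid (a numerical stand-in for the closed-form degree-2 solution)."""
    N = Rs[0].shape[0]
    W = np.eye(N)
    for i in range(N):
        for j in range(N):
            if i // p == j // p:      # skip entries inside a diagonal block
                continue
            costs = [off_block_energy(
                         elementary_update(Rs, np.eye(N), i, j, a)[0], p)
                     for a in grid]
            a_best = grid[int(np.argmin(costs))]
            Rs, W = elementary_update(Rs, W, i, j, a_best)
    return Rs, W

# Demo: matrices that are exactly jointly block-diagonalizable
rng = np.random.default_rng(1)
p, N = 2, 4
A = np.eye(N)
A[2, 0], A[2, 1], A[3, 0], A[3, 1] = 0.4, -0.3, 0.2, 0.5  # unit lower-triangular mixer
Ds = []
for _ in range(3):
    D = np.zeros((N, N))
    for b in range(0, N, p):
        D[b:b + p, b:b + p] = rng.standard_normal((p, p))
    Ds.append(D)
Rs = [A @ D @ A.T for D in Ds]
e0 = off_block_energy(Rs, p)
Rs2, W = jbd_sweep(Rs, p)
e1 = off_block_energy(Rs2, p)
```

Because each pairwise update minimizes the cost over a grid that includes the identity (a = 0), the off-block energy is non-increasing from step to step, which mirrors the guaranteed-convergence argument for the sweep-based methods.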

A series of comparisons of the proposed approaches with the state-of-the-art gradient-based JBD approaches (JBDOG and JBDORG) is carried out through a variety of numerical simulations. The results show that the LUJBD and QRJBD methods converge to the global minimum more frequently, and faster, than JBDOG and JBDORG. Choosing an appropriate termination criterion helps to balance the accuracy of block diagonalization, the success rate and the convergence speed. It can be readily observed that the more target matrices are selected to be jointly block-diagonalized, the better the performance obtained; however, the computational cost also increases with the number of matrices, so the choice of the number of matrices should balance accuracy against complexity. The performance index can also be improved by increasing the block dimension and decreasing the matrix dimension. The two novel JBD algorithms and the JBDOG and JRJBD methods are then applied to the separation of practical vibration sources, and we conclude that the convergence performance and accuracy of the LUJBD and QRJBD methods are superior to those of the JBDOG and JRJBD methods. Finally, comparing the recovered sources with the true ones demonstrates the validity of the proposed algorithms for separating vibration signals of convolutive mixtures.

Acknowledgements

The work described in this paper was supported by the National Natural Science Foundation of China (No. 51609251), "Research on the identification of cross-coupling mechanical vibration sources of warships based on transfer path analysis".

References

  1. Castell M., Bianchi P., Chevreuil A., et al. A blind source separation framework for detecting CPM sources mixed by a convolutive MIMO filter. Signal Processing, Vol. 86, 2006, p. 1950-1967.
  2. Zhang Y., Zhao Y. X. Modulation domain blind speech separation in noisy environments. Speech Communication, Vol. 55, Issue 10, 2013, p. 1081-1099.
  3. Pelegrina G. D., Duarte L. T., Jutten C. Blind source separation and feature extraction in concurrent control charts pattern recognition: novel analyses and a comparison of different methods. Computers and Industrial Engineering, Vol. 92, 2016, p. 105-114.
  4. Popescu T. D. Blind separation of vibration signals and source change detection: application to machine monitoring. Applied Mathematical Modelling, Vol. 34, 2010, p. 3408-3421.
  5. McNeill S. I., Zimmerman D. C. Relating independent components to free-vibration modal responses. Shock and Vibration, Vol. 17, 2010, p. 161-170.
  6. Lee D. S., Cho D. S., Kim K., et al. A simple iterative independent component analysis algorithm for vibration source signal identification of complex structures. International Journal of Naval Architecture and Ocean Engineering, Vol. 7, Issue 1, 2015, p. 128-141.
  7. Li Y. B., Xu M. Q., Wei Y., et al. An improvement EMD method based on the optimized rational Hermite interpolation approach and its application to gear fault diagnosis. Measurement, Vol. 63, 2015, p. 330-345.
  8. Popescu T. D. Analysis of traffic-induced vibrations by blind source separation with application in building monitoring. Mathematics and Computers in Simulation, Vol. 80, 2010, p. 2374-2385.
  9. Babaie-Zadeh M., Jutten C. A general approach for mutual information minimization and its application to blind source separation. Signal Processing, Vol. 85, 2005, p. 975-995.
  10. Jing J. P., Meng G. A novel method for multi-fault diagnosis of rotor system. Mechanism and Machine Theory, Vol. 44, 2009, p. 697-709.
  11. Antoni J. Blind separation of vibration components: principles and demonstrations. Mechanical Systems and Signal Processing, Vol. 19, 2005, p. 1166-1180.
  12. Rhabi M. E., Fenniri H., Keziou A., et al. A robust algorithm for convolutive blind source separation in presence of noise. Signal Processing, Vol. 93, 2013, p. 818-827.
  13. Bousbia-Salah H., Belouchrani A., Abed-Meraim K. Blind separation of convolutive mixtures using joint block diagonalization. International Symposium on Signal Processing and its Applications (ISSPA), Kuala Lumpur, Malaysia, 2001, p. 13-16.
  14. Flury B. D., Neuenschwander B. E. Simultaneous diagonalization algorithm with applications in multivariate statistics. International Series of Numerical Mathematics, Vol. 19, 1994, p. 179-205.
  15. Pham D. T. Blind separation of cyclostationary sources using joint block approximate diagonalization. Lecture Notes in Computer Science, Vol. 4666, 2007, p. 244-251.
  16. Belouchrani A., Amin M. G., Abed-Meraim K. Direction finding in correlated noise fields based on joint block-diagonalization of spatio-temporal correlation matrices. IEEE Signal Processing Letters, 1997, p. 266-268.
  17. Belouchrani A., Abed-Meraim K., Hua Y. Jacobi-like algorithms for joint block diagonalization: application to source localization. Proceedings of the IEEE International Workshop on Intelligent Signal Processing and Communication Systems, Melbourne, 1998.
  18. Abed-Meraim K., Belouchrani A. Algorithms for joint block diagonalization. 12th European Signal Processing Conference, Vienna, Austria, 2004, p. 209-212.
  19. Févotte C., Theis F. J. Pivot selection strategies in Jacobi joint block-diagonalization. Lecture Notes in Computer Science, Vol. 4666, 2007, p. 177-184.
  20. Févotte C., Theis F. J. Orthonormal approximate joint block-diagonalization. Technical Report GET/Telecom Paris 2007D007, 2007, p. 1-24.
  21. Ghennioui H., Fadaili E. M., Thirion-Moreau N., et al. A non-unitary joint block diagonalization algorithm for blind separation of convolutive mixtures of sources. IEEE Signal Processing Letters, Vol. 14, Issue 11, 2007, p. 860-863.
  22. Ghennioui H., Thirion-Moreau N., Moreau E., et al. Non-unitary joint-block diagonalization of complex matrices using a gradient approach. Lecture Notes in Computer Science, Vol. 4666, 2007, p. 201-208.
  23. Ghennioui H., Thirion-Moreau N., Moreau E., et al. Gradient-based joint block diagonalization algorithms: application to blind separation of FIR convolutive mixtures. Signal Processing, Vol. 90, 2010, p. 1836-1849.
  24. Nion D. A tensor framework for nonunitary joint block diagonalization. IEEE Transactions on Signal Processing, Vol. 59, Issue 10, 2011, p. 4585-4594.
  25. Zhang W. T., Lou S. T., Lu H. M. Fast nonunitary joint block diagonalization with degenerate solution elimination for convolutive blind source separation. Digital Signal Processing, Vol. 22, 2012, p. 808-819.
  26. Xu X. F., Feng D. Z., Zheng W. X. Convolutive blind source separation based on joint block Toeplitzation and block-inner diagonalization. Signal Processing, Vol. 90, 2010, p. 119-133.
  27. Zhang W. T., Lou S. T. A recursive solution to nonunitary joint diagonalization. Signal Processing, Vol. 93, 2013, p. 313-320.
  28. Cheng G. H., Li S. M., Moreau E. New Jacobi-like algorithms for non-orthogonal joint diagonalization of Hermitian matrices. Signal Processing, Vol. 128, 2016, p. 440-448.
  29. Zeng T. J., Feng Q. Y. Non-orthogonal joint diagonalization algorithm based on hybrid trust region method and its application to blind source separation. Neurocomputing, Vol. 133, 2014, p. 280-294.
  30. Afsari B. Simple LU and QR based non-orthogonal matrix joint diagonalization. Lecture Notes in Computer Science, Vol. 3889, 2006, p. 1-7.
  31. Gong X. F., Wang K., Lin Q. H., et al. Simultaneous source localization and polarization estimation via non-orthogonal joint diagonalization with vector-sensors. Sensors, Vol. 12, 2012, p. 3394-3417.