With the advent of the big data era, artificial intelligence (AI) methods have become extremely promising and attractive, and extracting useful signals by decomposing mixed signals through blind source separation (BSS) has become correspondingly important. BSS has proven to have prominent applications in multichannel audio processing. For multichannel speech signals, independent component analysis (ICA) requires the source signals to be statistically independent, among other conditions, to allow blind separation. Independent vector analysis (IVA) is an extension of ICA for the simultaneous separation of multiple parallel mixed signals. By exploiting the dependencies between source signal components, IVA solves the permutation ambiguity problem of ICA and plays a crucial role in convolutive blind signal separation.
1. Introduction
With the advent of the big data era, people's access to information has become increasingly abundant. However, researchers usually obtain only the mixed information collected at the receiver, from which the latent signals must be separated or extracted. The resulting problem is how to effectively recover useful signals from the received signals, which leads to the technology of blind source separation (BSS) [1].
The theory of BSS can be traced back to the cocktail party problem, which has attracted much attention for decades: at a cocktail party with all kinds of people chatting around you, you can still concentrate on a single discussion, or on the speech of one person. BSS refers to observing the mixtures of different sources and using only these mixtures to restore the original signals, with minimal prior information about the sources and the mixing process. The many applications of BSS in communication, speech, and medical signal processing have received extensive attention in recent years [2]. It is of great significance for realizing blind estimation, blind equalization, and adaptive signal processing.
Independent component analysis (ICA) [3][4][5] was one of the first methods proposed for BSS. It is a classic BSS technique based on the statistical independence of the source signals and remains the mainstream approach: ICA requires the source signals to be statistically independent of each other. It is an unsupervised, data-driven signal processing technique based on non-Gaussianity maximization that separates instantaneous (time-invariant) mixtures in the time domain.
In real scenarios, however, signals are often mixed with reverberation in the form of convolution, and standard ICA cannot separate such convolutive mixtures. Moreover, processing a convolutive mixture in the time domain entails high computational complexity and a huge amount of computation, and the convergence speed is slow, which greatly degrades separation performance. Exploiting a basic property of convolutive mixing, namely that convolution in the time domain equals multiplication in the frequency domain, a frequency-domain ICA (FD-ICA) [6][7] algorithm was proposed. The entire convolutive mixture is converted from the time domain to the frequency domain by the short-time Fourier transform (STFT) and separated there. Compared with time-domain convolution, the frequency-domain product is convenient to compute, has lower computational complexity, and converges faster.
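As a quick illustration of this property, the following sketch (a synthetic numpy example, not taken from the cited works) verifies numerically that circular convolution in the time domain corresponds to an element-wise product of spectra; a windowed STFT only approximates this identity frame by frame.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256
s = rng.standard_normal(N)   # a source segment
h = rng.standard_normal(N)   # an impulse response of the same length

# Circular convolution computed directly in the time domain:
# x[n] = sum_m s[m] * h[(n - m) mod N]
x = np.array([np.sum(s * h[(n - np.arange(N)) % N]) for n in range(N)])

# The same signal is obtained as a per-frequency product of the spectra.
assert np.allclose(np.fft.fft(x), np.fft.fft(s) * np.fft.fft(h))
```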
To solve the above-mentioned problems of ICA, the independent vector analysis (IVA) [8][9] algorithm was proposed. It generalizes ICA to multiple datasets by exploiting statistical dependencies across datasets, thereby resolving part of the ambiguity, notably the frequency permutation problem, in the separated outputs.
2. Optimizing IVA Algorithm—Optimizing Update Rules
Gradient descent (GD) [10] is one of the most primitive optimization algorithms. It minimizes the objective function I by updating the model parameters in the direction opposite to the gradient of I. The learning rate η determines the size of each step taken toward a local minimum; in other words, the algorithm walks downhill along the slope of the surface generated by the objective function until a valley is reached. A separation method is obtained by minimizing (1), and a simple GD update takes the form

$$W(k+1) = W(k) - \eta\,\frac{\partial I}{\partial W(k)}.$$
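The following minimal sketch illustrates the GD update and the role of the learning rate η. The objective and its gradient are illustrative stand-ins (a simple quadratic), not an actual IVA cost.

```python
import numpy as np

# Gradient of the toy objective I(W) = 0.5 * ||W - target||^2.
def grad_I(W, target):
    return W - target

rng = np.random.default_rng(1)
target = rng.standard_normal((3, 3))
W = np.zeros((3, 3))
eta = 0.3   # learning rate: too large diverges, too small converges slowly

for _ in range(100):
    W = W - eta * grad_I(W, target)   # W <- W - eta * dI/dW

assert np.allclose(W, target, atol=1e-3)   # reached the minimizer
```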
The main variants of GD are batch gradient (BG), stochastic gradient (SG), and natural gradient (NG) descent. Among them, the NG algorithm [11][12] is one of the most effective and commonly used algorithms for BSS. Its main idea is to take the NG direction of the objective function I as the iterative direction so that the algorithm converges quickly, thereby realizing the separation of the source signals. It can be shown that the best descent direction is not the negative ordinary gradient but the negative Riemannian gradient. The method was first proposed in [13][14]; its main idea is to modify the gradient of the original GD method by multiplying it with a scaling matrix Q(k) to obtain faster convergence, as in Equation (2):

$$W(k+1) = W(k) - \eta\,\frac{\partial I}{\partial W(k)}\,Q(k).$$

Taking Q(k) = W^H(k)W(k) yields the natural-gradient update for the separation matrix:

$$W(k+1) = W(k) + \eta\left(\mathbf{I} - \mathrm{E}\!\left[\phi(\mathbf{y})\,\mathbf{y}^{H}\right]\right)W(k),$$

where φ(⋅) denotes the score function of the source model.
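A hedged sketch of this NG update on synthetic instantaneous mixtures follows; the tanh score function and the Laplace (super-Gaussian) sources are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
N, T = 3, 5000
S = rng.laplace(size=(N, T))        # super-Gaussian sources
A = rng.standard_normal((N, N))     # unknown mixing matrix
X = A @ S                           # observed mixtures

W = np.eye(N)
eta = 0.05
for _ in range(200):
    Y = W @ X
    phi = np.tanh(Y)                # score function for super-Gaussian sources
    # NG step: W <- W + eta * (I - E[phi(y) y^T]) W
    W = W + eta * (np.eye(N) - (phi @ Y.T) / T) @ W

# After convergence, W @ A should be close to a scaled permutation matrix.
print(np.round(W @ A, 2))
```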
When solving the objective function, the choice of step size directly affects the convergence speed and accuracy. To speed up convergence, many scholars have optimized and improved the classical NG algorithm. In 2011, Liang et al. [15] proposed a control mechanism for the step size to obtain fast and stable convergence. In 2011, Zhang et al. [16] proposed an NG blind separation algorithm that estimates the score function directly through function approximation, using a linear combination of a set of orthogonal polynomials whose accuracy is measured by the mean squared error. An improved momentum-term method that accelerates convergence was proposed in [17].
In 2018, Fu et al. [18] proposed a blind separation algorithm for IVA based on step-size adaptation. The algorithm initializes the separation matrix using a joint approximate diagonalization of eigenmatrices and adaptively optimizes the step-size parameter; this avoids local convergence, significantly improves the convergence speed, and further improves separation performance. In 2012, Wang et al. [19] proposed a variable-step-size IVA gradient algorithm based on block steepest descent, exploiting the relationship between the iteration step size and the change in the estimated cost function; they also proposed a variable-step-size IVA gradient algorithm based on an estimating function, derived from the relationship between the iteration step size and the change in the separation matrix to be obtained. In 2010, Kim [12] proposed a modified gradient and normalized IVA method with nonholonomic constraints: gradient normalization improves the convergence speed, and nonholonomically constrained gradients with lower computational complexity show better performance while possessing simpler structures than other methods. In 2018, Koldovský et al. [20], building on independent vector extraction (IVE) derived from the IVA algorithm, proposed an IVE algorithm with an adaptive step-size method for complex non-Gaussian scenarios to speed up convergence.
3. Fast Fixed Point Method
The fast fixed-point method was derived by introducing Newton's method. The iterative update rule based on fast fixed-point iteration [21] was first proposed to optimize the objective function of ICA. It provides a very simple algorithm, one that does not depend on any tuning parameters and that converges quickly to the most accurate solution the data allow.
When optimizing a negentropy-based objective function, the simplest approach is GD. Although GD-based methods are simple to use and achieve good separation, their overall convergence is slow and depends on a good choice of the learning-rate sequence, i.e., the step size of each iteration. Although various optimizations of the step-size factor were summarized in the previous section, GD methods still rely on a suitable step size for separation.
In practical applications, it is therefore very important to make the entire convergence process faster and more reliable, and the fast fixed-point iterative algorithm [22] was proposed to achieve this. In fixed-point algorithms, the entire computation is performed in batch or block mode, i.e., a large number of data points are used in each step of the algorithm. The fast fixed-point algorithm has very attractive convergence properties, and in experiments it converges much faster than the commonly used GD methods. At the same time, in environments where fast real-time adaptation is not required, it is a good alternative to adaptive learning rules. In 1997, Hyvärinen [23] described a more heuristic derivation of it.
In 2000, Bingham et al. [24] proposed a FastICA algorithm capable of separating complex-valued linearly mixed source signals, which shows good performance among ICA algorithms. The fast fixed-point method was generalized to the IVA algorithm in [25]; developed from the idea of FastICA, it is used to optimize the traditional IVA algorithm. Under this method, the update of each separation vector is expressed as

$$\mathbf{w}_n \leftarrow \mathrm{E}\!\left[G'\!\big(\|\mathbf{y}_n\|^2\big) + |y_n|^2\,G''\!\big(\|\mathbf{y}_n\|^2\big)\right]\mathbf{w}_n - \mathrm{E}\!\left[y_n^{*}\,G'\!\big(\|\mathbf{y}_n\|^2\big)\,\mathbf{x}\right],$$

where E denotes the expectation, G(⋅) denotes a nonlinear function, G′(⋅) and G″(⋅) are its derivatives, and ∥y_n∥² sums |y_n|² across the datasets. After the updated matrix W is obtained through the update rule, decorrelation needs to be performed to ensure orthogonality:

$$W \leftarrow \big(W W^{H}\big)^{-1/2}\,W,$$

where (⋅)^H denotes the conjugate transpose of (⋅). To be able to apply Newton's method directly and derive a fast algorithm for complex variables, a quadratic Taylor polynomial in complex notation is introduced; using this form of the Taylor series expansion simplifies the derivation and makes Newton's method directly applicable to objective functions of complex-valued variables. In 2000, Yan et al. [26] provided an independent, equivalent derivation.
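The sketch below illustrates the flavor of such fixed-point updates with symmetric decorrelation in the simpler real-valued FastICA setting; the whitening step, the tanh nonlinearity, and the synthetic Laplace sources are assumptions made for illustration, not the exact complex-valued FastIVA recursion.

```python
import numpy as np

rng = np.random.default_rng(3)
N, T = 3, 5000
S = rng.laplace(size=(N, T))
X = rng.standard_normal((N, N)) @ S

# Whitening (ZCA): fixed-point updates assume decorrelated, unit-variance inputs.
d, E = np.linalg.eigh(np.cov(X))
Z = (E / np.sqrt(d)) @ E.T @ X

W = np.linalg.qr(rng.standard_normal((N, N)))[0]   # random orthogonal init
for _ in range(50):
    Y = W @ Z
    g = np.tanh(Y)
    g_prime = 1.0 - g ** 2
    # Fixed-point step: W <- E[g(y) z^T] - diag(E[g'(y)]) W
    W = (g @ Z.T) / T - np.diag(g_prime.mean(axis=1)) @ W
    # Symmetric decorrelation: W <- (W W^T)^(-1/2) W
    dw, Ew = np.linalg.eigh(W @ W.T)
    W = (Ew / np.sqrt(dw)) @ Ew.T @ W
```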
Recently, in 2021, Koldovský et al. [27] proposed an extended fast dynamic independent vector analysis (FastDIVA) algorithm, based on the static hybrid algorithms FastICA and FastIVA, to blindly extract or separate one or more sources from time-varying mixtures. It operates in a source-by-source separation mixture model that allows the desired source to move, with the mixtures taken either in series or in parallel. The algorithm inherits the advantages of FastIVA, exhibits good performance in moving-source separation, and shows superior convergence speed and the ability to separate super-Gaussian and sub-Gaussian signals.
In 2021, Amor et al. [28] used FastDIVA for blind source extraction under a mixture model with a constant separating vector (CSV), showing new potential and good separation performance in three settings: a moving loudspeaker in a noisy environment, extraction of moving brain activity, and a moving source. In 2021, Koldovský et al. [29] proposed a new dynamic IVA algorithm based on a mixture model in which the mixing parameters related to the source of interest (SOI) are time-varying while the separating parameters are time-invariant. The Newton–Raphson method is used to optimize a quasi-likelihood objective; the iterative update is performed without imposing orthogonality constraints, after which orthogonalization is applied. This algorithm is an optimization of the fast fixed-point algorithm and outperforms the gradient algorithm and the auxiliary-function method.
4. Auxiliary Function
The update method based on the auxiliary-function technique likewise involves no tuning parameters such as a step size and is an iterative algorithm with a convergence guarantee. It is a stable and fast update rule derived from the majorize-minimization principle [30][31], which finds a minimum by exploiting the convexity of the function. When the objective function f(θ) is difficult to optimize and the optimization algorithm cannot directly find its optimal solution, an easier-to-optimize surrogate function g(θ) can be found instead; the surrogate is then solved, and the optimal solution of g(θ) approaches the optimal solution of f(θ). In each iteration, a new surrogate function is constructed from the current solution, and optimizing it yields the starting point of the next iteration. After several iterations, solutions closer and closer to the optimum of the original objective are obtained. The technique was first proposed in [32] to accelerate the convergence of the ICA algorithm. This rule consists of two alternating updates:
- The update of the weighted covariance matrix (that is, the auxiliary-function variable).
- The update of the separation matrix, which ensures that the objective function decreases monotonically at each update and finally converges.
Equation (7) is the update of the auxiliary-function variable:

$$V_n = \mathrm{E}\!\left[\frac{U'\big(\|\mathbf{y}_n\|_2\big)}{\|\mathbf{y}_n\|_2}\,\mathbf{x}\,\mathbf{x}^{H}\right],$$

where V_n denotes a weighted covariance matrix of the observed signals, U(⋅) denotes a continuous and differentiable function of a real variable satisfying the conditions of the source model, U′(⋅) usually takes the constant 1, and ∥⋅∥₂ denotes the 2-norm. Equation (8) is the update of the unmixing matrix:

$$\mathbf{w}_n \leftarrow \big(W V_n\big)^{-1}\mathbf{e}_n, \qquad \mathbf{w}_n \leftarrow \frac{\mathbf{w}_n}{\sqrt{\mathbf{w}_n^{H} V_n \mathbf{w}_n}},$$

where e_n is the n-th standard basis vector.
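A minimal numpy sketch of one such pass is shown below, assuming a Laplace source model (U(r) = r, so the weight is 1/∥y_n∥₂) and synthetic STFT-like data of shape (frequencies, channels, frames); it illustrates Equations (7) and (8), not a production implementation.

```python
import numpy as np

rng = np.random.default_rng(4)
F, N, T = 16, 3, 400
X = rng.standard_normal((F, N, T)) + 1j * rng.standard_normal((F, N, T))
W = np.tile(np.eye(N, dtype=complex), (F, 1, 1))   # per-frequency unmixing

for _ in range(20):
    Y = W @ X                                   # (F, N, T) separated estimates
    r = np.sqrt((np.abs(Y) ** 2).sum(axis=0))   # (N, T) frequency-joint norms
    for n in range(N):
        weight = 1.0 / np.maximum(r[n], 1e-12)  # U'(r)/r with U(r) = r
        # Eq. (7): weighted covariance V_n, one matrix per frequency bin.
        V = (weight * X) @ X.conj().transpose(0, 2, 1) / T
        # Eq. (8): IP update w_n = (W V_n)^{-1} e_n, then normalization.
        b = np.zeros((F, N, 1), dtype=complex)
        b[:, n, 0] = 1.0
        w = np.linalg.solve(W @ V, b)[..., 0]   # (F, N)
        denom = np.sqrt(np.einsum('fi,fij,fj->f', w.conj(), V, w).real)
        W[:, n, :] = w.conj() / denom[:, None]  # row n of W is w_n^H
```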
In 2011, Nobutaka Ono [33] applied the auxiliary-function technique to the objective function of the IVA algorithm and derived an efficient update rule suitable for IVA, called AuxIVA. In 2012, Ono [34] proposed an AuxIVA algorithm based on a generalized Gaussian source model or a Gaussian source model with time-varying variance. In 2012 and 2013, Ono [35][36] proposed a faster algorithm for the case of two sources and two microphones that updates two separation vectors simultaneously by solving a generalized eigenvalue problem; compared with the one-by-one update method, it converges faster and performs better. This pairwise update method also applies to the pairwise separation of vectors in the case of three or more sources [37]. In 2014, Taniguchi et al. [38] used the auxiliary-function-based AuxIVA algorithm for online real-time blind speech separation. In experimental comparisons with commonly used real-time IVA algorithms, the online algorithm achieves a higher signal-to-noise ratio without environment-sensitive tuning parameters such as a step-size factor.
In 2021, Brendel et al. [39] further optimized the auxiliary-function-based IVA algorithm at the same computational cost. The convergence speed of the AuxIVA algorithm is enhanced by three methods:
- Approximating the differential term as in the NG algorithm, which turns it into a tuning parameter.
- Approximating the differential term as a matrix using a quasi-Newton method.
- Using the squared-iteration method for acceleration (see the sketch below).
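The following toy sketch shows the idea behind squared-iteration (SQUAREM-style) acceleration of a generic fixed-point map F; the map used here is a simple contraction chosen for illustration, standing in for a full AuxIVA update.

```python
import numpy as np

def F(x):
    # Toy fixed-point map with fixed point x* = [2, 2]; in the real scheme
    # this would be one complete AuxIVA (or EM) update.
    return 0.5 * x + 1.0

x = np.array([10.0, -6.0])
for _ in range(10):
    x1, x2 = F(x), F(F(x))
    r = x1 - x                        # first difference
    v = (x2 - x1) - r                 # second difference
    if np.linalg.norm(v) < 1e-12:     # numerically at the fixed point
        break
    alpha = -np.linalg.norm(r) / np.linalg.norm(v)
    x = x - 2.0 * alpha * r + alpha ** 2 * v   # squared extrapolation step
    x = F(x)                          # stabilizing plain update

print(x)                              # close to [2, 2]
```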
5. EM Method
In signal processing, a common problem is estimating the parameters of a probability distribution function. The situation is more complicated in many parameter-estimation problems because the data needed to estimate the parameters are not directly accessible, or some data are missing. EM-based optimization algorithms are well suited to this class of problems because the EM algorithm produces maximum-likelihood (ML) estimates of the parameters when there is a many-to-one mapping from the underlying distribution to the distribution governing the observations, while taking additive noise into account. The EM algorithm overcomes the problem of analytically intractable solutions and has been widely used in statistics, signal processing, and machine learning [40].
The EM algorithm is an iterative optimization method [41] used to estimate unknown parameters given measurement data. The solution alternates between two steps.
E-step: first assign an initial distribution to each hidden variable empirically, that is, assume values for the distribution parameters. Then, given those parameters, obtain the expectation of the hidden variables for each data tuple, i.e., perform the classification operation. The posterior of the source signals can be obtained as

$$q(\mathbf{s}) \propto p(\mathbf{x} \mid \mathbf{s})\,p(\mathbf{s}),$$

where ∝ denotes proportionality and q denotes the posterior probability.
M-step: calculate the maximum-likelihood values of the distribution parameters based on the classification result, and then, in turn, recompute the expectation of the hidden variables for each data tuple from these values. The update rule for the mixing matrix A takes the standard least-squares form for a linear mixing model,

$$A \leftarrow \left(\sum_{t}\mathbf{x}_t\,\langle\mathbf{s}_t\rangle_q^{H}\right)\left(\sum_{t}\langle\mathbf{s}_t\mathbf{s}_t^{H}\rangle_q\right)^{-1},$$

where ⟨⋅⟩_q denotes expectation over q; the exact expression depends on the source model used.
By repeating the above two steps until the expectations of the hidden variables and the maximum-likelihood parameter values stabilize, the iteration is completed.
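The two steps are illustrated below on a deliberately simple model, a two-component Gaussian mixture with known unit variances and equal priors, which is an illustrative assumption rather than an IVA source model.

```python
import numpy as np

rng = np.random.default_rng(5)
x = np.concatenate([rng.normal(-2, 1, 500), rng.normal(3, 1, 500)])
mu = np.array([-1.0, 1.0])                 # initial parameter guess

for _ in range(50):
    # E-step: posterior q of the hidden component label for every sample,
    # q(z = k | x) proportional to p(x | z = k) p(z = k), equal priors.
    log_lik = -0.5 * (x[:, None] - mu[None, :]) ** 2
    q = np.exp(log_lik)
    q /= q.sum(axis=1, keepdims=True)
    # M-step: maximize the expected complete-data log-likelihood over mu,
    # which gives posterior-weighted means.
    mu = (q * x[:, None]).sum(axis=0) / q.sum(axis=0)

print(np.round(mu, 2))                     # close to (-2, 3)
```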
In 2004 and 2008, Varadhan et al. [42][43] used the squared-iteration method in the EM algorithm to accelerate its convergence. In 2008, Lee et al. [44] derived an expectation-maximization algorithm and used it in the update iteration of the IVA algorithm; the EM algorithm estimates the parameters of the separation matrix and of the unknown sources at the same time, showing good separation performance. In 2010, Hao et al. [45] proposed a unified probabilistic framework for the IVA algorithm with a Gaussian mixture model as the source prior; this flexible prior enables IVA to separate different types of signals, and different EM algorithms are derived and tested for three models: noiseless IVA, online IVA, and noisy IVA. In noiseless IVA, the EM algorithm effectively estimates the unmixing matrix without sensor noise. In online IVA, an online EM algorithm is derived to track moving sources under nonstationary conditions. Noisy IVA includes sensor noise and combines denoising with separation; an EM algorithm suitable for this model is proposed that effectively estimates the model parameters and separates the source signals at the same time.
In 2019, Gu et al. [46] proposed a Gaussian mixture model IVA algorithm with time-varying parameters to accommodate the temporal power fluctuations of nonstationary speech signals, thereby avoiding the pretraining required by the original Gaussian mixture model IVA (GMM-IVA) algorithm, and used a correspondingly improved EM algorithm to estimate the separation matrix and the signal model. The experimental results confirm the method's effectiveness under random initialization and its advantages in separation accuracy and convergence speed. In 2019, Rafique et al. [47] proposed a new IVA algorithm with a Student's t mixture model as the source prior, adapting to the statistical properties of different speech sources; an efficient EM algorithm is derived that jointly estimates the location parameters of the source prior and the demixing matrix, thereby improving the separation performance of the IVA algorithm. In 2020, Tang et al. [48] proposed a complex generalized Gaussian mixture distribution with weighted variance to capture the non-Gaussian and nonstationary properties of speech signals and flexibly characterize real speech; EM-based optimization rules are used to estimate and update the mixture parameters.
6. BCD Method
Coordinate descent (CD) is a gradient-free optimization algorithm. It does not need to compute the gradient of the objective function; it performs a line search along a single dimension at a time, and once a minimum in the current dimension is reached, it cycles through the other dimensions, eventually converging to the optimal solution. However, the algorithm is only suitable for smooth functions; on nonsmooth functions it may get stuck at a non-stationary point and fail to converge. In 2015, Wright [49] described block coordinate descent (BCD), a generalization of coordinate descent that decomposes the original problem into multiple subproblems by optimizing a subset of variables at a time; the order of updates during descent can be deterministic or random. The algorithm is mainly used for nonconvex problems whose global optimum is difficult to obtain.
From BCD, two methods with closed-form update formulas have been developed for the IVA-based BSS algorithm [50]: the iterative projection (IP) and iterative source steering (ISS) methods.
6.1. Iterative Projection
The IVA algorithm based on iterative projection was first introduced in the AuxIVA
[33] algorithm.
This update rule is derived by solving a quadratic system of equations obtained by differentiating the cost function with respect to the separation vector. In 2004, Dégerine et al.
[51] also proposed a similar scheme in the context of semiblind Gaussian source components. In 2016, Kitamura et al.
[52] used the IP algorithm in a BSS algorithm combining IVA and NMF, which provided good convergence speed and separation effect. In 2018, Yatabe et al.
[53] proposed an alternative to the AuxIVA-IP algorithm based on proximal splitting. In 2021, Nakashima et al.
[54] built on IP by extending the update from one row of the separation matrix at a time to two rows per update, resulting in the faster IP-2.
In 2020, Ikeshita et al. [55] derived IP-1 and IP-2 and used these two update rules to accelerate the OverIVA algorithm, forming the OverIVA-IP and OverIVA-IP2 update rules. In 2021, Scheibler [56] proposed iterative projection with adjustment (IPA) and a Newton conjugate-gradient (NCG) method to solve the hybrid exact-approximate diagonalization (HEAD) problem. IPA adopts a multiplicative-update form: the current separation matrix is multiplied by a rank-2 perturbation of the identity matrix. The method jointly updates one unmixing filter and applies an additional rank-one update to the remainder of the unmixing matrix. Simply put, the IPA optimization rule combines the IP and ISS methods, updating one row and one column of the matrix in each update and performing IP- and ISS-style updates jointly; it outperforms both IP and ISS.
6.2. Iterative Source Steering
ISS [57] is an alternative to IP. Although IP offers good performance and fast convergence, each iteration requires recomputing a covariance matrix and inverting it for each source, which greatly increases the overall complexity of the algorithm; the cost grows as the cube of the number of microphones. In addition, matrix inversion is a numerically delicate operation that can lead to unstable convergence during iteration. The ISS algorithm proposed on this basis effectively reduces the computational cost and complexity of IP, while minimizing the same cost function as the AuxIVA algorithm.
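A hedged single-frequency sketch of an ISS pass is given below. The rank-1 update form and the special-cased diagonal coefficient follow the ISS idea described above; the synthetic data, the Laplace-style weights, and the per-bin activation are simplifying assumptions (a real IVA implementation would use frequency-joint norms across all bins).

```python
import numpy as np

rng = np.random.default_rng(6)
N, T = 3, 1000
X = rng.standard_normal((N, T)) + 1j * rng.standard_normal((N, T))
W = np.eye(N, dtype=complex)
Y = W @ X

for k in range(N):
    r = np.maximum(np.abs(Y), 1e-12)            # per-source activations (N, T)
    phi = 1.0 / r                               # Laplace-model weights
    num = (phi * Y * Y[k].conj()).mean(axis=1)  # E[phi_n y_n y_k^*]
    den = (phi * np.abs(Y[k]) ** 2).mean(axis=1)  # E[phi_n |y_k|^2]
    v = num / den
    v[k] = 1.0 - 1.0 / np.sqrt(den[k])          # special-cased k-th entry
    W = W - np.outer(v, W[k])                   # rank-1 update, no inversion
    Y = Y - np.outer(v, Y[k])                   # same update applied to Y
```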
The ISS update rule, which requires no matrix inversion, has been used in a new method for joint dereverberation and BSS [58]. This is a method based on the ILRMA framework that combines the inversion-free and low-complexity advantages of the ISS algorithm to achieve efficient BSS. In 2021, Du et al. [59] proposed a computationally efficient optimization algorithm for BSS of overdetermined mixtures, an improved ISS algorithm for the OverIVA algorithm, namely OverIVA-ISS. The algorithm combines the technique of OverIVA-IP with that of AuxIVA-ISS, is more computationally efficient than the OverIVA-IP algorithm, and guarantees convergence; the computational complexity is reduced from O(M²) to O(MN).
The overall performance of the ISS algorithm is better than IP but inferior to IP-2; therefore, an ISS-2 algorithm was proposed. In 2022, Ikeshita et al. [60] extended the ISS algorithm to ISS-2, which retains the low time complexity of ISS while achieving separation performance comparable to IP-2.
7. EVD Method
The eigenvalue decomposition (EVD) method finds the closest approximating matrix to the original matrix. In the EVD-based optimization update rule, the separation vector is obtained from an eigenpair of a weighted covariance matrix: λ_M and u_M denote its smallest eigenvalue and the corresponding eigenvector, respectively, and the update selects u_M, suitably scaled, as the new separation vector.
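The core numerical operation is illustrated below: extracting the smallest eigenpair of a Hermitian matrix with numpy. The matrix here is a synthetic stand-in for the algorithm's weighted covariance.

```python
import numpy as np

rng = np.random.default_rng(7)
B = rng.standard_normal((4, 4))
C = B @ B.T + np.eye(4)                # symmetric positive definite stand-in

eigvals, eigvecs = np.linalg.eigh(C)   # eigenvalues in ascending order
lam_min, u_min = eigvals[0], eigvecs[:, 0]   # lambda_M and u_M

# Sanity check: C u = lambda u holds for the smallest eigenpair.
assert np.allclose(C @ u_min, lam_min * u_min)
```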
The IVA algorithm based on the EVD update rule was proposed in [61] as the fast independent vector extraction (FIVE) algorithm. Experimental comparison with the OverIVA and AuxIVA algorithms shows that it attains the optimal solution within only a few iterations and far outperforms the other algorithms in convergence. In 2021, Brendel et al. [62] extended the EVD update rule to an IVA source-extraction algorithm with an SOI mechanism; the proposed update rule achieves fast convergence at lower computational cost and outperforms the IP update rule.