Optimization of Independent Vector Analysis Algorithms

With the advent of the era of big data, artificial intelligence (AI) methods have become highly promising and attractive. Extracting useful signals by decomposing various mixed signals through blind source separation (BSS) has become extremely important. BSS has proven to have prominent applications in multichannel audio processing. For multichannel speech signals, independent component analysis (ICA) requires a certain statistical independence of the source signals, among other conditions, to allow blind separation. Independent vector analysis (IVA) is an extension of ICA for the simultaneous separation of multiple parallel mixed signals. IVA exploits the dependencies among source signal components, resolves the permutation ambiguity caused by ICA, and plays a crucial role in solving convolutive blind signal separation problems.

  • blind source separation (BSS)
  • independent vector analysis (IVA)
  • optimization update rule

1. Introduction

With the advent of the era of big data, the ways people obtain information are becoming increasingly rich. However, researchers usually obtain only the mixed information collected by receivers and need to separate or extract the underlying signals from the whole mixture. The question that follows is how to effectively recover the useful signals from the received signals, which leads to the techniques related to blind source separation (BSS) [1].
The theory of BSS can be traced back to the cocktail party problem, which has attracted attention for decades. The cocktail party problem is that, when you are at a cocktail party with all kinds of people chatting around you, you can focus on only one of the discussions, or on the speech of a single person. BSS refers to observing the mixed signals of different sound sources and using these mixtures to recover the original signals, with very little prior information about the source signals or their mixing. In recent years, the numerous applications of BSS in communications, speech, and medical signal processing have received wide attention [2]. Achieving blind estimation, blind equalization, and adaptive signal processing through this blind property is of great significance.
Independent component analysis (ICA) [3,4,5] was one of the first and most important methods proposed for BSS. It is a classic BSS technique based on the statistical independence of the source signals and is the mainstream technique of BSS. ICA requires that the source signals be statistically independent of one another. It is an unsupervised, data-driven signal processing technique based on non-Gaussianity maximization, used to separate time-invariant mixtures in the time domain.
However, in real scenarios, signals are usually mixed convolutively with reverberation, and ICA cannot separate this common form of convolutive mixing. Moreover, processing convolutive mixtures in the time domain entails high computational complexity, a heavy computational load, and slow convergence, which greatly degrades separation performance. Exploiting the property of convolutive mixing that convolution in the time domain equals multiplication in the frequency domain, the frequency-domain ICA (FD-ICA) [6,7] algorithm was proposed. The whole convolutive mixture is transformed from the time domain to the frequency domain by the short-time Fourier transform (STFT) and separated there. Compared with time-domain convolution, the frequency-domain product has the advantages of convenient computation, low computational complexity, and fast convergence.
To address the above problems of ICA, the independent vector analysis (IVA) [8,9] algorithm was proposed. It generalizes ICA to multiple datasets by exploiting the statistical dependencies between them, resolving some of the indeterminacies in the separated outputs. The method preserves the dependency within each source vector during learning while minimizing the dependency between different source vectors.

2. Gradient Descent

GD [21] is one of the most basic optimization algorithms. Gradient descent minimizes an objective function I by updating the model parameters in the direction opposite to the gradient of I. The learning rate η determines the size of the steps taken to reach a local minimum; in other words, we walk downhill along the slope of the surface generated by the objective function until we reach a valley. Separation is obtained through this minimization, and a simple GD method is written as:
\Delta W^{(k)} = -\frac{\partial I}{\partial W^{(k)}}
Its main variants are batch gradient (BG), stochastic gradient (SG), and natural gradient (NG). Among them, the NG algorithm [22,23] is one of the most effective and commonly used algorithms for solving the BSS problem. Its main idea is to take the NG direction of the objective function I as the iteration direction so that the algorithm converges quickly, thereby achieving separation of the source signals. Moreover, it has been proven that the best descent direction is not the "negative" ordinary gradient direction but the "negative" Riemannian gradient. It was first proposed in [24,25], and its main idea is to multiply by a scaling matrix Q^{(k)} that modifies the gradient of the original GD method to obtain faster convergence, as in the equation:
\Delta W^{(k)} = -\frac{\partial I}{\partial W^{(k)}}\, Q^{(k)}
The separation matrix is updated as:
W^{(k)} \leftarrow W^{(k)} + \eta\, \Delta W^{(k)}
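
As a concrete illustration of this update, the following NumPy sketch performs one NG-style iteration per frequency bin, using the multivariate score function of a spherical super-Gaussian source prior. The function name, the default step size, and the small regularization constant are illustrative assumptions, not the exact rule of any cited paper.

```python
import numpy as np

def ng_iva_step(W, X, eta=0.1):
    """One natural-gradient IVA update (a minimal sketch).

    W : (K, N, N) separation matrices, one per frequency bin k
    X : (K, N, T) observed mixtures (bin, channel, frame)
    eta : step size; as discussed above, convergence hinges on this choice
    """
    K, N, T = X.shape
    Y = np.einsum('knm,kmt->knt', W, X)                 # current source estimates
    # multivariate score of a spherical super-Gaussian prior couples all bins
    r = np.sqrt(np.sum(np.abs(Y) ** 2, axis=0)) + 1e-12  # (N, T)
    Phi = Y / r                                          # phi(y), bin by bin
    for k in range(K):
        # natural-gradient direction: (I - E[phi(y) y^H]) W
        G = np.eye(N) - (Phi[k] @ Y[k].conj().T) / T
        W[k] = W[k] + eta * G @ W[k]
    return W
```
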
Traditional GD algorithms and their variants cannot avoid the choice of the step size η when optimizing the objective function, and this choice directly affects the convergence speed and accuracy. To speed up convergence, many researchers have optimized and improved the classic NG algorithm. In 2011, Liang et al. [26] proposed a control mechanism that adjusts the step size to obtain fast and stable convergence. In 2011, Zhang et al. [27] proposed an NG blind separation algorithm that directly estimates the score function by function approximation, using a linear combination of a set of orthogonal polynomials to approximate the score function, with performance measured by the mean square error. An improved momentum-term method was proposed in [28] to accelerate the convergence of the algorithm.
In 2018, Fu et al. [29] proposed an IVA blind separation algorithm based on step-size adaptation. The algorithm initializes the separation matrix with the joint approximate diagonalization of eigenmatrices algorithm and adaptively optimizes the step-size parameter. It not only avoids local convergence but also significantly speeds up convergence and further improves separation performance. In 2012, Wang et al. [30] proposed a variable-step-size IVA gradient algorithm based on maximum block-speed steepest descent, following the changing relationship between the iteration step size and the estimated cost function. In addition, based on the relationship between the iteration step size and the change of the separation matrix to be obtained, a variable-step-size IVA gradient algorithm based on an estimating function was proposed. In 2010, Kim [23] proposed a modified-gradient and normalized IVA method with a nonholonomic constraint. Gradient normalization improves the convergence speed, and the nonholonomic-constraint gradient, with its lower computational complexity, shows better performance and a simpler structure than other methods. In 2018, Koldovský et al. [31], building on independent vector extraction (IVE) derived from the IVA algorithm, proposed an IVE algorithm with an adaptive step-size method for complex non-Gaussian scenarios to speed up convergence.

3. Fast Fixed-Point Method

The fast fixed-point method is derived by introducing Newton's method. An iterative update rule based on the fast fixed point [32] was first proposed to optimize the objective function of ICA. It provides a very simple algorithm that does not depend on any user-defined parameters and converges quickly to the most accurate solution allowed by the data.
When optimizing a negentropy-based objective function, the simplest approach is GD. Although GD-based methods achieve good separation and are relatively simple to use, their overall convergence is slow and depends on a good choice of the learning-rate sequence, i.e., the step size of each iteration. Despite the various step-size optimizations summarized in the previous section, GD methods still rely on a suitable step size for separation.
Therefore, in practical applications, it is important to make the whole convergence process faster and more reliable. To this end, a fast fixed-point iterative algorithm [33] was proposed. In the fixed-point algorithm, the whole computation is performed in batch or block mode, i.e., a large number of data points are used in one step of the algorithm. The fast fixed-point algorithm has very attractive convergence properties and, in experiments, converges much faster than the commonly used GD methods. At the same time, in environments where fast real-time adaptation is not needed, this method is a good alternative to adaptive learning rules. In 1997, Hyvärinen [34] described a more heuristic derivation.
In 2000, Bingham et al. [35] proposed a FastICA algorithm capable of separating complex-valued linearly mixed source signals, which showed good performance within the ICA framework. Similarly, for the IVA algorithm, a generalized fast fixed-point method [36], developed from the idea of FastICA, was used to optimize the traditional IVA algorithm. Under this method, the update is expressed as:
w_n^{(k)} \leftarrow E\!\left[ G'\!\left( \sum_k \big| y_n^{(k)} \big|^2 \right) + \big| y_n^{(k)} \big|^2\, G''\!\left( \sum_k \big| y_n^{(k)} \big|^2 \right) \right] w_n^{(k)} - E\!\left[ \left( y_n^{(k)} \right)^{*} G'\!\left( \sum_k \big| y_n^{(k)} \big|^2 \right) x^{(k)} \right]
where E denotes the expectation, G denotes a nonlinear function, and
G\!\left( \sum_k \big| y_n^{(k)} \big|^2 \right) = -\log g_{s_n}(y_n)
After the updated matrix W is obtained through the update rule, decorrelation is needed to guarantee orthogonality, as follows:
W^{(k)} \leftarrow \left( W^{(k)} \left( W^{(k)} \right)^{H} \right)^{-1/2} W^{(k)}
where (·)^H denotes the conjugate transpose. To apply Newton's method directly and derive a fast algorithm for complex variables, a quadratic Taylor polynomial in complex notation is introduced. Using this form of the Taylor series expansion makes the derivation simpler and is useful for applying Newton's method directly to objective functions of complex-valued variables. In 2000, Yan et al. [37] provided an independent, equivalent derivation.
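
The sketch below shows how the fixed-point update and the symmetric decorrelation step above might be combined in one sweep. It assumes pre-whitened observations and the particular nonlinearity G(r) = √r; both choices, and all names, are illustrative assumptions rather than the exact algorithm of [36].

```python
import numpy as np

def fastiva_step(W, X):
    """One FastIVA-style fixed-point sweep plus symmetric decorrelation.

    Assumes pre-whitened X and G(r) = sqrt(r), so that
    G'(r) = 0.5 r^{-1/2} and G''(r) = -0.25 r^{-3/2}.
    Convention: row n of W[k] is w_n^H, so Y = W X gives y_n = w_n^H x.
    """
    K, N, T = X.shape
    Y = np.einsum('knm,kmt->knt', W, X)
    r = np.sum(np.abs(Y) ** 2, axis=0) + 1e-12       # r_n = sum_k |y_n^(k)|^2
    G1 = 0.5 / np.sqrt(r)                            # G'(r_n), shape (N, T)
    G2 = -0.25 * r ** (-1.5)                         # G''(r_n)
    for k in range(K):
        for n in range(N):
            yn = Y[k, n]
            wn = W[k, n].conj()                      # the filter w_n itself
            a = np.mean(G1[n] + np.abs(yn) ** 2 * G2[n])
            b = np.mean(yn.conj() * G1[n] * X[k], axis=1)   # E[y_n^* G' x]
            W[k, n] = (a * wn - b).conj()            # store row as w_n^H
        # symmetric decorrelation: W <- (W W^H)^{-1/2} W, computed via SVD
        U, _, Vh = np.linalg.svd(W[k])
        W[k] = U @ Vh
    return W
```
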
Recently, in 2021, Koldovský et al. [38] proposed fast dynamic independent vector analysis (FastDIVA), an extension of the FastICA- and FastIVA-based static mixing algorithms, for blindly extracting or separating one or more sources from time-varying mixtures. In the source-by-source separation mixture model, which allows the desired sources to move, the mixtures are processed either in series or in parallel. The algorithm inherits the advantages of FastIVA, performs well in separating moving sources, and exhibits superior convergence speed and the ability to separate super-Gaussian and sub-Gaussian signals.
In 2021, Amor et al. [39] used FastDIVA for blind source extraction in a mixture model with a constant separating vector (CSV), showing new potential and good separation performance in three settings: moving speakers in a noisy environment, extraction of moving brain activity, and moving sources. In 2021, Koldovský et al. [40] proposed a new dynamic IVA algorithm based on a mixture model in which the mixing parameters related to the source of interest (SOI) are time-varying while the separating parameters are time-invariant. The objective function is optimized on the basis of a quasi-likelihood by the Newton-Raphson method, and the iterative updates are performed without imposing orthogonality constraints, with orthogonalization carried out afterwards. This algorithm is an optimization of the fast fixed-point algorithm and outperforms the gradient and auxiliary-function methods in performance.

4. Auxiliary Function Method

The update method based on the auxiliary function technique is likewise free of tuning parameters such as a step size; it is an iterative algorithm with a convergence guarantee. It is a stable and fast update rule derived from the majorize-minimization principle [10,49], which finds the minimum by exploiting the convexity of a surrogate function. When the objective function f(θ) is difficult to optimize and the optimization algorithm cannot directly find its optimal solution, an easy-to-optimize surrogate function g(θ) can be found instead. The surrogate function is then solved, and the optimal solution of g(θ) approaches the optimal solution of f(θ). In each iteration, a new surrogate function for the next iteration is constructed from the current solution, and optimizing this new surrogate yields the starting point of the next iteration. After several iterations, a solution closer and closer to the optimum of the original objective function is obtained. This approach was first proposed in the literature [41] to accelerate the convergence of the ICA algorithm. The rule consists of two alternating updates:
  • The update of the weighted covariance matrix (that is, the auxiliary function variable).
  • The update of the separation matrix ensures that the objective function decreases monotonically at each update and finally achieves convergence.

Equation (12) is the auxiliary function variable update:

V_n = E\!\left[ \frac{U\!\left( \| y_n \|_2 \right)}{\| y_n \|_2}\, x^{(k)} \left( x^{(k)} \right)^{H} \right]

Here, V_n denotes a weighted covariance matrix of the observed signals, U(·) denotes a continuous and differentiable function of a real variable satisfying the required conditions (usually taken as the constant 1), and ‖·‖₂ denotes the 2-norm. Equation (13) is the update of the unmixing matrix:

w_n^{(k)} = \frac{ \left( W V_n \right)^{-1} e_n }{ \sqrt{ e_n^{T} \left( W^{-H} V_n^{-1} W^{-1} \right) e_n } }
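
A minimal sketch of one AuxIVA-style iteration implementing the two updates above (the weighted covariance V_n, then the closed-form separation-vector solve of Equation (13), written as a solve followed by rescaling) might look as follows. The choice U = 1, the convention that row n of W equals w_n^H, and all names are assumptions made for illustration.

```python
import numpy as np

def auxiva_step(W, X):
    """One AuxIVA iteration (sketch): auxiliary update, then IP solve.

    W : (K, N, N) separation matrices, X : (K, N, T) observations.
    """
    K, N, T = X.shape
    Y = np.einsum('knm,kmt->knt', W, X)
    r = np.sqrt(np.sum(np.abs(Y) ** 2, axis=0)) + 1e-12  # ||y_n||_2 per frame
    for k in range(K):
        for n in range(N):
            # auxiliary-variable update: V_n = E[(U(r_n)/r_n) x x^H], U = 1
            Vn = (X[k] / r[n][None, :]) @ X[k].conj().T / T
            # separation-vector update: w_n = (W V_n)^{-1} e_n, then rescale
            en = np.zeros(N)
            en[n] = 1.0
            wn = np.linalg.solve(W[k] @ Vn, en)
            wn /= np.sqrt(np.real(wn.conj() @ Vn @ wn))
            W[k, n] = wn.conj()                          # store row as w_n^H
    return W
```
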

In 2011, Nobutaka Ono [42] used the auxiliary function technique in the objective function of the IVA algorithm and similarly derived an efficient update rule suitable for the IVA algorithm, called AuxIVA. In 2012, Nobutaka Ono [43] proposed an AuxIVA algorithm based on a generalized Gaussian source model or a Gaussian source model with time-varying variance. In 2012 and 2013, Nobutaka Ono [44,45] proposed a faster algorithm that can update two separation vectors simultaneously by solving the generalized eigenvalue problem for the AuxIVA algorithm with two sources and two microphones. Compared with the one-by-one update method, this method has faster convergence speed and better performance. This pairwise update method is also applicable to the pairwise separation of vectors in the case of three or more sources [46]. In 2014, Taniguchi et al. [47] used the AuxIVA algorithm based on the auxiliary function method for online real-time blind speech separation. In experimental comparisons with commonly used real-time IVA algorithms, the proposed online algorithm achieves a higher signal-to-noise ratio without environment-sensitive tuning parameters such as step factor.
In 2021, Brendel et al. [48] further optimized the IVA algorithm based on auxiliary functions under the same computational cost. The convergence speed of the AuxIVA algorithm is enhanced by three methods:
  • Replace the differential term appearing in the NG approximation with a tuning parameter.
  • Approximate the differential term as a matrix using the quasi-Newton method.
  • Use the square iteration method to speed it up.

5. EM Method

In signal processing, a common problem is estimating the parameters of a probability distribution function. The situation is more complicated in many parameter estimation problems because the data needed to estimate the parameters are not directly accessible, or some data are missing. EM-based optimization algorithms are well-suited for solving this class of problems because the EM algorithm produces maximum likelihood (ML) estimates of the parameters when there is a many-to-one mapping from the underlying distribution to the distribution governing the observations, while taking additive noise into account. The EM algorithm overcomes the problem of solutions that cannot be obtained analytically and has been widely used in statistics, signal processing, and machine learning [50].
The EM algorithm is an iterative optimization method [51] that is used to estimate some unknown parameters given measurement data. The solution is divided into two steps.
E-step: First assign an initial distribution to each hidden variable empirically, that is, assume distribution parameters. Then, according to the parameters of the distribution, the expectation of the hidden variables in each data tuple can be obtained, that is, the classification operation is performed. The posteriors of the source signal can be obtained by
\log q\!\left( x_1^{(k)}, \ldots, x_N^{(k)} \mid s_1^{(k)}, \ldots, s_N^{(k)} \right) \propto \log g\!\left( y_1^{(k)}, \ldots, y_N^{(k)} \mid x_1^{(k)}, \ldots, x_N^{(k)} \right) + \left( \log g\!\left( x_1^{(k)} \mid s_1^{(k)} \right) + \cdots + \log g\!\left( x_N^{(k)} \mid s_N^{(k)} \right) \right) + \mathrm{const.}
where ∝ denotes proportionality to the preceding term and q denotes the posterior probability.
M-step: Calculate the maximum likelihood estimate of the distribution parameters (vector) based on the classification result, and then in turn recalculate the expectation of the hidden variable for each data tuple based on this estimate. The update rule for the mixing matrix A is
A^{(k)} = \left( \sum_k \left\langle y^{(k)} \left( x^{(k)} \right)^{T} \right\rangle_q \right) \left( \sum_k \left\langle x^{(k)} \left( x^{(k)} \right)^{T} \right\rangle_q \right)^{-1}
where ⟨·⟩_q denotes the expectation over q.
By repeating the above two steps, the whole iteration is completed when the expectation of the hidden variables and the maximum likelihood values of the parameters become stable.
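
To make the E-/M-step structure concrete, the sketch below runs EM on a deliberately simple linear-Gaussian model y = Ax + noise, for which both the posterior moments and a mixing-matrix update of the form above are available in closed form. The Gaussian source prior and fixed noise variance are simplifying assumptions; the IVA works discussed below use richer priors (GMM, Student's t).

```python
import numpy as np

def em_mixing_matrix(Y, N_src, n_iter=50, sigma2=0.1):
    """EM estimate of a mixing matrix A in y = A x + noise (toy sketch).

    Linear-Gaussian model: x ~ N(0, I), isotropic noise of variance sigma2.
    Y : (M, T) observed frames.
    """
    M, T = Y.shape
    A = np.random.default_rng(0).standard_normal((M, N_src))
    for _ in range(n_iter):
        # E-step: posterior moments of the hidden sources given current A
        S_post = np.linalg.inv(np.eye(N_src) + A.T @ A / sigma2)
        X_mean = S_post @ A.T @ Y / sigma2            # <x> for each frame
        Exx = T * S_post + X_mean @ X_mean.T          # sum_t <x x^T>
        # M-step: A = (sum_t <y x^T>) (sum_t <x x^T>)^{-1}
        A = (Y @ X_mean.T) @ np.linalg.inv(Exx)
    return A
```
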
In 2004 and 2008, Varadhan et al. [52,53] used the squared-iteration method in the EM algorithm to accelerate its convergence. In 2008, Lee et al. [54] derived an expectation-maximization algorithm used in the update iterations of the IVA algorithm; this EM algorithm could estimate the parameters of the separation matrix and the unknown sources at the same time, showing good separation performance. In 2010, Hao et al. [55] proposed a unified probabilistic framework for the IVA algorithm with the Gaussian mixture model as the source prior; this flexible source prior enables the IVA algorithm to separate different types of signals. Different EM algorithms are derived, and three models are tested: noiseless IVA, online IVA, and noisy IVA. In noiseless IVA, the EM algorithm can effectively estimate the unmixing matrix in the absence of sensor noise. In online IVA, an online EM algorithm is derived to track the motion of the sources under nonstationary conditions. Noisy IVA includes sensor noise and combines denoising with separation; an EM algorithm suited to this model is proposed which can effectively estimate the model parameters and separate the source signals at the same time.
In 2019, Gu et al. [56] proposed a Gaussian mixture model IVA algorithm with time-varying parameters to accommodate the temporal power fluctuations embedded in nonstationary speech signals, thus avoiding the pretraining process of the original Gaussian mixture model IVA (GMM-IVA) algorithm, and used a correspondingly improved EM algorithm to estimate the separation matrix and signal model. The experimental results confirm the effectiveness of the method under random initialization and its advantages in separation accuracy and convergence speed. In 2019, Rafique et al. [57] proposed a new IVA algorithm based on the Student's t mixture model as the source prior, adapting to the statistical properties of different speech sources. At the same time, an efficient EM algorithm is derived which jointly estimates the parameters of the source prior and the demixing matrix, thereby improving the separation performance of the IVA algorithm. In 2020, Tang et al. [58] proposed a complex generalized Gaussian mixture distribution with weighted variance to capture the non-Gaussian and nonstationary properties of speech signals and flexibly characterize real speech. At the same time, optimization rules based on the EM method are used to estimate and update the mixing parameters.

6. BCD Method

Coordinate descent (CD) is a gradient-free optimization algorithm. The algorithm does not need to compute the gradient of the objective function; it performs a line search along a single dimension at a time and, once a minimum along the current dimension is reached, cycles through the other dimensions repeatedly until it converges to the optimal solution. However, this algorithm is only suitable for smooth functions; on nonsmooth functions it may get stuck at a non-stationary point and fail to converge. In 2015, Wright [59] proposed block coordinate descent (BCD), a generalization of coordinate descent. It decomposes the original problem into multiple subproblems by optimizing a subset of the variables at a time, and the order of updates during the descent can be deterministic or random. This algorithm is mainly used for nonconvex objective functions whose global optimum is difficult to obtain.
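
As a toy illustration of the coordinate-wise idea, the sketch below minimizes a positive-definite quadratic by exact one-dimensional line searches along single coordinates; the quadratic test problem is an assumption chosen only to keep the example self-contained and verifiable.

```python
import numpy as np

def coordinate_descent(Q, b, n_sweeps=100):
    """Minimize f(x) = 0.5 x^T Q x - b^T x by cyclic coordinate descent.

    Each inner step is an exact line search along one coordinate; no
    gradient of the full objective is ever formed. Q must be symmetric
    positive definite for the 1-D minimizers to exist.
    """
    x = np.zeros(len(b))
    for _ in range(n_sweeps):
        for i in range(len(b)):
            # set d f / d x_i = (Q x)_i - b_i to zero, other coordinates fixed
            x[i] += (b[i] - Q[i] @ x) / Q[i, i]
    return x

# usage: converges to np.linalg.solve(Q, b)
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
print(coordinate_descent(Q, b))        # approx. [0.2, 0.4]
```
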
In particular, for the IVA algorithm in BSS, the BCD approach has yielded two methods with closed-form update formulas: the iterative projection (IP) and iterative source steering (ISS) methods [60].

6.1. Iterative Projection

The IVA algorithm based on iterative projection was first introduced in the AuxIVA [42] algorithm.
This update rule is derived by solving a system of quadratic equations obtained by differentiating the cost function with respect to the separation vectors. In 2004, Dégerine et al. [61] also proposed a similar scheme in the context of semi-blind Gaussian source components. In 2016, Kitamura et al. [62] used the IP algorithm in a BSS algorithm combining IVA and NMF, providing good convergence speed and separation results. In 2018, Yatabe et al. [63] proposed an alternative to the AuxIVA-IP algorithm based on proximal splitting. In 2021, Nakashima et al. [64] optimized IP by extending the update from one row vector of the separation matrix at a time to two rows at a time, obtaining the faster IP-2.
In 2020, Ikeshita et al. [65] derived IP-1 and IP-2 and used these two update rules to accelerate the OverIVA algorithm, forming the OverIVA-IP and OverIVA-IP2 update rules. In 2021, Scheibler [66] proposed iterative projection with adjustment (IPA) and a Newton conjugate gradient (NCG) method to solve the hybrid exact-approximate diagonalization (HEAD) problem. IPA adopts a multiplicative update form, i.e., the current separation matrix is multiplied by a rank-2 perturbation of the identity matrix. This method performs a joint update of one demixing filter together with an additional rank-1 update of the rest of the demixing matrix. Simply put, the IPA optimization rule is a combination of the IP and ISS methods: one row and one column of the matrix are updated in each iteration, performing IP- and ISS-style updates simultaneously, and it outperforms both the IP and ISS methods.

6.2. Iterative Source Steering

ISS [67] is an alternative to IP. Although IP has the advantages of good performance and fast convergence, during the iterative updates it requires recomputing and inverting a covariance matrix for each source and each iteration. This greatly increases the overall complexity of the algorithm, which grows cubically with the number of microphones. Besides this, matrix inversion is an inherently risky operation that can cause unstable convergence during the iterations. On this basis, the proposed ISS algorithm can effectively reduce the computational cost and complexity of the IP algorithm. ISS also minimizes the same cost function as the AuxIVA algorithm:
W^{(k)} \leftarrow W^{(k)} - v_n^{(k)} \left( w_n^{(k)} \right)^{H}
This update rule requires no matrix inversion and has been used in a new method for joint dereverberation and BSS [68]. That method is based on the ILRMA framework and exploits the inversion-free, low-complexity advantages of the ISS algorithm to achieve efficient BSS. In 2021, Du et al. [69] proposed a computationally efficient BSS optimization algorithm for overdetermined mixtures, an improved ISS algorithm for the OverIVA algorithm called OverIVA-ISS. The algorithm combines the technique of OverIVA-IP with that of AuxIVA-ISS, is more computationally efficient than the OverIVA-IP algorithm, and guarantees convergence. Moreover, the computational complexity is reduced from O(M²) to O(MN).
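
A sketch of one ISS sweep under the AuxIVA cost with U = 1 is given below, continuing the convention that row n of W is w_n^H, so the rank-1 matrix update above becomes row_n ← row_n − v_n row_s. The coefficient formulas follow the rank-1 least-squares interpretation of that cost and are illustrative; see [67] for the exact derivation.

```python
import numpy as np

def iss_sweep(W, X):
    """One iterative source steering (ISS) sweep (sketch, U = 1 weights).

    For each source index s, the whole complex-valued matrix W[k] gets
    the inversion-free rank-1 update W <- W - v (w_s)^H.
    W : (K, N, N) complex, X : (K, N, T) complex.
    """
    K, N, T = X.shape
    for s in range(N):
        Y = np.einsum('knm,kmt->knt', W, X)
        r = np.sqrt(np.sum(np.abs(Y) ** 2, axis=0)) + 1e-12  # (N, T)
        for k in range(K):
            ys = Y[k, s]
            v = np.empty(N, dtype=complex)
            for n in range(N):
                den = np.mean(np.abs(ys) ** 2 / r[n])
                if n == s:
                    v[n] = 1.0 - 1.0 / np.sqrt(den)
                else:
                    v[n] = np.mean(Y[k, n] * ys.conj() / r[n]) / den
            W[k] -= np.outer(v, W[k, s])             # rank-1, no inversion
    return W
```
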
The overall performance of the ISS algorithm is better than that of IP but not as good as IP-2; therefore, an ISS-2 algorithm was proposed. In 2022, Ikeshita et al. [70] extended the ISS algorithm to ISS-2, which retains the low time complexity of the ISS algorithm while achieving separation performance comparable to IP-2.

7. EVD Method

The EVD (eigenvalue decomposition) method finds the matrix most similar to the original matrix. The EVD-based optimization update rule can be expressed as:
w^{(k)} \leftarrow \frac{ w^{(k)} }{ \left\| w^{(k)} \right\|_2 }
w^{(k)} = \frac{1}{ \sqrt{ \lambda_M^{(k)} } }\, u_M^{(k)}
where λ_M and u_M denote the smallest eigenvalue and the corresponding eigenvector, respectively.
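
In code, this rule amounts to one call to a Hermitian eigensolver. The sketch below assumes a given weighted covariance matrix V of pre-whitened observations; this setting, the scaling by the inverse square root of the eigenvalue, and the function name are illustrative assumptions.

```python
import numpy as np

def evd_update(V):
    """EVD-based extraction-filter update (sketch of the rule above).

    V : Hermitian weighted covariance matrix of (assumed pre-whitened)
    observations; the filter is built from the smallest eigenpair.
    """
    lam, U = np.linalg.eigh(V)          # eigenvalues in ascending order
    lam_M, u_M = lam[0], U[:, 0]        # smallest eigenvalue / eigenvector
    w = u_M / np.sqrt(lam_M)            # w = (1 / sqrt(lambda_M)) u_M
    w /= np.linalg.norm(w)              # renormalization w <- w / ||w||_2
    return w
```
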
An IVA algorithm with an EVD-based update rule was proposed in [11] as the fast independent vector extraction (FIVE) algorithm. Experimental comparisons with the OverIVA and AuxIVA algorithms show that it obtains the optimal solution in only a few iterations and is far superior to the other algorithms in convergence performance. In 2021, Brendel et al. [71] extended the EVD-based update rule to an IVA source-extraction algorithm with an SOI mechanism. This update rule achieves fast convergence at low computational cost and outperforms the IP update rule in performance.

This entry is adapted from the peer-reviewed paper 10.3390/s23010493
