Convolutional Sparse Coding

The convolutional sparse coding paradigm is an extension of the global sparse coding model, in which a redundant dictionary is modeled as a concatenation of circulant matrices. While the global sparsity constraint describes a signal $x \in \mathbb{R}^N$ as a linear combination of a few atoms in the redundant dictionary $D \in \mathbb{R}^{N \times M}$, $M \gg N$, usually expressed as $x = D\Gamma$ for a sparse vector $\Gamma \in \mathbb{R}^M$, the alternative dictionary structure adopted by the Convolutional Sparse Coding model allows the sparsity prior to be applied locally instead of globally: independent patches of $x$ are generated by "local" dictionaries operating over stripes of $\Gamma$. The local sparsity constraint allows stronger uniqueness and stability conditions than the global sparsity prior, and has been shown to be a versatile tool for inverse problems in fields such as image understanding and computer vision. Moreover, a recently proposed multi-layer extension of the model has shown conceptual benefits for more complex signal decompositions, as well as a tight connection to the convolutional neural network model, allowing a deeper understanding of how the latter operates.


1. Overview

Given a signal of interest $x \in \mathbb{R}^N$ and a redundant dictionary $D \in \mathbb{R}^{N \times M}$, $M \gg N$, the sparse coding problem consists of retrieving a sparse vector $\Gamma \in \mathbb{R}^M$, denominated the sparse representation of $x$, such that $x = D\Gamma$. Intuitively, this implies that $x$ is expressed as a linear combination of a small number of elements in $D$. The global sparsity prior has been shown to be useful in many ill-posed inverse problems such as image inpainting, super-resolution, and coding.[1][2][3] It has been of particular interest for image understanding and computer vision tasks involving natural images, allowing redundant dictionaries to be efficiently inferred.[4][5][6]

As an extension to the global sparsity constraint, recent pieces in the literature have revisited the model to reach a more profound understanding of its uniqueness and stability conditions.[6] Interestingly, by imposing a local sparsity prior on $\Gamma$, meaning that its independent patches can be interpreted as sparse vectors themselves, the structure in $D$ can be understood as a "local" dictionary operating over each independent patch. This model extension is denominated Convolutional Sparse Coding (CSC) and drastically reduces the burden of estimating signal representations while being characterized by stronger uniqueness and stability conditions. Furthermore, it allows $\Gamma$ to be efficiently estimated via pursuit algorithms such as Orthogonal Matching Pursuit and Basis Pursuit, operating in a local fashion.[5]

Besides its versatility in inverse problems, recent efforts have focused on the multi-layer version of the model and provided evidence of its reliability for recovering multiple underlying representations.[7] Moreover, a tight connection between such a model and the well-established Convolutional Neural Network (CNN) model was revealed, providing a new tool for a more rigorous understanding of its theoretical conditions.

The convolutional sparse coding model provides a very efficient set of tools to solve a wide range of inverse problems, including image denoising, image inpainting, and image super-resolution. By imposing local sparsity constraints, it allows the global coding problem to be tackled efficiently by iteratively estimating disjoint patches and assembling them into a global signal. Furthermore, by adopting a multi-layer sparse model, which results from imposing the sparsity constraint on the signal's inherent representations themselves, the resulting "layered" pursuit algorithm retains the strong uniqueness and stability conditions of the single-layer model. This extension also provides interesting insight into the relation between its sparsity prior and the forward pass of a Convolutional Neural Network, which helps explain how the theoretical guarantees of the CSC model can endow the CNN structure with a rigorous mathematical meaning.

2. Sparse Coding Paradigm

Basic concepts and models are presented to explain the convolutional sparse representation framework in detail. Since the sparsity constraint has been proposed under different models, a short description of them is presented to show its evolution up to the model of interest. Also included are the concepts of mutual coherence and the Restricted Isometry Property, which establish uniqueness and stability guarantees.

2.1. Global Sparse Coding Model

Let a signal $x \in \mathbb{R}^N$ be expressed as a linear combination of a small number of atoms from a given dictionary $D \in \mathbb{R}^{N \times M}$, $M > N$. Equivalently, the signal can be written as $x = D\Gamma$, where $\Gamma \in \mathbb{R}^M$ corresponds to the sparse representation of $x$, which selects the atoms to combine and their weights. Given $D$, the task of recovering $\Gamma$ from either the noise-free signal itself or a noisy observation is denominated sparse coding. In the noise-free scenario, the coding problem is formulated as:
$$\min_{\Gamma}\ \|\Gamma\|_0 \quad \text{s.t.} \quad x = D\Gamma.$$
The effect of the $\ell_0$ norm is to favor solutions with as many zero elements as possible. Furthermore, given an observation affected by bounded-energy noise, $Y = D\Gamma + E$ with $\|E\|_2 < \varepsilon$, the pursuit problem is reformulated as:
$$\min_{\Gamma}\ \|\Gamma\|_0 \quad \text{s.t.} \quad \|Y - D\Gamma\|_2 \le \varepsilon.$$

2.2. Stability and Uniqueness Guarantees for the Global Sparse Model

Let the spark of $D$ be defined as the smallest number of columns of $D$ that are linearly dependent:
$$\sigma(D) = \min_{\Gamma}\ \|\Gamma\|_0 \quad \text{s.t.} \quad D\Gamma = 0,\ \Gamma \ne 0.$$

Then, from the triangle inequality, any representation satisfying $\|\Gamma\|_0 < \frac{\sigma(D)}{2}$ is guaranteed to be the sparsest one. Although the spark provides such a uniqueness guarantee, it is infeasible to compute in practical scenarios. Instead, let the mutual coherence be a measure of similarity between atoms in $D$. Assuming $\ell_2$-norm unit atoms, the mutual coherence of $D$ is defined as $\mu(D) = \max_{i \ne j} |d_i^T d_j|$, where $d_i$ are the atoms of $D$. Based on this metric, it can be proven that the true sparse representation $\Gamma$ is guaranteed to be recovered if $\|\Gamma\|_0 < \frac{1}{2}\left(1 + \frac{1}{\mu(D)}\right)$.

Similarly, under the presence of noise, an upper bound on the distance between the true sparse representation $\Gamma$ and its estimate $\hat\Gamma$ can be established via the Restricted Isometry Property (RIP). A matrix $D$ satisfies the $k$-RIP with constant $\delta_k$ if
$$(1 - \delta_k)\|\Gamma\|_2^2 \le \|D\Gamma\|_2^2 \le (1 + \delta_k)\|\Gamma\|_2^2,$$
where $\delta_k$ is the smallest number satisfying the inequality for every $\Gamma$ with $\|\Gamma\|_0 = k$. Then, assuming $\|\Gamma\|_0 < \frac{1}{2}\left(1 + \frac{1}{\mu(D)}\right)$, it is guaranteed that
$$\|\hat\Gamma - \Gamma\|_2^2 \le \frac{4\varepsilon^2}{1 - \mu(D)\left(2\|\Gamma\|_0 - 1\right)}.$$
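For illustration, the following minimal NumPy sketch computes the mutual coherence of a dictionary with unit-normalized atoms and the sparsity bound implied by it; the random dictionary and its dimensions are purely hypothetical.

```python
import numpy as np

def mutual_coherence(D):
    """Mutual coherence of a dictionary, after normalizing atoms to unit l2 norm."""
    Dn = D / np.linalg.norm(D, axis=0, keepdims=True)
    G = np.abs(Dn.T @ Dn)        # absolute inner products between all atom pairs
    np.fill_diagonal(G, 0.0)     # discard the trivial d_i^T d_i terms
    return G.max()

# Illustrative random dictionary (N = 64 samples, M = 128 atoms).
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))

mu = mutual_coherence(D)
# Any solution with fewer nonzeros than this bound is guaranteed to be the sparsest one.
bound = 0.5 * (1.0 + 1.0 / mu)
print(f"mu(D) = {mu:.3f}, sparsity bound = {bound:.2f}")
```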

Solving such a general pursuit problem is hard if no structure is imposed on the dictionary $D$: it implies learning large, highly overcomplete representations, which is extremely expensive. Assuming such a burden has been met and a representative dictionary has been obtained for a given signal $x$, typically based on prior information, $\Gamma$ can be estimated via several pursuit algorithms.

Pursuit algorithms for the global sparse model

Two basic methods for solving the global sparse coding problem are Orthogonal Matching Pursuit (OMP) and Basis Pursuit (BP). OMP is a greedy algorithm that iteratively selects the atom best correlated with the residual between $x$ and the current estimate, followed by a projection onto the subset of pre-selected atoms. BP, on the other hand, is a more sophisticated approach that replaces the original coding problem with a linear programming problem. Based on these algorithms, the global sparse coding model provides considerably loose bounds for the uniqueness and stability of $\hat\Gamma$. To overcome this, additional priors are imposed on $D$ to guarantee tighter bounds and uniqueness conditions. The reader is referred to ([5], Section 2) for details regarding these properties.
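As a concrete illustration of the greedy strategy just described, the following is a minimal textbook-style OMP sketch in NumPy; it is not any specific library's implementation, and the dictionary, signal, and sparsity level are assumed inputs.

```python
import numpy as np

def omp(D, x, k):
    """Greedy Orthogonal Matching Pursuit: recover a k-sparse code of x over D."""
    residual = x.copy()
    support = []
    gamma = np.zeros(D.shape[1])
    for _ in range(k):
        # Select the atom most correlated with the current residual.
        correlations = np.abs(D.T @ residual)
        correlations[support] = 0.0                      # never re-select an atom
        support.append(int(np.argmax(correlations)))
        # Project x onto the span of the selected atoms (least-squares fit).
        coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        gamma[:] = 0.0
        gamma[support] = coeffs
        residual = x - D @ gamma
    return gamma

# Illustrative use: recover a synthetic 5-sparse code.
rng = np.random.default_rng(1)
D = rng.standard_normal((64, 128))
true_gamma = np.zeros(128)
true_gamma[rng.choice(128, size=5, replace=False)] = rng.standard_normal(5)
gamma_hat = omp(D, D @ true_gamma, k=5)
```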

2.3. Convolutional Sparse Coding Model

A local prior is adopted such that each overlapping section (stripe) of $\Gamma$ is sparse. Let the global dictionary $D \in \mathbb{R}^{N \times Nm}$ be constructed from shifted versions of a local dictionary $D_L \in \mathbb{R}^{n \times m}$, $m \ll M$. Then, $x$ is formed by products between $D_L$ and local patches of $\Gamma \in \mathbb{R}^{mN}$.

Figure: The global dictionary is expressed as a stride convolutional matrix, so signals can be generated in terms of stripes of the sparse representation multiplied by a shift-invariant local dictionary.

From the latter, $\Gamma$ can be re-expressed in terms of $N$ disjoint sparse vectors $\alpha_i \in \mathbb{R}^m$: $\Gamma = [\alpha_1^T, \alpha_2^T, \ldots, \alpha_N^T]^T$. Similarly, let $\gamma_i$ be the set of $(2n-1)$ consecutive vectors $\alpha_j$ centered at index $i$. Then, each segment of $x$ is expressed as $x_i = R_i D \Gamma$, where the operator $R_i \in \mathbb{R}^{n \times N}$ extracts overlapping patches of size $n$ starting at index $i$. Thus, $R_i D$ contains only $(2n-1)m$ nonzero columns. Hence, by introducing the operator $S_i \in \mathbb{R}^{(2n-1)m \times Nm}$, which exclusively preserves those columns:
$$x_i = R_i D \Gamma = R_i D S_i^T S_i \Gamma = \Omega\, \gamma_i,$$
where $\Omega = R_i D S_i^T$ is known as the stripe dictionary, which is independent of $i$, and $\gamma_i = S_i \Gamma$ is denominated the $i$-th stripe. Equivalently, $x$ admits a patch-aggregation or convolutional interpretation:
$$x = \sum_{i=1}^{m} d_i * z_i,$$
where $d_i$ corresponds to the $i$-th atom of the local dictionary $D_L$ and the coefficient map $z_i$ gathers the $i$-th entry of each local code $\alpha_j$. Given the new dictionary structure, let the $\ell_{0,\infty}$ pseudo-norm be defined as:
$$\|\Gamma\|_{0,\infty} \triangleq \max_i \|\gamma_i\|_0.$$
Then, for the noise-free and noise-corrupted scenarios, the problem can be respectively reformulated as:
$$\min_{\Gamma}\ \|\Gamma\|_{0,\infty} \quad \text{s.t.} \quad x = D\Gamma,$$
$$\min_{\Gamma}\ \|\Gamma\|_{0,\infty} \quad \text{s.t.} \quad \|Y - D\Gamma\|_2 \le \varepsilon.$$
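To make the convolutional structure concrete, the sketch below builds a global dictionary from shifted copies of a local dictionary and evaluates the $\ell_{0,\infty}$ pseudo-norm stripe by stripe. Circular boundary handling and the dense matrix construction are simplifying assumptions made here for illustration only.

```python
import numpy as np

def convolutional_dictionary(D_local, N):
    """Global dictionary D (N x N*m) from circularly shifted copies of D_L (n x m).
    Circular boundaries are assumed here purely for simplicity."""
    n, m = D_local.shape
    D = np.zeros((N, N * m))
    for i in range(N):                       # i-th spatial shift
        idx = (np.arange(n) + i) % N         # circular placement of the n-sample atoms
        for j in range(m):                   # j-th local atom
            D[idx, i * m + j] = D_local[:, j]
    return D

def l0_inf(gamma, n, m):
    """l_{0,inf} pseudo-norm: max number of nonzeros over all stripes of length (2n-1)*m."""
    N = gamma.size // m
    counts = []
    for i in range(N):
        # Stripe i gathers the (2n-1) consecutive local codes centered at i (circularly).
        idx = [((i + k) % N) * m + j for k in range(-(n - 1), n) for j in range(m)]
        counts.append(np.count_nonzero(gamma[idx]))
    return max(counts)
```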

Stability and uniqueness guarantees for the convolutional sparse model

For the local approach, the mutual coherence of $D$ satisfies $\mu(D) \ge \left(\frac{m-1}{m(2n-1)-1}\right)^{1/2}$. Consequently, if a solution obeys $\|\Gamma\|_{0,\infty} < \frac{1}{2}\left(1 + \frac{1}{\mu(D)}\right)$, then it is the sparsest solution to the $\ell_{0,\infty}$ problem. Thus, under the local formulation, the same number of non-zeros is permitted per stripe rather than for the full vector, a much weaker requirement.

Similar to the global model, the CSC problem is solved via the OMP and BP methods, the latter also contemplating the Iterative Shrinkage Thresholding Algorithm (ISTA)[8] for splitting the pursuit into smaller problems. Based on the $\ell_{0,\infty}$ pseudo-norm, if a solution $\Gamma$ exists satisfying $\|\Gamma\|_{0,\infty} < \frac{1}{2}\left(1 + \frac{1}{\mu(D)}\right)$, then both methods are guaranteed to recover it. Moreover, the local model guarantees recovery independently of the signal dimension, as opposed to the $\ell_0$ prior. Stability for OMP and BP is also guaranteed if the Exact Recovery Condition (ERC) is met for a support $T$ with constant $\theta$. The ERC is defined as $\theta = 1 - \max_{i \notin T} \|D_T^{\dagger} d_i\|_1 > 0$, where $\dagger$ denotes the pseudo-inverse. Algorithm 1 shows the global pursuit method based on ISTA.

Algorithm 1: 1D CSC via local iterative soft-thresholding.

Input:

$D_L$: local dictionary,

y: observation,

λ: Regularization parameter,

c: step size for ISTA,

tol: tolerance factor,

maxiters: maximum number of iterations.

$\{\alpha_i\}^{(0)} \leftarrow \{0\}$ (Initialize disjoint patches.)
$\{r_i\}^{(0)} \leftarrow \{R_i y\}$ (Initialize residual patches.)
$k \leftarrow 0$

Repeat

$\{\alpha_i\}^{(k+1)} \leftarrow S_{\lambda c}\!\left(\{\alpha_i\}^{(k)} + c\, D_L^T \{r_i\}^{(k)}\right)$ (Coding along disjoint patches)
$\hat{x}^{(k+1)} \leftarrow \sum_i R_i^T D_L\, \alpha_i^{(k+1)}$ (Patch aggregation)
$\{r_i\}^{(k+1)} \leftarrow \{R_i\,(y - \hat{x}^{(k+1)})\}$ (Update residuals)
$k \leftarrow k+1$

Until $\|\hat{x}^{(k)} - \hat{x}^{(k-1)}\|_2 < \text{tol}$ or $k > \text{maxiters}$.
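A rough Python sketch of Algorithm 1 follows, assuming the $R_i$ operators extract the $N$ non-overlapping patches of length $n$ from $y$ and that the ISTA step uses the soft threshold $\lambda c$; the parameter names mirror the pseudocode above, but the implementation details are illustrative rather than definitive.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding (shrinkage) operator."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def local_ista_csc(D_L, y, lam, c, tol=1e-6, maxiters=500):
    """Sketch of Algorithm 1: 1D CSC via local iterative soft-thresholding.
    Assumes y splits exactly into N non-overlapping patches of length n (the R_i operators)."""
    n, m = D_L.shape
    N = y.size // n
    alphas = np.zeros((N, m))            # disjoint local codes alpha_i
    residuals = y.reshape(N, n).copy()   # R_i y
    x_prev = np.zeros_like(y)
    for k in range(maxiters):
        # Coding step on each disjoint patch (one ISTA iteration per patch).
        alphas = soft(alphas + c * residuals @ D_L, lam * c)
        # Patch aggregation into the global estimate.
        x_hat = (alphas @ D_L.T).reshape(-1)
        # Residual update per patch.
        residuals = (y - x_hat).reshape(N, n)
        if np.linalg.norm(x_hat - x_prev) < tol:
            break
        x_prev = x_hat
    return alphas, x_hat
```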

2.4. Multi-Layered Convolutional Sparse Coding Model

Imposing the sparsity prior on the inherent structure of $x$ grants strong conditions for a unique representation and feasible methods for estimating it. Similarly, such a constraint can be applied to the representation itself, generating a cascade of sparse representations: each code is defined by a few atoms of a given set of convolutional dictionaries.

Based on this criterion, yet another extension, denominated Multi-Layer Convolutional Sparse Coding (ML-CSC), is proposed. A set of analytical dictionaries $\{D_i\}_{i=1}^K$ can be efficiently designed, for which sparse representations at each layer $\{\Gamma_i\}_{i=1}^K$ are guaranteed by imposing the sparsity prior on the dictionaries themselves.[7] In other words, by considering the dictionaries to be stride convolutional matrices, i.e. atoms of the local dictionaries shift by $m$ elements instead of a single one, where $m$ corresponds to the number of channels in the previous layer, it is guaranteed that the $\ell_{0,\infty}$ norm of the representations is bounded across layers.

For example, given the dictionaries $D_1 \in \mathbb{R}^{N \times Nm_1}$ and $D_2 \in \mathbb{R}^{Nm_1 \times Nm_2}$, the signal is modeled as $x = D_1\Gamma_1 = D_1(D_2\Gamma_2)$, where $\Gamma_1$ is its sparse code and $\Gamma_2$ is the sparse code of $\Gamma_1$. Then, the estimation of each representation is formulated as an optimization problem for the noise-free and noise-corrupted scenarios, respectively. Setting $\Gamma_0 = x$:
$$\text{(DCP)}: \quad \text{find } \{\Gamma_i\}_{i=1}^K \quad \text{s.t.} \quad \Gamma_{i-1} = D_i \Gamma_i,\ \ \|\Gamma_i\|_{0,\infty} \le \lambda_i, \quad i = 1, \ldots, K,$$
$$\text{(DCP}^{\varepsilon}\text{)}: \quad \text{find } \{\Gamma_i\}_{i=1}^K \quad \text{s.t.} \quad \|\Gamma_{i-1} - D_i \Gamma_i\|_2 \le \varepsilon_i,\ \ \|\Gamma_i\|_{0,\infty} \le \lambda_i, \quad i = 1, \ldots, K.$$
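The cascade structure can be illustrated with a small numerical sketch: a deepest-layer sparse code $\Gamma_2$ is drawn at random, propagated through two hypothetical dictionaries, and the equality $x = D_1\Gamma_1 = (D_1 D_2)\Gamma_2$ is verified. In the actual ML-CSC model the dictionaries are sparse convolutional matrices, so the intermediate $\Gamma_1$ is itself sparse; the dense random matrices below only demonstrate the composition.

```python
import numpy as np

rng = np.random.default_rng(2)
N, m1, m2 = 64, 2, 2                      # illustrative sizes, not from the article

# Hypothetical dense random dictionaries standing in for the convolutional ones.
D1 = rng.standard_normal((N, N * m1))
D2 = rng.standard_normal((N * m1, N * m2))

# Sparse code of the deepest layer.
Gamma2 = np.zeros(N * m2)
Gamma2[rng.choice(N * m2, size=5, replace=False)] = rng.standard_normal(5)

Gamma1 = D2 @ Gamma2                      # intermediate representation (sparse in the true model)
x = D1 @ Gamma1                           # equivalently x = (D1 @ D2) @ Gamma2
assert np.allclose(x, D1 @ D2 @ Gamma2)
```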

In what follows, theoretical guarantees for the uniqueness and stability of this extended model are described.

Theorem 1 (Uniqueness of sparse representations): Suppose a signal $x$ satisfies the ML-CSC model for a set of convolutional dictionaries $\{D_i\}_{i=1}^K$ with mutual coherences $\{\mu(D_i)\}_{i=1}^K$. If the true sparse representations satisfy $\|\Gamma_i\|_{0,\infty} < \frac{1}{2}\left(1 + \frac{1}{\mu(D_i)}\right)$ for every layer $i$, then $\{\Gamma_i\}_{i=1}^K$ is the unique solution $\{\hat\Gamma_i\}_{i=1}^K$ to the problem, provided the thresholds are chosen to satisfy $\lambda_i < \frac{1}{2}\left(1 + \frac{1}{\mu(D_i)}\right)$.

Theorem 2 (Global stability in the noise-corrupted scenario): Suppose a signal $x$ satisfying the ML-CSC model for a set of convolutional dictionaries $\{D_i\}_{i=1}^K$ is contaminated with noise $E$, where $\|E\|_2 \le \varepsilon_0$, resulting in $Y = x + E$. If $\lambda_i < \frac{1}{2}\left(1 + \frac{1}{\mu(D_i)}\right)$ and $\varepsilon_i^2 = \frac{4\varepsilon_{i-1}^2}{1 - \left(2\|\Gamma_i\|_{0,\infty} - 1\right)\mu(D_i)}$, then the estimated representations $\{\hat\Gamma_i\}_{i=1}^K$ satisfy $\|\Gamma_i - \hat\Gamma_i\|_2^2 \le \varepsilon_i^2$.

2.5. Projection-Based Algorithms

A simple approach for solving the ML-CSC problem, via either the $\ell_0$ or the $\ell_1$ norm, is to compute inner products between $x$ and the dictionary atoms in order to identify the most representative ones. Such a projection is described as:
$$\hat\Gamma = \arg\min_{\Gamma}\ \frac{1}{2}\|\Gamma - D^T x\|_2^2 + \beta\|\Gamma\|_0, \qquad \hat\Gamma = \arg\min_{\Gamma}\ \frac{1}{2}\|\Gamma - D^T x\|_2^2 + \beta\|\Gamma\|_1,$$

which have closed-form solutions via the hard-thresholding operator $H_\beta(D^T x)$ and the soft-thresholding operator $S_\beta(D^T x)$, respectively. If a nonnegativity constraint is also contemplated, the problem can be expressed via the $\ell_1$ norm as:
$$\hat\Gamma = \arg\min_{\Gamma \ge 0}\ \frac{1}{2}\|\Gamma - D^T x\|_2^2 + \beta\|\Gamma\|_1,$$
whose closed-form solution corresponds to the nonnegative soft-thresholding operator $S^+_\beta(D^T x)$, where $S^+_\beta(v) \triangleq \max(v - \beta, 0)$. Guarantees for the layered soft-thresholding approach are given in Theorem 3 below.
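The three thresholding operators admit one-line implementations; the sketch below applies them to the projection $D^T x$ with illustrative shapes and a hypothetical threshold value.

```python
import numpy as np

def hard_threshold(v, beta):
    """H_beta: keep entries with magnitude above beta, zero out the rest."""
    return np.where(np.abs(v) > beta, v, 0.0)

def soft_threshold(v, beta):
    """S_beta: shrink magnitudes by beta and zero out what falls below it."""
    return np.sign(v) * np.maximum(np.abs(v) - beta, 0.0)

def soft_threshold_nonneg(v, beta):
    """S^+_beta: nonnegative soft thresholding, max(v - beta, 0)."""
    return np.maximum(v - beta, 0.0)

# One-layer projection of a signal x onto a dictionary D (illustrative shapes).
rng = np.random.default_rng(3)
D = rng.standard_normal((32, 64))
x = rng.standard_normal(32)
gamma_hat = soft_threshold_nonneg(D.T @ x, beta=0.5)
```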

Theorem 3 (Stable recovery of the multi-layered soft-thresholding algorithm): Suppose a signal $x$ satisfying the ML-CSC model for a set of convolutional dictionaries $\{D_i\}_{i=1}^K$ with mutual coherences $\{\mu(D_i)\}_{i=1}^K$ is contaminated with noise $E$, where $\|E\|_2 \le \varepsilon_0$, resulting in $Y = x + E$. Denote by $|\Gamma_i^{\min}|$ and $|\Gamma_i^{\max}|$ the lowest and highest entries in absolute value of $\Gamma_i$, and let $\{\hat\Gamma_i\}_{i=1}^K$ be the estimated sparse representations obtained with thresholds $\{\beta_i\}_{i=1}^K$. If
$$\|\Gamma_i\|_{0,\infty} < \frac{1}{2}\left(1 + \frac{1}{\mu(D_i)}\frac{|\Gamma_i^{\min}|}{|\Gamma_i^{\max}|}\right) - \frac{1}{\mu(D_i)}\frac{\varepsilon_{i-1}}{|\Gamma_i^{\max}|}$$
and $\beta_i$ is chosen within the admissible interval specified in,[7] then $\hat\Gamma_i$ has the same support as $\Gamma_i$ and $\|\Gamma_i - \hat\Gamma_i\|_{2,\infty} \le \varepsilon_i$, for $\varepsilon_i = \sqrt{\|\Gamma_i\|_{0,\infty}}\left(\varepsilon_{i-1} + \mu(D_i)\left(\|\Gamma_i\|_{0,\infty} - 1\right)|\Gamma_i^{\max}| + \beta_i\right)$.

2.6. Connections to Convolutional Neural Networks

Recall the forward pass of the Convolutional Neural Network (CNN) model, used in both training and inference. Let $x$ be its input and $W_k$ the filter (weight) matrices at layer $k$, each followed by a bias $b_k$ and the Rectified Linear Unit $\text{ReLU}(v) = \max(0, v)$. Based on this elementary block, and taking $K = 2$ as an example, the CNN output can be expressed as:
$$f(x) = \text{ReLU}\!\big(b_2 + W_2^T\, \text{ReLU}(b_1 + W_1^T x)\big).$$
Finally, comparing the CNN forward pass with the layered thresholding approach under the nonnegativity constraint, it is straightforward to show that both are equivalent:
$$\hat\Gamma_2 = S^+_{\beta_2}\!\big(D_2^T\, S^+_{\beta_1}(D_1^T x)\big),$$
with the identifications $W_i = D_i$ and $b_i = -\beta_i$.

Figure: Convolutional layers of the forward-pass algorithm.
Figure: Contrast between the rectified linear unit and the nonnegative soft-thresholding pointwise nonlinearities.
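This equivalence can be checked numerically: since $S^+_\beta(v) = \max(v - \beta, 0) = \text{ReLU}(v - \beta)$, a forward pass with weights $W_i = D_i$ and biases $b_i = -\beta_i$ reproduces the layered nonnegative soft-thresholding estimate. The dictionaries and thresholds in the sketch below are random placeholders, not values from the article.

```python
import numpy as np

def relu(v):
    return np.maximum(v, 0.0)

def soft_threshold_nonneg(v, beta):
    return np.maximum(v - beta, 0.0)

rng = np.random.default_rng(4)
x = rng.standard_normal(64)
D1 = rng.standard_normal((64, 128))
D2 = rng.standard_normal((128, 256))
beta1, beta2 = 0.3, 0.2

# Layered nonnegative soft thresholding (ML-CSC view).
g1 = soft_threshold_nonneg(D1.T @ x, beta1)
g2 = soft_threshold_nonneg(D2.T @ g1, beta2)

# CNN forward pass with W_i = D_i and b_i = -beta_i.
a1 = relu(D1.T @ x - beta1)
a2 = relu(D2.T @ a1 - beta2)

assert np.allclose(g2, a2)   # the two computations coincide
```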

As explained in what follows, this naive approach to solving the coding problem is a particular case of a more stable projected gradient descent algorithm for the ML-CSC model. Equipped with the stability conditions of both approaches, one obtains a clearer understanding of the class of signals a CNN can recover, of the noise conditions under which an estimate can be accurately attained, and of how the network structure can be modified to improve its theoretical guarantees. The reader is referred to ([7], Section 5) for details regarding this connection.

2.7. Pursuit Algorithms for the Multi-Layer CSC Model

A crucial limitation of the forward pass is that it is unable to recover the unique solution of the DCP problem, whose existence has been demonstrated. Thus, instead of using a thresholding step at each layer, a full pursuit is adopted, denominated Layered Basis Pursuit (LBP). Considering the relaxation onto the $\ell_1$ ball, the following problem is proposed:
$$\hat\Gamma_i = \arg\min_{\Gamma_i}\ \frac{1}{2}\left\|\hat\Gamma_{i-1} - D_i \Gamma_i\right\|_2^2 + \xi_i \|\Gamma_i\|_1, \qquad i = 1, \ldots, K, \quad \hat\Gamma_0 = Y,$$
where each layer is solved as an independent CSC problem and $\xi_i$ is proportional to the noise level at that layer. Among the methods for solving the layered coding problem, ISTA is an efficient decoupling alternative. In what follows, a short summary of the guarantees for the LBP is established.
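A hedged sketch of the layered pursuit follows: each layer's $\ell_1$ problem is solved with plain ISTA (a simple choice made here; FISTA or ADMM would serve equally well), feeding the previous layer's estimate forward as the next layer's signal. Function names and iteration counts are illustrative.

```python
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_lasso(D, y, xi, iters=200):
    """ISTA for min_G 0.5*||y - D G||_2^2 + xi*||G||_1."""
    L = np.linalg.norm(D, 2) ** 2             # Lipschitz constant of the gradient
    G = np.zeros(D.shape[1])
    for _ in range(iters):
        G = soft(G + (D.T @ (y - D @ G)) / L, xi / L)
    return G

def layered_basis_pursuit(dictionaries, y, xis, iters=200):
    """Layered BP: estimate Gamma_1, ..., Gamma_K one layer at a time."""
    estimate = y
    codes = []
    for D_i, xi_i in zip(dictionaries, xis):
        estimate = ista_lasso(D_i, estimate, xi_i, iters)   # Gamma_i from Gamma_{i-1}
        codes.append(estimate)
    return codes
```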

Theorem 4 (Recovery guarantee): Consider a signal $x$ characterized by a set of sparse vectors $\{\Gamma_i\}_{i=1}^K$, convolutional dictionaries $\{D_i\}_{i=1}^K$ and their corresponding mutual coherences $\{\mu(D_i)\}_{i=1}^K$. If $\|\Gamma_i\|_{0,\infty} < \frac{1}{2}\left(1 + \frac{1}{\mu(D_i)}\right)$, then the LBP algorithm is guaranteed to recover the sparse representations.

Theorem 5 (Stability in the presence of noise): Consider the contaminated signal $Y = x + E$, where $\|E\|_{2,\infty} \le \varepsilon_0$ and $x$ is characterized by a set of sparse vectors $\{\Gamma_i\}_{i=1}^K$ and convolutional dictionaries $\{D_i\}_{i=1}^K$. Let $\{\hat\Gamma_i\}_{i=1}^K$ be the solutions obtained via the LBP algorithm with parameters $\{\xi_i\}_{i=1}^K$. If $\|\Gamma_i\|_{0,\infty} < \frac{1}{3}\left(1 + \frac{1}{\mu(D_i)}\right)$ and $\xi_i = 4\varepsilon_{i-1}$, then: (i) the support of $\hat\Gamma_i$ is contained in that of $\Gamma_i$, (ii) $\|\Gamma_i - \hat\Gamma_i\|_{2,\infty} \le \varepsilon_i$, with $\varepsilon_i$ growing with the layer depth as detailed in,[7] and (iii) any entry of $\Gamma_i$ greater in absolute value than $\varepsilon_i / \sqrt{\|\Gamma_i\|_{0,\infty}}$ is guaranteed to be recovered.

3. Applications of the Convolutional Sparse Coding Model: Image Inpainting

As a practical example, an efficient inpainting method for color images via the CSC model is presented.[6] Consider a three-channel dictionary $D \in \mathbb{R}^{N \times M \times 3}$, where $d_{c,m}$ denotes the $m$-th atom at channel $c$, representing the signal $x$ by a cross-channel sparse representation with coefficient maps $z_{c,m}$ coupled across channels. Given an observation $y = \{y_r, y_g, y_b\}$ in which randomly chosen channels at unknown pixel locations are set to zero, in a manner similar to impulse noise, the problem is formulated as:
$$\arg\min_{\{z_{c,m}\}}\ \frac{1}{2}\sum_c \Big\|\sum_m d_{c,m} * z_{c,m} - y_c\Big\|_2^2 + \lambda \sum_c\sum_m \|z_{c,m}\|_1 + \mu\, \big\|\{z_{c,m}\}\big\|_{1,2}.$$
By means of ADMM,[9] the cost function is decoupled into simpler sub-problems, allowing an efficient estimation of the representation. Algorithm 2 describes the procedure, where $\hat D_{c,m}$ is the DFT representation of $D_{c,m}$, the convolutional matrix corresponding to the term $d_{c,m} * z_{c,m}$. Likewise, $\hat x_m$ and $\hat z_m$ correspond to the DFT representations of $x_m$ and $z_m$, respectively, $S_\beta(\cdot)$ is the soft-thresholding function with argument $\beta$, and the $\ell_{1,2}$ norm is defined as the $\ell_2$ norm along the channel dimension $c$ followed by the $\ell_1$ norm along the spatial dimension $m$. The reader is referred to ([6], Section II) for details on the ADMM implementation and the dictionary learning procedure.

Algorithm 2: Color image inpainting via the convolutional sparse coding model.

Input:

$\hat D_{c,m}$: DFT of the convolutional matrices $D_{c,m}$,

$y = \{y_r, y_g, y_b\}$: color observation,

λ: Regularization parameter,

{μ,ρ}: step sizes for ADMM,

tol: tolerance factor,

maxiters: maximum number of iterations.

$k \leftarrow 0$

Repeat

$\{\hat z_m\}^{(k+1)} \leftarrow \arg\min_{\{\hat z_m\}} \frac{1}{2}\sum_c \Big\|\sum_m \hat D_{c,m}\hat z_m - \hat y_c\Big\|_2^2 + \frac{\rho}{2}\sum_m \Big\|\hat z_m - \big(\hat y_m^{(k)} + \hat u_m^{(k)}\big)\Big\|_2^2$ (DFT-domain update of the coefficient maps)
$\{y_{c,m}\}^{(k+1)} \leftarrow \arg\min_{\{y_{c,m}\}} \lambda\sum_c\sum_m \|y_{c,m}\|_1 + \mu\,\big\|\{y_{c,m}\}^{(k+1)}\big\|_{2,1} + \frac{\rho}{2}\sum_m \big\|z_m^{(k+1)} - (y_m + u_m^{(k)})\big\|_2^2$ (auxiliary-variable update)
$y_m^{(k+1)} = S_{\lambda/\rho}\big(z_m^{(k+1)} + u_m^{(k)}\big)$.
$u_m^{(k+1)} \leftarrow u_m^{(k)} + z_m^{(k+1)} - y_m^{(k+1)}$ (dual-variable update)
$k \leftarrow k+1$

Until $\|\{z_m\}^{(k+1)} - \{z_m\}^{(k)}\|_2 < \text{tol}$ or $k > \text{maxiters}$.
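The cross-channel coupling in the formulation above relies on group shrinkage over the channel dimension. The following sketch shows the proximal operator of the $\ell_{2,1}$ penalty (an $\ell_2$ shrinkage of each spatial coefficient jointly over its channels), with purely illustrative array shapes; it is a generic building block, not the exact update used in.[6]

```python
import numpy as np

def group_soft_threshold(V, t):
    """Prox of t*||V||_{2,1}: joint l2 shrinkage of each spatial coefficient across channels.
    V has shape (channels, coefficients); each column is shrunk toward zero as a group."""
    norms = np.linalg.norm(V, axis=0, keepdims=True)
    scale = np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)
    return V * scale

# Illustrative use: joint shrinkage of three color channels of a coefficient map.
rng = np.random.default_rng(5)
V = rng.standard_normal((3, 10))
V_shrunk = group_soft_threshold(V, t=0.8)
```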

References

  1. Jianchao Yang; Wright, John; Huang, Thomas S.; Yi Ma (November 2010). "Image Super-Resolution Via Sparse Representation". IEEE Transactions on Image Processing 19 (11): 2861–2873. doi:10.1109/TIP.2010.2050625. PMID 20483687. Bibcode: 2010ITIP...19.2861Y.
  2. Wetzstein, Gordon; Heidrich, Wolfgang; Heide, Felix (2015). "Fast and Flexible Convolutional Sparse Coding". Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR): 5135–5143. https://www.cv-foundation.org/openaccess/content_cvpr_2015/html/Heide_Fast_and_Flexible_2015_CVPR_paper.html
  3. Wohlberg, Brendt (2017). "SPORCO: A Python package for standard and convolutional sparse representations". Proceedings of the 16th Python in Science Conference: 1–8. doi:10.25080/shinma-7f4c6e7-001. http://conference.scipy.org/proceedings/scipy2017/brendt_wohlberg.html
  4. Mairal, Julien; Bach, Francis; Ponce, Jean; Sapiro, Guillermo (2009). "Online Dictionary Learning for Sparse Coding". Proceedings of the 26th Annual International Conference on Machine Learning (ACM): 689–696. doi:10.1145/1553374.1553463. ISBN 9781605585161. https://dl.acm.org/citation.cfm?id=1553463
  5. Papyan, Vardan; Sulam, Jeremias; Elad, Michael (1 November 2017). "Working Locally Thinking Globally: Theoretical Guarantees for Convolutional Sparse Coding". IEEE Transactions on Signal Processing 65 (21): 5687–5701. doi:10.1109/TSP.2017.2733447. Bibcode: 2017ITSP...65.5687P.
  6. Wohlberg, Brendt (6–8 March 2016). "Convolutional sparse representation of color images". 2016 IEEE Southwest Symposium on Image Analysis and Interpretation (SSIAI): 57–60. doi:10.1109/SSIAI.2016.7459174. ISBN 978-1-4673-9919-7.
  7. Papyan, Vardan; Romano, Yaniv; Elad, Michael (2017). "Convolutional Neural Networks Analyzed via Convolutional Sparse Coding". Journal of Machine Learning Research 18 (1): 2887–2938. ISSN 1532-4435. http://dl.acm.org/citation.cfm?id=3122009.3176827
  8. Beck, Amir; Teboulle, Marc (January 2009). "A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems". SIAM Journal on Imaging Sciences 2 (1): 183–202. doi:10.1137/080716542.
  9. Boyd, Stephen (2010). "Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers". Foundations and Trends in Machine Learning 3 (1): 1–122. doi:10.1561/2200000016. ISSN 1935-8237.