Gramian Matrix

In linear algebra, the Gram matrix (or Gramian matrix, Gramian) of a set of vectors [math]\displaystyle{ v_1,\dots, v_n }[/math] in an inner product space is the Hermitian matrix of inner products, whose entries are given by [math]\displaystyle{ G_{ij}=\langle v_i, v_j \rangle }[/math]. An important application is to determine linear independence: a set of vectors is linearly independent if and only if the Gram determinant (the determinant of the Gram matrix) is non-zero. It is named after Jørgen Pedersen Gram.

1. Examples

For finite-dimensional real vectors in [math]\displaystyle{ \mathbb{R}^n }[/math] with the usual Euclidean dot product, the Gram matrix is simply [math]\displaystyle{ G = V^\mathrm{T} V }[/math], where [math]\displaystyle{ V }[/math] is a matrix whose columns are the vectors [math]\displaystyle{ v_k }[/math]. For complex vectors in [math]\displaystyle{ \mathbb{C}^n }[/math], [math]\displaystyle{ G = V^H V }[/math], where [math]\displaystyle{ V^H }[/math] is the conjugate transpose of [math]\displaystyle{ V }[/math].
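
Both cases are one-liners in NumPy. A minimal sketch (the matrices below are illustrative):

    import numpy as np

    # Columns of V are the vectors v_1, ..., v_n.
    V = np.array([[1.0, 0.0],
                  [2.0, 1.0],
                  [0.0, 3.0]])              # two real vectors in R^3
    G_real = V.T @ V                        # G_ij = <v_i, v_j>

    W = np.array([[1.0 + 1.0j, 2.0j],
                  [0.0,        1.0 - 1.0j]])
    G_complex = W.conj().T @ W              # conjugate transpose in the complex case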

Given square-integrable functions [math]\displaystyle{ \{\ell_i(\cdot),\,i=1,\dots,n\} }[/math] on the interval [math]\displaystyle{ [t_0,t_f] }[/math], the Gram matrix [math]\displaystyle{ G=[G_{ij}] }[/math] is:

[math]\displaystyle{ G_{ij}=\int_{t_0}^{t_f} \ell_i(\tau)\bar{\ell_j}(\tau)\, d\tau. }[/math]
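
Numerically, each entry can be approximated by quadrature. A minimal sketch using scipy.integrate.quad (the interval and the real-valued example functions are illustrative):

    import numpy as np
    from scipy.integrate import quad

    t0, tf = 0.0, 1.0
    ells = [lambda t: 1.0, lambda t: t, lambda t: t**2]   # example functions

    n = len(ells)
    G = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            # For real-valued functions the conjugate is a no-op.
            G[i, j], _ = quad(lambda t: ells[i](t) * ells[j](t), t0, tf)

On [math]\displaystyle{ [0,1] }[/math] with these monomials, [math]\displaystyle{ G }[/math] is the 3×3 Hilbert matrix.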

For any bilinear form [math]\displaystyle{ B }[/math] on a finite-dimensional vector space over any field we can define a Gram matrix [math]\displaystyle{ G }[/math] attached to a set of vectors [math]\displaystyle{ v_1,\dots, v_n }[/math] by [math]\displaystyle{ G_{ij} = B(v_i,v_j) }[/math]. The matrix will be symmetric if the bilinear form [math]\displaystyle{ B }[/math] is symmetric.
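
In matrix terms, if [math]\displaystyle{ B(u,v) = u^\mathrm{T} A v }[/math] for some matrix [math]\displaystyle{ A }[/math], the attached Gram matrix is [math]\displaystyle{ V^\mathrm{T} A V }[/math]. A minimal sketch (the matrices are illustrative):

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])          # symmetric matrix representing B
    V = np.array([[1.0, 0.0],
                  [1.0, 1.0]])          # columns are v_1, v_2
    G = V.T @ A @ V                     # G_ij = B(v_i, v_j)

    assert np.allclose(G, G.T)          # symmetric because B is symmetric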

1.1. Applications

  • In Riemannian geometry, given an embedded [math]\displaystyle{ k }[/math]-dimensional Riemannian manifold [math]\displaystyle{ M\subset \mathbb{R}^n }[/math] and a coordinate chart [math]\displaystyle{ \phi:U\to M }[/math] for [math]\displaystyle{ (x_1,\ldots,x_k)\in U\subset\mathbb{R}^k }[/math], the volume form [math]\displaystyle{ \omega }[/math] on [math]\displaystyle{ M }[/math] induced by the embedding may be computed using the Gramian of the coordinate tangent vectors:
[math]\displaystyle{ \omega = \sqrt{\det G}\ dx_1\cdots dx_k,\quad G = \left[\left\langle \tfrac{\partial\phi}{\partial x_i},\tfrac{\partial\phi}{\partial x_j}\right\rangle\right]. }[/math]

This generalizes the classical surface integral of a parametrized surface [math]\displaystyle{ \phi:U\to S\subset \mathbb{R}^3 }[/math] for [math]\displaystyle{ (x,y)\in U\subset\mathbb{R}^2 }[/math]:

[math]\displaystyle{ \int_S f\ dA \ =\ \iint_U f(\phi(x,y))\, \left|\tfrac{\partial\phi}{\partial x}\,{\times}\,\tfrac{\partial\phi}{\partial y}\right|\, dx\, dy. }[/math]
  • If the vectors are centered random variables, the Gramian is approximately proportional to the covariance matrix, the scale factor being the number of sample elements in each vector.
  • In quantum chemistry, the Gram matrix of a set of basis vectors is the overlap matrix.
  • In control theory (or more generally systems theory), the controllability Gramian and observability Gramian determine properties of a linear system.
  • Gramian matrices arise in covariance structure model fitting (see e.g., Jamshidian and Bentler, 1993, Applied Psychological Measurement, Volume 18, pp. 79–94).
  • In the finite element method, the Gram matrix arises from approximating a function from a finite dimensional space; the Gram matrix entries are then the inner products of the basis functions of the finite dimensional subspace.
  • In machine learning, kernel functions are often represented as Gram matrices.[1]
  • Since the Gram matrix over the reals is a symmetric matrix, it is diagonalizable and its eigenvalues are non-negative. Its eigendecomposition is closely tied to the singular value decomposition: if [math]\displaystyle{ G = V^\mathrm{T} V }[/math], the eigenvalues of [math]\displaystyle{ G }[/math] are the squares of the singular values of [math]\displaystyle{ V }[/math] (see the sketch after this list).
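
A minimal sketch of the eigenvalue/singular-value relationship mentioned in the last item (the random matrix V is illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    V = rng.standard_normal((5, 3))               # columns are vectors in R^5
    G = V.T @ V

    eigvals = np.linalg.eigvalsh(G)               # non-negative up to round-off
    svals = np.linalg.svd(V, compute_uv=False)    # singular values of V
    assert np.allclose(np.sort(eigvals), np.sort(svals**2))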

2. Properties

2.1. Positive-Semidefiniteness

The Gram matrix is symmetric when the inner product is real-valued; it is Hermitian in the general, complex case by the definition of an inner product.

The Gram matrix is positive semidefinite, and every positive semidefinite matrix is the Gramian matrix for some set of vectors. The fact that the Gramian matrix is positive-semidefinite can be seen from the following simple derivation:

[math]\displaystyle{ x^\textsf{T} \mathbf{G} x = \sum_{i,j}x_i x_j\left\langle v_i,v_j \right\rangle = \sum_{i,j}\left\langle x_i v_i, x_j v_j \right\rangle = \left\langle \sum_i x_i v_i, \sum_j x_j v_j \right\rangle =\left\| \sum_i x_i v_i \right\|^2 \geq 0 . }[/math]

The first equality follows from the definition of matrix multiplication, the second and third from the bilinearity of the inner product, and the last from the positive definiteness of the inner product. Note that this also shows that the Gramian matrix is positive definite if and only if the vectors [math]\displaystyle{ v_i }[/math] are linearly independent (that is, [math]\displaystyle{ \textstyle\sum_i x_i v_i \neq 0 }[/math] for all nonzero [math]\displaystyle{ x }[/math]).[2]
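
The identity [math]\displaystyle{ x^\textsf{T} \mathbf{G} x = \left\| \sum_i x_i v_i \right\|^2 }[/math] is easy to check numerically. A minimal sketch with illustrative random data:

    import numpy as np

    rng = np.random.default_rng(1)
    V = rng.standard_normal((4, 3))     # columns v_1, v_2, v_3
    G = V.T @ V
    x = rng.standard_normal(3)

    quadratic_form = x @ G @ x
    combination = V @ x                 # sum_i x_i v_i
    assert np.isclose(quadratic_form, combination @ combination)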

2.2. Finding a Vector Realization

Given any positive semidefinite matrix [math]\displaystyle{ M }[/math], one can decompose it as:

[math]\displaystyle{ M = B^* B }[/math],

where [math]\displaystyle{ B^* }[/math] is the conjugate transpose of [math]\displaystyle{ B }[/math] (or [math]\displaystyle{ M = B^\textsf{T} B }[/math] in the real case). Here [math]\displaystyle{ B }[/math] is a [math]\displaystyle{ k \times n }[/math] matrix, where [math]\displaystyle{ k }[/math] is the rank of [math]\displaystyle{ M }[/math]. Various ways to obtain such a decomposition include computing the Cholesky decomposition or taking the non-negative square root of [math]\displaystyle{ M }[/math].
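
A minimal sketch of one such decomposition via the eigendecomposition (numpy.linalg.cholesky would also work, but only when [math]\displaystyle{ M }[/math] is positive definite, i.e. of full rank):

    import numpy as np

    rng = np.random.default_rng(2)
    V = rng.standard_normal((2, 4))
    M = V.T @ V                          # 4x4 PSD matrix of rank k = 2

    # Keep only the (numerically) nonzero eigenvalues, giving a
    # k x n factor B with M = B^T B.
    w, Q = np.linalg.eigh(M)
    keep = w > 1e-12 * w.max()
    B = np.sqrt(w[keep])[:, None] * Q[:, keep].T
    assert np.allclose(M, B.T @ B)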

The columns [math]\displaystyle{ b^{(1)},\dots,b^{(n)} }[/math] of [math]\displaystyle{ B }[/math] can be seen as [math]\displaystyle{ n }[/math] vectors in [math]\displaystyle{ \mathbb{C}^k }[/math] (or [math]\displaystyle{ k }[/math]-dimensional Euclidean space [math]\displaystyle{ \mathbb{R}^k }[/math], in the real case). Then

[math]\displaystyle{ M_{ij} = b^{(i)} \cdot b^{(j)} }[/math]

where the dot product [math]\displaystyle{ a \cdot b = \sum_{\ell=1}^k \overline{a_\ell} b_\ell }[/math] is the usual inner product on [math]\displaystyle{ \mathbb{C}^k }[/math].

Thus a Hermitian matrix [math]\displaystyle{ M }[/math] is positive semidefinite if and only if it is the Gram matrix of some vectors [math]\displaystyle{ b^{(1)},\dots,b^{(n)} }[/math]. Such vectors are called a vector realization of [math]\displaystyle{ M }[/math]. The infinite-dimensional analog of this statement is Mercer's theorem.

2.3. Uniqueness of Vector Realizations

If [math]\displaystyle{ M }[/math] is the Gram matrix of vectors [math]\displaystyle{ v_1,\dots,v_n }[/math] in [math]\displaystyle{ \mathbb{R}^k }[/math], then applying any rotation or reflection of [math]\displaystyle{ \mathbb{R}^k }[/math] (any orthogonal transformation, that is, any Euclidean isometry preserving 0) to the sequence of vectors results in the same Gram matrix. That is, for any [math]\displaystyle{ k \times k }[/math] orthogonal matrix [math]\displaystyle{ Q }[/math], the Gram matrix of [math]\displaystyle{ Q v_1,\dots, Q v_n }[/math] is also [math]\displaystyle{ M }[/math].

This is the only way in which two real vector realizations of [math]\displaystyle{ M }[/math] can differ: the vectors [math]\displaystyle{ v_1,\dots,v_n }[/math] are unique up to orthogonal transformations. In other words, the dot products [math]\displaystyle{ v_i \cdot v_j }[/math] and [math]\displaystyle{ w_i \cdot w_j }[/math] are equal for all [math]\displaystyle{ i, j }[/math] if and only if some rigid transformation of [math]\displaystyle{ \mathbb{R}^k }[/math] transforms the vectors [math]\displaystyle{ v_1,\dots,v_n }[/math] to [math]\displaystyle{ w_1,\dots,w_n }[/math] and 0 to 0.

The same holds in the complex case, with unitary transformations in place of orthogonal ones. That is, if the Gram matrix of vectors [math]\displaystyle{ v_1,\dots,v_n }[/math] is equal to the Gram matrix of vectors [math]\displaystyle{ w_1,\dots,w_n }[/math] in [math]\displaystyle{ \mathbb{C}^k }[/math], then there is a unitary [math]\displaystyle{ k \times k }[/math] matrix [math]\displaystyle{ U }[/math] (meaning [math]\displaystyle{ U^* U = I }[/math]) such that [math]\displaystyle{ v_i = U w_i }[/math] for [math]\displaystyle{ i = 1,\dots,n }[/math].[3]
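
The invariance itself is immediate: for orthogonal [math]\displaystyle{ Q }[/math], [math]\displaystyle{ (QV)^\mathrm{T}(QV) = V^\mathrm{T} Q^\mathrm{T} Q V = V^\mathrm{T} V }[/math]. A minimal numerical sketch ([math]\displaystyle{ Q }[/math] below is a random orthogonal matrix obtained from a QR factorization):

    import numpy as np

    rng = np.random.default_rng(3)
    V = rng.standard_normal((3, 5))                    # columns v_1, ..., v_5 in R^3
    Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # random orthogonal matrix

    assert np.allclose(V.T @ V, (Q @ V).T @ (Q @ V))   # same Gram matrix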

2.4. Other Properties

  • The Gram matrix of any orthonormal basis is the identity matrix.
  • The rank of the Gram matrix of vectors in [math]\displaystyle{ \mathbb{R}^k }[/math] or [math]\displaystyle{ \mathbb{C}^k }[/math] equals the dimension of the space spanned by these vectors.[2]

3. Gram Determinant

The Gram determinant or Gramian is the determinant of the Gram matrix:

[math]\displaystyle{ G(x_1,\dots, x_n)=\begin{vmatrix} \langle x_1,x_1\rangle & \langle x_1,x_2\rangle &\dots & \langle x_1,x_n\rangle\\ \langle x_2,x_1\rangle & \langle x_2,x_2\rangle &\dots & \langle x_2,x_n\rangle\\ \vdots&\vdots&\ddots&\vdots\\ \langle x_n,x_1\rangle & \langle x_n,x_2\rangle &\dots & \langle x_n,x_n\rangle\end{vmatrix}. }[/math]

If [math]\displaystyle{ x_1, \dots, x_n }[/math] are vectors in [math]\displaystyle{ \mathbb{R}^n }[/math], then it is the square of the n-dimensional volume of the parallelotope formed by the vectors. In particular, the vectors are linearly independent if and only if the parallelotope has nonzero n-dimensional volume, if and only if the Gram determinant is nonzero, if and only if the Gram matrix is nonsingular.

The Gram determinant can also be expressed in terms of the exterior product of vectors by

[math]\displaystyle{ G(x_1,\dots,x_n) = \| x_1\wedge\cdots\wedge x_n\|^2. }[/math]
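
For two vectors in [math]\displaystyle{ \mathbb{R}^3 }[/math], both expressions reduce to the squared area of the parallelogram, which is also the squared norm of the cross product. A minimal sketch:

    import numpy as np

    x1 = np.array([1.0, 2.0, 0.0])
    x2 = np.array([0.0, 1.0, 3.0])
    V = np.column_stack([x1, x2])

    gram_det = np.linalg.det(V.T @ V)            # Gram determinant G(x1, x2)
    cross = np.cross(x1, x2)
    assert np.isclose(gram_det, cross @ cross)   # squared parallelogram area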

References

  1. Lanckriet, G. R. G.; Cristianini, N.; Bartlett, P.; Ghaoui, L. E.; Jordan, M. I. (2004). "Learning the kernel matrix with semidefinite programming". Journal of Machine Learning Research 5: 27–72 [p. 29]. https://dl.acm.org/citation.cfm?id=894170. 
  2. Horn, Roger A.; Johnson, Charles R. (2013). Matrix Analysis (2nd ed.). Cambridge University Press. p. 441, Theorem 7.2.10.
  3. Horn, Roger A.; Johnson, Charles R. (2013). Matrix Analysis (2nd ed.). Cambridge University Press. p. 452, Theorem 7.3.11.