Gramian Matrix

In linear algebra, the Gram matrix (or Gramian matrix, Gramian) of a set of vectors $v_1, \dots, v_n$ in an inner product space is the Hermitian matrix of inner products, whose entries are given by $G_{ij} = \langle v_i, v_j \rangle$. An important application is to compute linear independence: a set of vectors is linearly independent if and only if the Gram determinant (the determinant of the Gram matrix) is non-zero. It is named after Jørgen Pedersen Gram.


1. Examples

For finite-dimensional real vectors in $\mathbb{R}^n$ with the usual Euclidean dot product, the Gram matrix is simply $G = V^\top V$, where $V$ is a matrix whose columns are the vectors $v_k$. For complex vectors in $\mathbb{C}^n$, $G = V^H V$, where $V^H$ is the conjugate transpose of $V$.
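As a concrete illustration, the following sketch (assuming NumPy; the vectors and variable names are made up for the example) builds the Gram matrix from column vectors and checks an entry against the corresponding inner product:

```python
import numpy as np

# Hypothetical example vectors, stored as the columns of V.
v1 = np.array([1.0, 2.0, 0.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = np.array([1.0, 0.0, 2.0])
V = np.column_stack([v1, v2, v3])

# Real case: G = V^T V, so G[i, j] = <v_i, v_j>.
G = V.T @ V
assert np.isclose(G[0, 1], np.dot(v1, v2))

# Complex case: G = V^H V, with V^H the conjugate transpose.
W = (1 + 1j) * V
G_complex = W.conj().T @ W
assert np.allclose(G_complex, G_complex.conj().T)  # Hermitian
```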

Given square-integrable functions $\{\ell_i(\cdot),\ i = 1, \dots, n\}$ on the interval $[t_0, t_f]$, the Gram matrix $G = [G_{ij}]$ is:

$$G_{ij} = \int_{t_0}^{t_f} \ell_i(\tau)\, \overline{\ell_j(\tau)}\, d\tau.$$
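A minimal numerical sketch of this definition, assuming SciPy's quad routine and a hypothetical choice of real-valued functions and interval:

```python
import numpy as np
from scipy.integrate import quad

# Hypothetical real-valued, square-integrable functions on [t0, tf].
t0, tf = 0.0, 1.0
funcs = [lambda t: 1.0, lambda t: t, lambda t: t**2]

n = len(funcs)
G = np.empty((n, n))
for i in range(n):
    for j in range(n):
        # G_ij = integral of l_i(tau) * conj(l_j(tau)) over [t0, tf];
        # the conjugate is a no-op for real-valued functions.
        G[i, j], _ = quad(lambda t: funcs[i](t) * funcs[j](t), t0, tf)

print(G)  # entries 1/(i+j+1): [[1, 1/2, 1/3], [1/2, 1/3, 1/4], ...]
```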

For any bilinear form $B$ on a finite-dimensional vector space over any field we can define a Gram matrix $G$ attached to a set of vectors $v_1, \dots, v_n$ by $G_{ij} = B(v_i, v_j)$. The matrix will be symmetric if the bilinear form $B$ is symmetric.
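As a small illustration (the form $B(x, y) = x^\top A y$ and the vectors below are hypothetical choices, assuming NumPy):

```python
import numpy as np

# Hypothetical symmetric bilinear form B(x, y) = x^T A y on R^2.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = lambda x, y: x @ A @ y

vectors = [np.array([1.0, 0.0]), np.array([1.0, 1.0])]

# G[i, j] = B(v_i, v_j); symmetric because A (hence B) is symmetric.
G = np.array([[B(vi, vj) for vj in vectors] for vi in vectors])
print(G)  # [[2. 3.] [3. 7.]]
```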

1.1. Applications

  • In Riemannian geometry, given an embedded $k$-dimensional Riemannian manifold $M \subset \mathbb{R}^n$ and a coordinate chart $\phi: U \to M$ for $(x_1, \dots, x_k) \in U \subset \mathbb{R}^k$, the volume form $\omega$ on $M$ induced by the embedding may be computed using the Gramian of the coordinate tangent vectors:
$$\omega = \sqrt{\det G}\ dx_1 \cdots dx_k, \quad G = \left[ \left\langle \tfrac{\partial \phi}{\partial x_i}, \tfrac{\partial \phi}{\partial x_j} \right\rangle \right].$$

This generalizes the classical surface integral of a parametrized surface $\phi: U \to S \subset \mathbb{R}^3$ for $(x, y) \in U \subset \mathbb{R}^2$:

$$\int_S f\, dA = \iint_U f(\phi(x, y))\, \left| \tfrac{\partial \phi}{\partial x} \times \tfrac{\partial \phi}{\partial y} \right|\, dx\, dy.$$
  • If the vectors are centered random variables, the Gramian is approximately proportional to the covariance matrix, with the scaling determined by the number of elements in the vector.
  • In quantum chemistry, the Gram matrix of a set of basis vectors is the overlap matrix.
  • In control theory (or more generally systems theory), the controllability Gramian and observability Gramian determine properties of a linear system.
  • Gramian matrices arise in covariance structure model fitting (see e.g., Jamshidian and Bentler, 1993, Applied Psychological Measurement, Volume 18, pp. 79–94).
  • In the finite element method, the Gram matrix arises from approximating a function from a finite dimensional space; the Gram matrix entries are then the inner products of the basis functions of the finite dimensional subspace.
  • In machine learning, kernel functions are often represented as Gram matrices; a short sketch follows this list.[1]
  • Since the Gram matrix over the reals is a symmetric matrix, it is diagonalizable and its eigenvalues are non-negative. Diagonalizing the Gram matrix $G = V^\top V$ yields the squared singular values and right-singular vectors of $V$; in this sense its diagonalization is essentially the singular value decomposition of $V$.
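As referenced in the machine-learning item above, a kernel matrix is exactly a Gram matrix in the kernel's feature space. A minimal sketch (assuming NumPy; the data and the choice of RBF kernel are made up for illustration):

```python
import numpy as np

# Hypothetical data: five samples in R^3.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))

# RBF kernel k(x, y) = exp(-gamma * ||x - y||^2); the kernel (Gram)
# matrix K has entries K[i, j] = k(X[i], X[j]).
gamma = 0.5
sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
K = np.exp(-gamma * sq_dists)

# K is symmetric positive semidefinite, as any Gram matrix must be.
eigenvalues = np.linalg.eigvalsh(K)
assert np.all(eigenvalues >= -1e-10)
```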

2. Properties

2.1. Positive-Semidefiniteness

The Gram matrix is symmetric in the case the inner product is real-valued; it is Hermitian in the general, complex case by definition of an inner product.

The Gram matrix is positive semidefinite, and every positive semidefinite matrix is the Gramian matrix for some set of vectors. The fact that the Gramian matrix is positive-semidefinite can be seen from the following simple derivation:

$$x^\top G x = \sum_{i,j} x_i x_j \langle v_i, v_j \rangle = \sum_{i,j} \langle x_i v_i, x_j v_j \rangle = \left\langle \sum_i x_i v_i, \sum_j x_j v_j \right\rangle = \left\| \sum_i x_i v_i \right\|^2 \geq 0.$$

The first equality follows from the definition of matrix multiplication, the second and third from the bilinearity of the inner product, and the last from the positive definiteness of the inner product. Note that this also shows that the Gram matrix is positive definite if and only if the vectors $v_i$ are linearly independent (that is, $\sum_i x_i v_i \neq 0$ for all nonzero $x$).[2]
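A quick numerical check of this derivation (a sketch assuming NumPy; the vectors are an arbitrary made-up set):

```python
import numpy as np

rng = np.random.default_rng(1)
V = rng.normal(size=(4, 3))      # columns v_1, v_2, v_3 in R^4
G = V.T @ V                      # Gram matrix

x = rng.normal(size=3)
quadratic_form = x @ G @ x                  # x^T G x
norm_squared = np.linalg.norm(V @ x) ** 2   # || sum_i x_i v_i ||^2

# The derivation says these agree, and both are non-negative.
assert np.isclose(quadratic_form, norm_squared)
assert quadratic_form >= 0
```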

2.2. Finding a Vector Realization

Given any positive semidefinite matrix $M$, one can decompose it as:

$$M = B^* B,$$

where $B^*$ is the conjugate transpose of $B$ (or $M = B^\top B$ in the real case). Here $B$ is a $k \times n$ matrix, where $k$ is the rank of $M$. Various ways to obtain such a decomposition include computing the Cholesky decomposition or taking the non-negative square root of $M$.

The columns $b^{(1)}, \dots, b^{(n)}$ of $B$ can be seen as $n$ vectors in $\mathbb{C}^k$ (or $k$-dimensional Euclidean space $\mathbb{R}^k$, in the real case). Then

$$M_{ij} = b^{(i)} \cdot b^{(j)},$$

where the dot product $a \cdot b = \sum_{\ell=1}^{k} \overline{a_\ell}\, b_\ell$ is the usual inner product on $\mathbb{C}^k$.

Thus a Hermitian matrix $M$ is positive semidefinite if and only if it is the Gram matrix of some vectors $b^{(1)}, \dots, b^{(n)}$. Such vectors are called a vector realization of $M$. The infinite-dimensional analog of this statement is Mercer's theorem.
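A sketch of recovering a vector realization (assuming NumPy and a made-up real positive semidefinite matrix; the eigendecomposition route is used here because, unlike Cholesky, it also handles singular $M$):

```python
import numpy as np

# A hypothetical real positive semidefinite matrix.
M = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])

# Non-negative square root via the eigendecomposition M = Q diag(w) Q^T:
# B = diag(sqrt(w)) Q^T satisfies B^T B = M.
w, Q = np.linalg.eigh(M)
w = np.clip(w, 0.0, None)          # clamp tiny negative round-off
B = np.diag(np.sqrt(w)) @ Q.T

# The columns of B are a vector realization of M.
assert np.allclose(B.T @ B, M)
```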

2.3. Uniqueness of Vector Realizations

If $M$ is the Gram matrix of vectors $v_1, \dots, v_n$ in $\mathbb{R}^k$, then applying any rotation or reflection of $\mathbb{R}^k$ (any orthogonal transformation, that is, any Euclidean isometry preserving 0) to the sequence of vectors results in the same Gram matrix. That is, for any $k \times k$ orthogonal matrix $Q$, the Gram matrix of $Q v_1, \dots, Q v_n$ is also $M$.

This is the only way in which two real vector realizations of $M$ can differ: the vectors $v_1, \dots, v_n$ are unique up to orthogonal transformations. In other words, the dot products $v_i \cdot v_j$ and $w_i \cdot w_j$ are equal if and only if some rigid transformation of $\mathbb{R}^k$ transforms the vectors $v_1, \dots, v_n$ to $w_1, \dots, w_n$ and 0 to 0.

The same holds in the complex case, with unitary transformations in place of orthogonal ones. That is, if the Gram matrix of vectors $v_1, \dots, v_n$ is equal to the Gram matrix of vectors $w_1, \dots, w_n$ in $\mathbb{C}^k$, then there is a unitary $k \times k$ matrix $U$ (meaning $U^* U = I$) such that $v_i = U w_i$ for $i = 1, \dots, n$.[3]
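The invariance in the real case can be checked directly (a sketch assuming NumPy; the random vectors and the orthogonal matrix are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
V = rng.normal(size=(3, 4))        # columns: four vectors in R^3

# A random orthogonal matrix Q, obtained from a QR factorization.
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
assert np.allclose(Q.T @ Q, np.eye(3))

# Rotating/reflecting every vector leaves the Gram matrix unchanged:
# (QV)^T (QV) = V^T Q^T Q V = V^T V.
assert np.allclose((Q @ V).T @ (Q @ V), V.T @ V)
```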

2.4. Other Properties

  • The Gram matrix of any orthonormal basis is the identity matrix.
  • The rank of the Gram matrix of vectors in $\mathbb{R}^k$ or $\mathbb{C}^k$ equals the dimension of the space spanned by these vectors.[2]

3. Gram Determinant

The Gram determinant or Gramian is the determinant of the Gram matrix:

$$|G(x_1, \dots, x_n)| = \begin{vmatrix} \langle x_1, x_1 \rangle & \langle x_1, x_2 \rangle & \cdots & \langle x_1, x_n \rangle \\ \langle x_2, x_1 \rangle & \langle x_2, x_2 \rangle & \cdots & \langle x_2, x_n \rangle \\ \vdots & \vdots & \ddots & \vdots \\ \langle x_n, x_1 \rangle & \langle x_n, x_2 \rangle & \cdots & \langle x_n, x_n \rangle \end{vmatrix}.$$

If $x_1, \dots, x_n$ are vectors in $\mathbb{R}^n$, then it is the square of the $n$-dimensional volume of the parallelotope formed by the vectors. In particular, the vectors are linearly independent if and only if the parallelotope has nonzero $n$-dimensional volume, if and only if the Gram determinant is nonzero, if and only if the Gram matrix is nonsingular.

The Gram determinant can also be expressed in terms of the exterior product of vectors by

$$G(x_1, \dots, x_n) = \left\| x_1 \wedge \cdots \wedge x_n \right\|^2.$$
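A short sketch of the volume interpretation (assuming NumPy; the vectors are made up), computing the area of a parallelogram in $\mathbb{R}^3$ as the square root of the Gram determinant and cross-checking against the cross product:

```python
import numpy as np

# Two made-up vectors spanning a parallelogram in R^3.
x1 = np.array([1.0, 0.0, 0.0])
x2 = np.array([1.0, 2.0, 2.0])
V = np.column_stack([x1, x2])

# Gram determinant = squared 2-dimensional volume (area).
gram_det = np.linalg.det(V.T @ V)
area = np.sqrt(gram_det)

# Cross-check: in R^3 the parallelogram area is ||x1 x x2||.
assert np.isclose(area, np.linalg.norm(np.cross(x1, x2)))
print(area)  # 2.828... = 2*sqrt(2)
```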

References

  1. Lanckriet, G. R. G.; Cristianini, N.; Bartlett, P.; Ghaoui, L. E.; Jordan, M. I. (2004). "Learning the kernel matrix with semidefinite programming". Journal of Machine Learning Research 5: 27–72 [p. 29]. https://dl.acm.org/citation.cfm?id=894170.
  2. Horn & Johnson 2013, p. 441, Theorem 7.2.10.
  3. Horn & Johnson 2013, p. 452, Theorem 7.3.11.