Lie's Theorem: History

In mathematics, specifically the theory of Lie algebras, Lie's theorem states that, over an algebraically closed field of characteristic zero, if $\displaystyle{ \pi: \mathfrak{g} \to \mathfrak{gl}(V) }$ is a finite-dimensional representation of a solvable Lie algebra, then $\displaystyle{ \pi(\mathfrak{g}) }$ stabilizes a flag $\displaystyle{ V = V_0 \supset V_1 \supset \cdots \supset V_n = 0, \operatorname{codim} V_i = i }$; "stabilizes" means $\displaystyle{ \pi(X) V_i \subset V_i }$ for each $\displaystyle{ X \in \mathfrak{g} }$ and each i. Put another way, the theorem says there is a basis for V in which all linear transformations in $\displaystyle{ \pi(\mathfrak{g}) }$ are represented by upper triangular matrices. This generalizes the result of Frobenius that commuting matrices are simultaneously upper triangularizable, since commuting matrices form an abelian Lie algebra, which is a fortiori solvable. A consequence of Lie's theorem is that any finite-dimensional solvable Lie algebra over a field of characteristic 0 has a nilpotent derived algebra (see Consequences below). Also, to each flag in a finite-dimensional vector space V there corresponds a Borel subalgebra (consisting of the linear transformations stabilizing the flag); thus, the theorem says that $\displaystyle{ \pi(\mathfrak{g}) }$ is contained in some Borel subalgebra of $\displaystyle{ \mathfrak{gl}(V) }$.
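As a concrete illustration, consider the two-dimensional non-abelian solvable Lie algebra with basis $\displaystyle{ \{X, Y\} }$ and bracket $\displaystyle{ [X, Y] = Y }$. The sketch below (the particular 2×2 matrices are an illustrative choice, not prescribed by the text) verifies the bracket relation numerically and exhibits a common eigenvector, hence a stabilized flag:

```python
import numpy as np

# A representation of the 2-dimensional solvable Lie algebra [X, Y] = Y
# on C^2 by upper triangular matrices (illustrative choice of matrices).
X = np.array([[1.0, 0.0], [0.0, 0.0]])
Y = np.array([[0.0, 1.0], [0.0, 0.0]])

# Verify the bracket relation [X, Y] = XY - YX = Y, so span{X, Y} is a
# solvable Lie subalgebra of gl(2).
assert np.allclose(X @ Y - Y @ X, Y)

# e1 is a common eigenvector: X e1 = 1 * e1 and Y e1 = 0 * e1, so the flag
# C^2 ⊃ span{e1} ⊃ 0 is stabilized, as Lie's theorem predicts.
e1 = np.array([1.0, 0.0])
assert np.allclose(X @ e1, 1.0 * e1)
assert np.allclose(Y @ e1, 0.0 * e1)
```

In the basis $\displaystyle{ (e_1, e_2) }$ both matrices are already upper triangular, which is exactly the conclusion of the theorem for this algebra.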


## 1. Counterexample

For algebraically closed fields of characteristic p > 0, Lie's theorem holds provided the dimension of the representation is less than p (see the proof below), but it can fail for representations of dimension p. An example is given by the 3-dimensional nilpotent Lie algebra spanned by 1, x, and d/dx acting on the p-dimensional vector space $\displaystyle{ k[x]/(x^p) }$, which has no common eigenvector. Taking the semidirect product of this 3-dimensional Lie algebra with the p-dimensional representation (considered as an abelian Lie algebra) gives a solvable Lie algebra whose derived algebra is not nilpotent.
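A quick sanity check of this counterexample, taking p = 5 and working with integer matrices reduced mod p (the matrix encoding of the three operators is ours):

```python
import numpy as np

p = 5  # any prime; the representation has dimension p

# Matrices over F_p for the operators on k[x]/(x^p), in the basis
# 1, x, ..., x^(p-1): Mx = multiplication by x, D = d/dx.
Mx = np.zeros((p, p), dtype=int)
D = np.zeros((p, p), dtype=int)
for i in range(p - 1):
    Mx[i + 1, i] = 1      # x * x^i = x^(i+1), and x * x^(p-1) = 0
    D[i, i + 1] = i + 1   # d/dx x^(i+1) = (i+1) x^i

# [D, Mx] = I holds only mod p (the identity has trace p, i.e. 0 in F_p),
# so span{I, Mx, D} is a 3-dimensional nilpotent Lie algebra over F_p.
assert np.array_equal((D @ Mx - Mx @ D) % p, np.eye(p, dtype=int))

# Mx is nilpotent, so a common eigenvector would have to lie in ker(Mx),
# which is spanned by x^(p-1) (the last basis vector) ...
e_last = np.zeros(p, dtype=int)
e_last[p - 1] = 1
assert np.count_nonzero(Mx @ e_last % p) == 0

# ... but D sends x^(p-1) to (p-1) x^(p-2), outside that line:
# there is no common eigenvector, so Lie's theorem fails here.
assert (D @ e_last % p)[p - 2] == (p - 1) % p
```

The same check goes through for any prime p, since only the relation [D, Mx] ≡ I (mod p) and the nilpotence of Mx are used.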

## 2. Proof

The proof is by induction on the dimension of $\displaystyle{ \mathfrak{g} }$ and consists of several steps. (Note: the structure of the proof is very similar to that of Engel's theorem.) The base case is trivial, so we assume the dimension of $\displaystyle{ \mathfrak{g} }$ is positive. We also assume V is not zero. For simplicity, we write $\displaystyle{ X \cdot v = \pi(X) v }$.

Step 1: Observe that the theorem is equivalent to the statement:[1]

• There exists a vector in V that is an eigenvector for each linear transformation in $\displaystyle{ \pi(\mathfrak{g}) }$.
Indeed, the theorem says in particular that a nonzero vector spanning $\displaystyle{ V_{n-1} }$ is a common eigenvector for all the linear transformations in $\displaystyle{ \pi(\mathfrak{g}) }$. Conversely, if v is a common eigenvector, take $\displaystyle{ V_{n-1} }$ to be its span; then $\displaystyle{ \pi(\mathfrak{g}) }$ admits a common eigenvector in the quotient $\displaystyle{ V/V_{n-1} }$, and one repeats the argument.

Step 2: Find an ideal $\displaystyle{ \mathfrak{h} }$ of codimension one in $\displaystyle{ \mathfrak{g} }$.

Let $\displaystyle{ D\mathfrak{g} = [\mathfrak{g}, \mathfrak{g}] }$ be the derived algebra. Since $\displaystyle{ \mathfrak{g} }$ is solvable and has positive dimension, $\displaystyle{ D\mathfrak{g} \ne \mathfrak{g} }$ and so the quotient $\displaystyle{ \mathfrak{g}/D\mathfrak{g} }$ is a nonzero abelian Lie algebra, which certainly contains an ideal of codimension one and by the ideal correspondence, it corresponds to an ideal of codimension one in $\displaystyle{ \mathfrak{g} }$.

Step 3: There exists some linear functional $\displaystyle{ \lambda }$ in $\displaystyle{ \mathfrak{h}^* }$ such that

$\displaystyle{ V_{\lambda} = \{ v \in V | X \cdot v = \lambda(X) v, X \in \mathfrak{h} \} }$

is nonzero.

This follows from the inductive hypothesis (it is easy to check that the eigenvalues determine a linear functional).

Step 4: $\displaystyle{ V_{\lambda} }$ is a $\displaystyle{ \mathfrak{g} }$-module.

(Note this step proves a general fact and does not involve solvability.)
Let $\displaystyle{ Y }$ be in $\displaystyle{ \mathfrak{g} }$, $\displaystyle{ v \in V_{\lambda} }$ and set recursively $\displaystyle{ v_0 = v, \, v_{i+1} = Y \cdot v_i }$. For any $\displaystyle{ X \in \mathfrak{h} }$, since $\displaystyle{ \mathfrak{h} }$ is an ideal, induction on i gives
$\displaystyle{ X \cdot v_i \equiv \lambda(X) v_i \mod \operatorname{span}\{ v_0, \dots, v_{i-1} \} }$.
This says that $\displaystyle{ X }$ (that is, $\displaystyle{ \pi(X) }$) restricted to $\displaystyle{ U = \operatorname{span} \{ v_i | i \ge 0 \} }$ is represented by an upper triangular matrix whose diagonal entries all equal $\displaystyle{ \lambda(X) }$. The same holds for $\displaystyle{ [X, Y] \in \mathfrak{h} }$, so $\displaystyle{ \dim(U) \lambda([X, Y]) = \operatorname{tr}(\pi([X, Y])|_U) = \operatorname{tr}([\pi(X)|_U, \pi(Y)|_U]) = 0 }$, the trace of a commutator being zero. Since $\displaystyle{ \dim(U) }$ is invertible in the base field, $\displaystyle{ \lambda([X, Y]) = 0 }$; hence $\displaystyle{ X \cdot (Y \cdot v) = Y \cdot (X \cdot v) + [X, Y] \cdot v = \lambda(X) (Y \cdot v) }$, i.e., $\displaystyle{ Y \cdot v }$ is an eigenvector for X.
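The two facts driving this step are that a commutator has trace zero and that an upper triangular matrix with constant diagonal c on a d-dimensional space has trace d·c. A numerical sketch (random matrices, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
d, c = 4, 2.5

# tr([A, B]) = tr(AB) - tr(BA) = 0 for any two square matrices.
A = rng.standard_normal((d, d))
B = rng.standard_normal((d, d))
assert abs(np.trace(A @ B - B @ A)) < 1e-10

# An upper triangular matrix with constant diagonal c has trace d * c;
# applied to pi([X, Y])|_U this forces dim(U) * lambda([X, Y]) = 0.
A_const = np.triu(rng.standard_normal((d, d)), k=1) + c * np.eye(d)
assert abs(np.trace(A_const) - d * c) < 1e-12
```

In characteristic 0 (or whenever dim(U) < p), dim(U) is invertible, so the conclusion λ([X, Y]) = 0 follows; this is exactly where the dimension restriction of the counterexample section enters.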

Step 5: Finish up the proof by finding a common eigenvector.

Write $\displaystyle{ \mathfrak{g} = \mathfrak{h} + L }$ where L is a one-dimensional vector subspace. Since the base field k is algebraically closed, some nonzero element of L has an eigenvector in the nonzero $\displaystyle{ \mathfrak{g} }$-module $\displaystyle{ V_{\lambda} }$ (and an eigenvector for one nonzero element of L is one for every element of L). Since that vector is also an eigenvector for each element of $\displaystyle{ \mathfrak{h} }$, the proof is complete. $\displaystyle{ \square }$

## 3. Consequences

The theorem applies in particular to the adjoint representation $\displaystyle{ \operatorname{ad}: \mathfrak{g} \to \mathfrak{gl}(\mathfrak{g}) }$ of a (finite-dimensional) solvable Lie algebra $\displaystyle{ \mathfrak{g} }$; thus, one can choose a basis of $\displaystyle{ \mathfrak{g} }$ with respect to which $\displaystyle{ \operatorname{ad}(\mathfrak{g}) }$ consists of upper-triangular matrices. It follows easily that for each $\displaystyle{ x, y \in \mathfrak{g} }$, $\displaystyle{ \operatorname{ad}([x, y]) = [\operatorname{ad}(x), \operatorname{ad}(y)] }$ has diagonal consisting of zeros; i.e., $\displaystyle{ \operatorname{ad}([x, y]) }$ is a nilpotent matrix. By Engel's theorem, this implies that $\displaystyle{ [\mathfrak g, \mathfrak g] }$ is a nilpotent Lie algebra; the converse is obviously true as well. Moreover, whether a linear transformation is nilpotent or not can be determined after extending the base field to its algebraic closure. Hence, one concludes the statement:[2]

A finite-dimensional Lie algebra $\displaystyle{ \mathfrak g }$ over a field of characteristic zero is solvable if and only if the derived algebra $\displaystyle{ D \mathfrak g = [\mathfrak g, \mathfrak g] }$ is nilpotent.
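The mechanism behind this consequence is easy to see for the Borel subalgebra of upper triangular matrices in $\displaystyle{ \mathfrak{gl}(n) }$: a commutator of upper triangular matrices is strictly upper triangular, and strictly upper triangular matrices are nilpotent. A numerical sketch (random matrices, illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4

# Two random upper triangular matrices (elements of the Borel subalgebra).
A = np.triu(rng.standard_normal((n, n)))
B = np.triu(rng.standard_normal((n, n)))

# Their commutator is strictly upper triangular (zero diagonal) ...
C = A @ B - B @ A
assert np.allclose(np.diag(C), 0.0)

# ... and any strictly upper triangular n x n matrix is nilpotent: C^n = 0.
assert np.allclose(np.linalg.matrix_power(C, n), 0.0)
```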

Lie's theorem also establishes one direction in Cartan's criterion for solvability: if V is a finite-dimensional vector space over a field of characteristic zero and $\displaystyle{ \mathfrak{g} \subset \mathfrak{gl}(V) }$ is a Lie subalgebra, then $\displaystyle{ \mathfrak{g} }$ is solvable if and only if $\displaystyle{ \operatorname{tr}(XY) = 0 }$ for every $\displaystyle{ X \in \mathfrak{g} }$ and $\displaystyle{ Y \in [\mathfrak{g}, \mathfrak{g}] }$.[3]

Indeed, as above, after extending the base field, the implication $\displaystyle{ \Rightarrow }$ is seen easily. (The converse is more difficult to prove.)
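The easy implication can be seen in coordinates: after triangularizing via Lie's theorem, X is upper triangular and Y ∈ [g, g] is strictly upper triangular, so tr(XY) = 0 term by term. A minimal numerical sketch (random matrices, illustrative only):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4

# X upper triangular, Y strictly upper triangular (zero diagonal), as after
# triangularizing a solvable subalgebra and its derived algebra.
X = np.triu(rng.standard_normal((n, n)))
Y = np.triu(rng.standard_normal((n, n)), k=1)

# Each diagonal entry of XY is a sum of products with a zero factor.
assert abs(np.trace(X @ Y)) < 1e-12
```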

Lie's theorem (for various V) is equivalent to the statement:[4]

For a solvable Lie algebra $\displaystyle{ \mathfrak g }$, each finite-dimensional simple $\displaystyle{ \mathfrak{g} }$-module (i.e., irreducible as a representation) has dimension one.

Indeed, Lie's theorem clearly implies this statement. Conversely, assume the statement is true. Given a finite-dimensional $\displaystyle{ \mathfrak g }$-module V, let $\displaystyle{ V_1 }$ be a maximal $\displaystyle{ \mathfrak g }$-submodule (which exists by finiteness of the dimension). Then, by maximality, $\displaystyle{ V/V_1 }$ is simple and thus one-dimensional. Induction on the dimension of V now finishes the proof.

The statement says in particular that a finite-dimensional simple module over an abelian Lie algebra is one-dimensional; this fact remains true without the assumption that the base field has characteristic zero.[5]

Here is another quite useful application:[6]

Let $\displaystyle{ \mathfrak{g} }$ be a finite-dimensional Lie algebra over an algebraically closed field of characteristic zero with radical $\displaystyle{ \operatorname{rad}(\mathfrak{g}) }$. Then each finite-dimensional simple representation $\displaystyle{ \pi: \mathfrak{g} \to \mathfrak{gl}(V) }$ is the tensor product of a simple representation of $\displaystyle{ \mathfrak{g}/\operatorname{rad}(\mathfrak{g}) }$ with a one-dimensional representation of $\displaystyle{ \mathfrak{g} }$ (i.e., a linear functional vanishing on Lie brackets).

By Lie's theorem, we can find a linear functional $\displaystyle{ \lambda }$ of $\displaystyle{ \operatorname{rad}(\mathfrak{g}) }$ such that the weight space $\displaystyle{ V_{\lambda} }$ of $\displaystyle{ \operatorname{rad}(\mathfrak{g}) }$ is nonzero. By Step 4 of the proof of Lie's theorem, $\displaystyle{ V_{\lambda} }$ is also a $\displaystyle{ \mathfrak{g} }$-module; since V is simple, $\displaystyle{ V = V_{\lambda} }$. In particular, for each $\displaystyle{ X \in \operatorname{rad}(\mathfrak{g}) }$, $\displaystyle{ \operatorname{tr}(\pi(X)) = \dim(V) \lambda(X) }$. Extend $\displaystyle{ \lambda }$ to a linear functional on $\displaystyle{ \mathfrak{g} }$ that vanishes on $\displaystyle{ [\mathfrak g, \mathfrak g] }$; $\displaystyle{ \lambda }$ is then a one-dimensional representation of $\displaystyle{ \mathfrak{g} }$. Now, $\displaystyle{ (\pi, V) \simeq (\pi, V) \otimes (-\lambda) \otimes \lambda }$. Since $\displaystyle{ \pi }$ coincides with $\displaystyle{ \lambda }$ on $\displaystyle{ \operatorname{rad}(\mathfrak{g}) }$, the representation $\displaystyle{ V \otimes (-\lambda) }$ is trivial on $\displaystyle{ \operatorname{rad}(\mathfrak{g}) }$ and thus descends to a (simple) representation of $\displaystyle{ \mathfrak{g}/\operatorname{rad}(\mathfrak{g}) }$. $\displaystyle{ \square }$

The content is sourced from: https://handwiki.org/wiki/Lie%27s_theorem

### References

1. Serre, Theorem 3″
2. Humphreys, Ch. II, § 4.1., Corollary C.
3. Serre, Theorem 4
4. Serre, Theorem 3'
5. Jacobson, Ch. II, § 6, Lemma 5.
6. Fulton & Harris, Proposition 9.17.