In mathematics, specifically the theory of Lie algebras, Lie's theorem states that, over an algebraically closed field of characteristic zero, if [math]\displaystyle{ \pi: \mathfrak{g} \to \mathfrak{gl}(V) }[/math] is a finite-dimensional representation of a solvable Lie algebra, then [math]\displaystyle{ \pi(\mathfrak{g}) }[/math] stabilizes a flag [math]\displaystyle{ V = V_0 \supset V_1 \supset \cdots \supset V_n = 0, \operatorname{codim} V_i = i }[/math]; "stabilizes" means [math]\displaystyle{ \pi(X) V_i \subset V_i }[/math] for each [math]\displaystyle{ X \in \mathfrak{g} }[/math] and i. Put another way, the theorem says there is a basis for V such that all linear transformations in [math]\displaystyle{ \pi(\mathfrak{g}) }[/math] are represented by upper triangular matrices. This is a generalization of the result of Frobenius that commuting matrices are simultaneously upper triangularizable, as commuting matrices form an abelian Lie algebra, which is a fortiori solvable. A consequence of Lie's theorem is that any finite-dimensional solvable Lie algebra over a field of characteristic 0 has a nilpotent derived algebra (see #Consequences). Also, to each flag in a finite-dimensional vector space V, there corresponds a Borel subalgebra (consisting of the linear transformations stabilizing the flag); thus, the theorem says that [math]\displaystyle{ \pi(\mathfrak{g}) }[/math] is contained in some Borel subalgebra of [math]\displaystyle{ \mathfrak{gl}(V) }[/math].
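The special case of commuting matrices can be illustrated numerically; a minimal sketch, assuming the generic situation where one of the matrices has distinct eigenvalues (in that case its eigenbasis automatically triangularizes, indeed diagonalizes, every matrix commuting with it):

```python
import numpy as np

# Commuting matrices form an abelian (hence solvable) Lie algebra, so by
# Lie's theorem they are simultaneously triangularizable.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
P = np.array([[1.0, 2.0],
              [3.0, 4.0]])     # a change of basis hiding the triangular form
A = np.linalg.inv(P) @ A @ P
B = A @ A + A                  # a polynomial in A, so [A, B] = 0
assert np.allclose(A @ B, B @ A)

# A has distinct eigenvalues (2 and 3), so each of its eigenvectors is
# automatically an eigenvector of B; A's eigenbasis then triangularizes
# (here even diagonalizes) both matrices simultaneously.
w, Q = np.linalg.eig(A)
T_A = np.linalg.inv(Q) @ A @ Q
T_B = np.linalg.inv(Q) @ B @ Q
assert np.allclose(np.tril(T_A, -1), 0, atol=1e-8)  # strictly lower part vanishes
assert np.allclose(np.tril(T_B, -1), 0, atol=1e-8)
```

For matrices with repeated eigenvalues the argument needs the full induction of the theorem, but the generic case above already shows the mechanism.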
For algebraically closed fields of characteristic p > 0, Lie's theorem holds provided the dimension of the representation is less than p (see the proof below), but it can fail for representations of dimension p. An example is given by the 3-dimensional nilpotent Lie algebra spanned by 1, x, and d/dx acting on the p-dimensional vector space k[x]/(x^p), which has no common eigenvector. Taking the semidirect product of this 3-dimensional Lie algebra with the p-dimensional representation (considered as an abelian Lie algebra) gives a solvable Lie algebra whose derived algebra is not nilpotent.
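The key feature of this counterexample can be checked directly; a small sketch, taking p = 5 for illustration, with matrices for multiplication by x and for d/dx in the basis 1, x, …, x^{p-1} of k[x]/(x^p):

```python
import numpy as np

p = 5  # illustrative choice of the characteristic

# Multiplication by x on k[x]/(x^p): x^k -> x^{k+1}, with x^{p-1} -> 0.
X = np.zeros((p, p), dtype=int)
for k in range(p - 1):
    X[k + 1, k] = 1

# Differentiation d/dx: x^k -> k x^{k-1}.
D = np.zeros((p, p), dtype=int)
for k in range(1, p):
    D[k - 1, k] = k

# Over GF(p) the commutator [d/dx, x] is the identity matrix (over the
# integers its last diagonal entry is 1 - p, which vanishes only mod p).
C = (D @ X - X @ D) % p
assert np.array_equal(C, np.eye(p, dtype=int))

# Hence no common eigenvector exists: a common eigenvector v of D and X
# would be annihilated by the commutator [D, X], yet the identity sends v to v.
```

Over a field of characteristic zero, by contrast, multiplication by x on k[x]/(x^p) is not part of such a Heisenberg-type relation on a finite-dimensional space, consistent with the theorem.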
The proof is by induction on the dimension of [math]\displaystyle{ \mathfrak{g} }[/math] and consists of several steps. (Note: the structure of the proof is very similar to that for Engel's theorem.) The base case is trivial, and we assume the dimension of [math]\displaystyle{ \mathfrak{g} }[/math] is positive. We also assume V is not zero. For simplicity, we write [math]\displaystyle{ X \cdot v = \pi(X) v }[/math].
Step 1: Observe that the theorem is equivalent to the statement:^{[1]} there exists a nonzero vector [math]\displaystyle{ v \in V }[/math] that is a simultaneous eigenvector for all [math]\displaystyle{ \pi(X), X \in \mathfrak{g} }[/math]. Indeed, given such a common eigenvector, the flag is obtained by induction on the dimension of V, applied to the quotient of V by the line spanned by v.
Step 2: Find an ideal [math]\displaystyle{ \mathfrak{h} }[/math] of codimension one in [math]\displaystyle{ \mathfrak{g} }[/math]. (One exists because [math]\displaystyle{ \mathfrak{g} }[/math] is solvable and nonzero, so [math]\displaystyle{ [\mathfrak g, \mathfrak g] \ne \mathfrak g }[/math], and any codimension-one subspace containing [math]\displaystyle{ [\mathfrak g, \mathfrak g] }[/math] is an ideal.)
Step 3: There exists some linear functional [math]\displaystyle{ \lambda }[/math] in [math]\displaystyle{ \mathfrak{h}^* }[/math] such that the weight space [math]\displaystyle{ V_{\lambda} = \{ v \in V : H \cdot v = \lambda(H) v \text{ for all } H \in \mathfrak{h} \} }[/math] is nonzero.
Step 4: [math]\displaystyle{ V_{\lambda} }[/math] is a [math]\displaystyle{ \mathfrak{g} }[/math]-module; that is, it is stable under the action of all of [math]\displaystyle{ \mathfrak{g} }[/math], not only [math]\displaystyle{ \mathfrak{h} }[/math].
Step 5: Finish the proof by finding a common eigenvector: choose [math]\displaystyle{ Z \in \mathfrak{g} \setminus \mathfrak{h} }[/math]; since the base field is algebraically closed, [math]\displaystyle{ \pi(Z) }[/math] has an eigenvector in [math]\displaystyle{ V_{\lambda} }[/math], and such a vector is a common eigenvector for all of [math]\displaystyle{ \mathfrak{g} }[/math].
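The heart of Steps 3 and 4 is a trace computation. A sketch, with [math]\displaystyle{ W }[/math] denoting the span of [math]\displaystyle{ v, X \cdot v, X^2 \cdot v, \ldots }[/math] for a fixed weight vector [math]\displaystyle{ v \in V_{\lambda} }[/math] and [math]\displaystyle{ X \in \mathfrak{g} }[/math]:

```latex
\lambda([X, H]) \dim W
  \;=\; \operatorname{tr}_W \pi([X, H])
  \;=\; \operatorname{tr}_W \bigl[ \pi(X)\big|_W , \, \pi(H)\big|_W \bigr]
  \;=\; 0
  \qquad (H \in \mathfrak{h}).
```

Here the first equality holds because every element of [math]\displaystyle{ \mathfrak{h} }[/math] acts on W upper-triangularly with diagonal entries given by [math]\displaystyle{ \lambda }[/math], and the last because the trace of a commutator of operators on W vanishes. Hence [math]\displaystyle{ \lambda([X, H]) = 0 }[/math] since the characteristic is zero; this is the only place the characteristic enters, and the argument still works in characteristic p when [math]\displaystyle{ \dim W \le \dim V \lt p }[/math], which explains the dimension restriction mentioned above. The vanishing of [math]\displaystyle{ \lambda([X, H]) }[/math] is exactly what makes [math]\displaystyle{ V_{\lambda} }[/math] stable under [math]\displaystyle{ \pi(X) }[/math].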
The theorem applies in particular to the adjoint representation [math]\displaystyle{ \operatorname{ad}: \mathfrak{g} \to \mathfrak{gl}(\mathfrak{g}) }[/math] of a (finite-dimensional) solvable Lie algebra [math]\displaystyle{ \mathfrak{g} }[/math]; thus, one can choose a basis of [math]\displaystyle{ \mathfrak{g} }[/math] with respect to which [math]\displaystyle{ \operatorname{ad}(\mathfrak{g}) }[/math] consists of upper-triangular matrices. It follows easily that for each [math]\displaystyle{ x, y \in \mathfrak{g} }[/math], [math]\displaystyle{ \operatorname{ad}([x, y]) = [\operatorname{ad}(x), \operatorname{ad}(y)] }[/math] has diagonal consisting of zeros; i.e., [math]\displaystyle{ \operatorname{ad}([x, y]) }[/math] is a nilpotent matrix. By Engel's theorem, this implies that [math]\displaystyle{ [\mathfrak g, \mathfrak g] }[/math] is a nilpotent Lie algebra; the converse is obviously true as well. Moreover, whether a linear transformation is nilpotent or not can be determined after extending the base field to its algebraic closure. Hence, one concludes the statement:^{[2]} a finite-dimensional Lie algebra [math]\displaystyle{ \mathfrak{g} }[/math] over a field of characteristic zero is solvable if and only if the derived algebra [math]\displaystyle{ [\mathfrak g, \mathfrak g] }[/math] is nilpotent.
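As a small check of this consequence, a sketch using the two-dimensional non-abelian solvable Lie algebra with bracket [e1, e2] = e2 (written in the basis (e1, e2), where the adjoint matrices happen to come out lower rather than upper triangular):

```python
import numpy as np

# The 2-dimensional non-abelian solvable Lie algebra: [e1, e2] = e2.
# Adjoint representation in the basis (e1, e2):
ad_e1 = np.array([[0, 0], [0, 1]])   # ad(e1): e1 -> 0,   e2 -> e2
ad_e2 = np.array([[0, 0], [-1, 0]])  # ad(e2): e1 -> -e2, e2 -> 0

# ad is a Lie algebra homomorphism: ad([e1, e2]) = [ad(e1), ad(e2)] = ad(e2).
assert np.array_equal(ad_e1 @ ad_e2 - ad_e2 @ ad_e1, ad_e2)

# ad([e1, e2]) = ad(e2) is strictly triangular, hence nilpotent; the derived
# algebra [g, g] = span{e2} is indeed nilpotent (here even abelian).
assert np.array_equal(ad_e2 @ ad_e2, np.zeros((2, 2), dtype=int))
```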
Lie's theorem also establishes one direction in Cartan's criterion for solvability: if V is a finite-dimensional vector space over a field of characteristic zero and [math]\displaystyle{ \mathfrak{g} \subset \mathfrak{gl}(V) }[/math] a Lie subalgebra, then [math]\displaystyle{ \mathfrak{g} }[/math] is solvable if and only if [math]\displaystyle{ \operatorname{tr}(XY) = 0 }[/math] for every [math]\displaystyle{ X \in \mathfrak{g} }[/math] and [math]\displaystyle{ Y \in [\mathfrak{g}, \mathfrak{g}] }[/math].^{[3]}
Indeed, as above, after extending the base field, the implication [math]\displaystyle{ \Rightarrow }[/math] is seen easily. (The converse is more difficult to prove.)
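The forward direction can be seen concretely once the algebra is put in triangular form; a numerical illustration (not part of the proof), using the solvable Lie algebra of upper-triangular 3×3 matrices:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_upper(n=3):
    """A random element of the solvable Lie algebra of upper-triangular n x n matrices."""
    return np.triu(rng.standard_normal((n, n)))

# A commutator of upper-triangular matrices is strictly upper triangular,
# and the product of an upper-triangular matrix with a strictly upper-
# triangular one is again strictly upper triangular, hence traceless.
for _ in range(100):
    X = random_upper()
    A, B = random_upper(), random_upper()
    Y = A @ B - B @ A                   # an element of [g, g]
    assert np.allclose(np.diag(Y), 0)   # Y is strictly upper triangular
    assert abs(np.trace(X @ Y)) < 1e-10
```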
Lie's theorem (for various V) is equivalent to the statement:^{[4]} every finite-dimensional simple module over a finite-dimensional solvable Lie algebra [math]\displaystyle{ \mathfrak{g} }[/math] over an algebraically closed field of characteristic zero is one-dimensional.
Indeed, Lie's theorem clearly implies this statement. Conversely, assume the statement is true. Given a finite-dimensional [math]\displaystyle{ \mathfrak g }[/math]-module V, let [math]\displaystyle{ V_1 }[/math] be a maximal [math]\displaystyle{ \mathfrak g }[/math]-submodule (which exists by finiteness of the dimension). Then, by maximality, [math]\displaystyle{ V/V_1 }[/math] is simple and thus, by the statement, one-dimensional. Induction on dimension now finishes the proof.
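Iterating this argument produces the flag directly; a sketch:

```latex
V = W_0 \supsetneq W_1 \supsetneq \cdots \supsetneq W_n = 0,
\qquad W_{i+1} \subset W_i \ \text{a maximal $\mathfrak g$-submodule},
```

where each quotient [math]\displaystyle{ W_i / W_{i+1} }[/math] is simple, hence one-dimensional by the statement; therefore [math]\displaystyle{ \operatorname{codim} W_i = i }[/math] and the chain is a flag stabilized by [math]\displaystyle{ \pi(\mathfrak{g}) }[/math], as in Lie's theorem.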
The statement says in particular that a finite-dimensional simple module over an abelian Lie algebra is one-dimensional; this fact remains true without the assumption that the base field has characteristic zero.^{[5]}
Here is another quite useful application:^{[6]} let [math]\displaystyle{ \mathfrak{g} }[/math] be a finite-dimensional Lie algebra over an algebraically closed field of characteristic zero with radical [math]\displaystyle{ \operatorname{rad}(\mathfrak{g}) }[/math]. Then every finite-dimensional simple representation [math]\displaystyle{ \pi: \mathfrak{g} \to \mathfrak{gl}(V) }[/math] is the tensor product of a simple representation of [math]\displaystyle{ \mathfrak{g}/\operatorname{rad}(\mathfrak{g}) }[/math] with a one-dimensional representation of [math]\displaystyle{ \mathfrak{g} }[/math] (i.e., a linear functional vanishing on [math]\displaystyle{ [\mathfrak g, \mathfrak g] }[/math]).
By Lie's theorem, we can find a linear functional [math]\displaystyle{ \lambda }[/math] of [math]\displaystyle{ \operatorname{rad}(\mathfrak{g}) }[/math] such that the weight space [math]\displaystyle{ V_{\lambda} }[/math] of [math]\displaystyle{ \operatorname{rad}(\mathfrak{g}) }[/math] is nonzero. By Step 4 of the proof of Lie's theorem, [math]\displaystyle{ V_{\lambda} }[/math] is also a [math]\displaystyle{ \mathfrak{g} }[/math]-module; since V is simple, [math]\displaystyle{ V = V_{\lambda} }[/math]. In particular, each [math]\displaystyle{ X \in \operatorname{rad}(\mathfrak{g}) }[/math] acts on V as the scalar [math]\displaystyle{ \lambda(X) }[/math], so [math]\displaystyle{ \operatorname{tr}(\pi(X)) = \dim(V) \lambda(X) }[/math]. Extend [math]\displaystyle{ \lambda }[/math] to a linear functional on [math]\displaystyle{ \mathfrak{g} }[/math] that vanishes on [math]\displaystyle{ [\mathfrak g, \mathfrak g] }[/math]; [math]\displaystyle{ \lambda }[/math] is then a one-dimensional representation of [math]\displaystyle{ \mathfrak{g} }[/math]. Now, [math]\displaystyle{ (\pi, V) \simeq (\pi, V) \otimes (-\lambda) \otimes \lambda }[/math]. Since [math]\displaystyle{ \pi }[/math] coincides with [math]\displaystyle{ \lambda }[/math] on [math]\displaystyle{ \operatorname{rad}(\mathfrak{g}) }[/math], the representation [math]\displaystyle{ V \otimes (-\lambda) }[/math] is trivial on [math]\displaystyle{ \operatorname{rad}(\mathfrak{g}) }[/math] and is thus the restriction of a (simple) representation of [math]\displaystyle{ \mathfrak{g}/\operatorname{rad}(\mathfrak{g}) }[/math]. [math]\displaystyle{ \square }[/math]
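The extension of [math]\displaystyle{ \lambda }[/math] used above exists because [math]\displaystyle{ \lambda }[/math] already vanishes on [math]\displaystyle{ \operatorname{rad}(\mathfrak{g}) \cap [\mathfrak g, \mathfrak g] }[/math]; a sketch of the verification:

```latex
X \in \operatorname{rad}(\mathfrak{g}) \cap [\mathfrak{g}, \mathfrak{g}]
  \;\Longrightarrow\;
  \dim(V)\,\lambda(X) = \operatorname{tr}(\pi(X)) = 0
  \;\Longrightarrow\;
  \lambda(X) = 0,
```

since [math]\displaystyle{ \pi(X) }[/math] is a sum of commutators [math]\displaystyle{ [\pi(Y), \pi(Z)] }[/math] and is therefore traceless, and [math]\displaystyle{ \dim(V) }[/math] is invertible in characteristic zero. Hence [math]\displaystyle{ \lambda }[/math] descends to [math]\displaystyle{ \operatorname{rad}(\mathfrak{g}) / (\operatorname{rad}(\mathfrak{g}) \cap [\mathfrak g, \mathfrak g]) }[/math] and can be extended by zero on a complement containing [math]\displaystyle{ [\mathfrak g, \mathfrak g] }[/math].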