Magnus Expansion

In mathematics and physics, the Magnus expansion, named after Wilhelm Magnus (1907–1990), provides an exponential representation of the solution of a first-order homogeneous linear differential equation for a linear operator. In particular, it furnishes the fundamental matrix of a system of linear ordinary differential equations of order n with varying coefficients. The exponent is aggregated as an infinite series, whose terms involve multiple integrals and nested commutators.


1. The Deterministic Case

1.1. Magnus Approach and Its Interpretation

Given the n × n coefficient matrix A(t), one wishes to solve the initial-value problem associated with the linear ordinary differential equation

$$ Y'(t) = A(t)\, Y(t), \qquad Y(t_0) = Y_0, $$

for the unknown n-dimensional vector function Y(t).

When n = 1, the solution simply reads

$$ Y(t) = \exp\left( \int_{t_0}^{t} A(s)\, ds \right) Y_0. $$

This is still valid for n > 1 if the matrix A(t) satisfies A(t1) A(t2) = A(t2) A(t1) for any pair of values t1 and t2. In particular, this is the case if the matrix A is independent of t. In the general case, however, the expression above is no longer the solution of the problem.
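This failure is easy to see numerically. The sketch below (assuming NumPy/SciPy; the matrices, time horizon, and step counts are arbitrary illustrative choices) compares $\exp\big(\int_0^t A(s)\,ds\big)$ against a high-accuracy fundamental matrix built from a fine product of exponentials: the two agree for a commuting family $A(t) = f(t)\,M$, and disagree visibly once $A(t_1)$ and $A(t_2)$ cease to commute.

```python
import numpy as np
from scipy.linalg import expm

def reference_solution(A, t, steps=5000):
    """High-accuracy fundamental matrix: product of exponentials over fine steps."""
    h = t / steps
    Y = np.eye(2)
    for k in range(steps):
        Y = expm(h * A((k + 0.5) * h)) @ Y   # midpoint rule on each sub-interval
    return Y

def exp_of_integral(A, t, steps=5000):
    """exp(int_0^t A(s) ds) via midpoint quadrature -- valid only if A(t) commutes."""
    h = t / steps
    return expm(sum(h * A((k + 0.5) * h) for k in range(steps)))

# Commuting family A(t) = cos(t) * M: the exponential formula is exact.
M = np.array([[0.0, 1.0], [-1.0, 0.0]])
A_comm = lambda s: np.cos(s) * M
print(np.allclose(reference_solution(A_comm, 2.0),
                  exp_of_integral(A_comm, 2.0), atol=1e-6))   # True

# Non-commuting family: [A(s1), A(s2)] != 0, and the formula breaks down.
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
A_nc = lambda s: X + s * Z
err = np.linalg.norm(reference_solution(A_nc, 2.0) - exp_of_integral(A_nc, 2.0))
print(err > 1e-2)   # True: a discrepancy that no grid refinement removes
```

The discrepancy in the second case is exactly what the higher-order Magnus terms below are designed to correct.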

The approach introduced by Magnus to solve the matrix initial-value problem is to express the solution by means of the exponential of a certain n × n matrix function Ω(t, t0):

$$ Y(t) = \exp\big( \Omega(t, t_0) \big)\, Y_0, $$

which is subsequently constructed as a series expansion:

$$ \Omega(t) = \sum_{k=1}^{\infty} \Omega_k(t), $$

where, for simplicity, it is customary to write Ω(t) for Ω(t, t0) and to take t0 = 0.

Magnus appreciated that, since $\frac{d}{dt}\big(e^{\Omega}\big)\, e^{-\Omega} = A(t)$, using a Poincaré−Hausdorff matrix identity, he could relate the time derivative of Ω to the generating function of the Bernoulli numbers and the adjoint endomorphism of Ω,

$$ \frac{d\Omega}{dt} = \frac{\operatorname{ad}_{\Omega}}{e^{\operatorname{ad}_{\Omega}} - 1}\, A = \sum_{k=0}^{\infty} \frac{B_k}{k!} \operatorname{ad}_{\Omega}^{k} A, $$

to solve for Ω recursively in terms of A "in a continuous analog of the CBH expansion", as outlined in a subsequent section.

The equation above constitutes the Magnus expansion, or Magnus series, for the solution of the matrix linear initial-value problem. The first four terms of this series read

$$\begin{aligned}
\Omega_1(t) &= \int_0^t A(t_1)\, dt_1,\\
\Omega_2(t) &= \frac{1}{2} \int_0^t dt_1 \int_0^{t_1} dt_2\, \big[ A(t_1), A(t_2) \big],\\
\Omega_3(t) &= \frac{1}{6} \int_0^t dt_1 \int_0^{t_1} dt_2 \int_0^{t_2} dt_3\, \Big( \big[ A(t_1), [A(t_2), A(t_3)] \big] + \big[ A(t_3), [A(t_2), A(t_1)] \big] \Big),\\
\Omega_4(t) &= \frac{1}{12} \int_0^t dt_1 \int_0^{t_1} dt_2 \int_0^{t_2} dt_3 \int_0^{t_3} dt_4\, \Big( \big[ [[A_1, A_2], A_3], A_4 \big] + \big[ A_1, [[A_2, A_3], A_4] \big] + \big[ A_1, [A_2, [A_3, A_4]] \big] + \big[ A_2, [A_3, [A_4, A_1]] \big] \Big),
\end{aligned}$$

with the shorthand $A_i \equiv A(t_i)$ in $\Omega_4$,

where [A, B] ≡ A BB A is the matrix commutator of A and B.

These equations may be interpreted as follows: Ω1(t) coincides exactly with the exponent in the scalar (n = 1) case, but this term alone cannot give the whole solution. If one insists on having an exponential representation (i.e., one valued in a Lie group), the exponent needs to be corrected. The rest of the Magnus series provides that correction systematically: Ω, at every order of truncation, lies in the Lie algebra of the Lie group in which the solution evolves.

In applications, one can rarely sum exactly the Magnus series, and one has to truncate it to get approximate solutions. The main advantage of the Magnus proposal is that the truncated series very often shares important qualitative properties with the exact solution, at variance with other conventional perturbation theories. For instance, in classical mechanics the symplectic character of the time evolution is preserved at every order of approximation. Similarly, the unitary character of the time evolution operator in quantum mechanics is also preserved (in contrast, e.g., to the Dyson series solving the same problem).
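The unitarity claim can be checked directly. In the sketch below (NumPy/SciPy assumed; the Hamiltonian $H(t)$ is an arbitrary illustrative choice), $A(t) = -\mathrm{i}\,H(t)$ with Hermitian $H(t)$, so $\Omega_1$ and $\Omega_2$ are anti-Hermitian and $\exp(\Omega_1 + \Omega_2)$ is unitary to machine precision, while the Dyson series truncated at the same order is not.

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
H = lambda s: sx + np.sin(s) * sz        # illustrative time-dependent Hamiltonian
A = lambda s: -1j * H(s)                 # anti-Hermitian coefficient matrix

t, n = 1.0, 200
h = t / n
grid = (np.arange(n) + 0.5) * h
As = [A(s) for s in grid]

# Omega1 = int_0^t A ds;  Omega2 = (1/2) int_0^t ds1 int_0^{s1} ds2 [A(s1), A(s2)]
Omega1 = h * sum(As)
Omega2 = np.zeros((2, 2), dtype=complex)
D2 = np.zeros((2, 2), dtype=complex)     # Dyson term: int int A(s1) A(s2), s2 < s1
for i in range(n):
    for k in range(i):
        P = As[i] @ As[k]
        Omega2 += 0.5 * h * h * (P - As[k] @ As[i])
        D2 += h * h * P

U_magnus = expm(Omega1 + Omega2)         # second-order Magnus propagator
U_dyson = np.eye(2) + Omega1 + D2        # Dyson series truncated at the same order

unitarity_defect = lambda U: np.linalg.norm(U.conj().T @ U - np.eye(2))
print(unitarity_defect(U_magnus) < 1e-8)   # True: unitary by construction
print(unitarity_defect(U_dyson) > 1e-3)    # True: unitarity is lost
```

The Magnus truncation is unitary for any step count, because the quadrature preserves the anti-Hermitian character of the exponent; the Dyson defect, by contrast, is a property of the method, not of the discretization.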

1.2. Convergence of the Expansion

From a mathematical point of view, the convergence problem is the following: given a certain matrix A(t), when can the exponent Ω(t) be obtained as the sum of the Magnus series?

A sufficient condition for this series to converge for t ∈ [0,T) is

$$ \int_0^T \| A(s) \|_2 \, ds < \pi, $$

where $\|\cdot\|_2$ denotes the spectral (operator 2-) norm. This result is generic in the sense that one may construct specific matrices A(t) for which the series diverges for any t > T.
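The bound is straightforward to evaluate in practice. A minimal sketch (NumPy assumed; the coefficient matrix is an arbitrary illustration): for $A(s) = (1+s)\,M$ with $\|M\|_2 = 1$, the integral is $T + T^2/2$, so the sufficient condition guarantees convergence up to $T = \sqrt{1 + 2\pi} - 1 \approx 1.70$.

```python
import numpy as np

M = np.array([[0.0, 1.0], [-1.0, 0.0]])      # ||M||_2 = 1
A = lambda s: (1.0 + s) * M                  # hence ||A(s)||_2 = 1 + s

def norm_integral(T, steps=10000):
    """Midpoint quadrature of int_0^T ||A(s)||_2 ds."""
    h = T / steps
    return sum(h * np.linalg.norm(A((k + 0.5) * h), 2) for k in range(steps))

# Analytically int_0^T (1 + s) ds = T + T^2/2, and T + T^2/2 < pi
# holds up to T = sqrt(1 + 2*pi) - 1, roughly 1.70.
print(norm_integral(1.0))             # ~1.5 (midpoint rule is exact for linear integrands)
print(norm_integral(1.0) < np.pi)     # True: convergence guaranteed on [0, 1)
```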

1.3. Magnus Generator

A recursive procedure to generate all the terms in the Magnus expansion utilizes the matrices $S_n^{(j)}$, defined recursively through

$$\begin{aligned}
S_n^{(j)} &= \sum_{m=1}^{n-j} \left[ \Omega_m, S_{n-m}^{(j-1)} \right], \qquad 2 \le j \le n-1,\\
S_n^{(1)} &= \left[ \Omega_{n-1}, A \right], \qquad S_n^{(n-1)} = \operatorname{ad}_{\Omega_1}^{\,n-1}(A),
\end{aligned}$$

which then furnish

$$\begin{aligned}
\Omega_1 &= \int_0^t A(\tau)\, d\tau,\\
\Omega_n &= \sum_{j=1}^{n-1} \frac{B_j}{j!} \int_0^t S_n^{(j)}(\tau)\, d\tau, \qquad n \ge 2.
\end{aligned}$$

Here $\operatorname{ad}_{\Omega}^{k}$ is a shorthand for an iterated commutator (see adjoint endomorphism):

$$ \operatorname{ad}_{\Omega}^{0} A = A, \qquad \operatorname{ad}_{\Omega}^{k+1} A = \left[ \Omega, \operatorname{ad}_{\Omega}^{k} A \right], $$

while Bj are the Bernoulli numbers with B1 = −1/2.
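These Bernoulli numbers are easy to generate; a minimal sketch (standard library only) using the recurrence $\sum_{k=0}^{m} \binom{m+1}{k} B_k = 0$ with $B_0 = 1$, which produces exactly the convention $B_1 = -1/2$ used here:

```python
from fractions import Fraction
from math import comb

def bernoulli_numbers(n):
    """First n+1 Bernoulli numbers B_0..B_n (convention B_1 = -1/2)."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        # C(m+1, m) = m + 1, so solving the recurrence for B_m gives:
        B.append(-sum(Fraction(comb(m + 1, k)) * B[k] for k in range(m)) / (m + 1))
    return B

print([str(b) for b in bernoulli_numbers(6)])
# ['1', '-1/2', '1/6', '0', '-1/30', '0', '1/42']
```

Since the odd Bernoulli numbers beyond $B_1$ vanish, roughly half of the terms in the sum for $\Omega_n$ drop out.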

Finally, when this recursion is worked out explicitly, it is possible to express Ωn(t) as a linear combination of n-fold integrals of n − 1 nested commutators involving n matrices A:

$$ \Omega_n(t) = \sum_{j=1}^{n-1} \frac{B_j}{j!} \sum_{\substack{k_1 + \cdots + k_j = n-1 \\ k_1 \ge 1, \ldots, k_j \ge 1}} \int_0^t \operatorname{ad}_{\Omega_{k_1}(\tau)} \operatorname{ad}_{\Omega_{k_2}(\tau)} \cdots \operatorname{ad}_{\Omega_{k_j}(\tau)} A(\tau)\, d\tau, \qquad n \ge 2, $$

which becomes increasingly intricate with n.

2. The Stochastic Case

2.1. Extension To Stochastic Ordinary Differential Equations

For the extension to the stochastic case, let $(W_t)_{t \in [0,T]}$ be an $\mathbb{R}^q$-valued Brownian motion, $q \in \mathbb{N}_{>0}$, on the probability space $(\Omega, \mathcal{F}, \mathbb{P})$ with finite time horizon $T > 0$ and natural filtration. Now, consider the linear matrix-valued stochastic Itô differential equation (with Einstein's summation convention over the index $j$)

$$ dX_t = B_t X_t\, dt + A_t^{(j)} X_t\, dW_t^j, \qquad X_0 = I_d, \quad d \in \mathbb{N}_{>0}, $$

where $B, A^{(1)}, \ldots, A^{(q)}$ are progressively measurable, bounded, $d \times d$-matrix-valued stochastic processes and $I_d$ is the identity matrix. Following the same approach as in the deterministic case, with alterations due to the stochastic setting,[1] the corresponding matrix logarithm turns out to be an Itô process, whose first two expansion orders are given by $Y_t^{(1)} = Y_t^{(1,0)} + Y_t^{(0,1)}$ and $Y_t^{(2)} = Y_t^{(2,0)} + Y_t^{(1,1)} + Y_t^{(0,2)}$, where, with Einstein's summation convention over $i$ and $j$,

$$\begin{aligned}
Y_t^{(0,0)} &= 0,\\
Y_t^{(1,0)} &= \int_0^t B_s\, ds,\\
Y_t^{(0,1)} &= \int_0^t A_s^{(j)}\, dW_s^j,\\
Y_t^{(2,0)} &= \frac{1}{2} \int_0^t \left[ B_s, \int_0^s B_r\, dr \right] ds,\\
Y_t^{(1,1)} &= \frac{1}{2} \int_0^t \left[ B_s, \int_0^s A_r^{(j)}\, dW_r^j \right] ds + \frac{1}{2} \int_0^t \left[ A_s^{(j)}, \int_0^s B_r\, dr \right] dW_s^j,\\
Y_t^{(0,2)} &= -\frac{1}{2} \int_0^t \big( A_s^{(j)} \big)^2\, ds + \frac{1}{2} \int_0^t \left[ A_s^{(j)}, \int_0^s A_r^{(i)}\, dW_r^i \right] dW_s^j.
\end{aligned}$$
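A minimal numerical check (NumPy assumed; the coefficients and seed are arbitrary choices): in the scalar case $d = q = 1$ all commutators vanish, so only the drift term, the stochastic integral, and the Itô correction $-\frac{1}{2}\int_0^t a^2\, ds$ survive, and the truncated expansion reproduces the exact logarithm of geometric Brownian motion, $(b - a^2/2)\,t + a W_t$. A Milstein discretization of the SDE serves as an independent pathwise check.

```python
import numpy as np

rng = np.random.default_rng(0)
b, a, T, n = 0.05, 0.3, 1.0, 100_000     # dX = b X dt + a X dW on [0, T]
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), n)     # Brownian increments of one sample path

# Scalar stochastic Magnus terms (all commutator terms vanish):
# Y^(1,0) = b*T,  Y^(0,1) = a*W_T,  Ito correction from Y^(0,2) = -a^2*T/2
Y = b * T + a * dW.sum() - 0.5 * a**2 * T
X_magnus = np.exp(Y)

# Milstein scheme for the same path as an independent numerical check
X = 1.0
for k in range(n):
    X *= 1.0 + b * dt + a * dW[k] + 0.5 * a**2 * (dW[k]**2 - dt)

print(abs(X_magnus - X) < 1e-3)   # True: the truncated expansion is exact here
```

In the genuinely matrix-valued, non-commuting case the higher-order terms above no longer vanish, and the expansion is only valid up to the stopping time discussed next.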

2.2. Convergence of the Expansion

In the stochastic setting, convergence is now subject to a stopping time $\tau$; a first convergence result is given by:[2]

Under the previous assumptions on the coefficients, there exists a strong solution $X = (X_t)_{t \in [0,T]}$, as well as a strictly positive stopping time $\tau \le T$, such that:

  1. $X_t$ has a real logarithm $Y_t$ up to time $\tau$, i.e.
     $$ X_t = e^{Y_t}, \qquad 0 \le t < \tau; $$
  2. the following representation holds $\mathbb{P}$-almost surely:
     $$ Y_t = \sum_{n=0}^{\infty} Y_t^{(n)}, \qquad 0 \le t < \tau, $$
     where $Y^{(n)}$ is the $n$-th term in the stochastic Magnus expansion as defined below in the subsection Magnus expansion formula;
  3. there exists a positive constant $C$, depending only on $\|A^{(1)}\|_T, \ldots, \|A^{(q)}\|_T, \|B\|_T, T, d$, with $\|A\|_T := \big\| \|A_t\|_F \big\|_{L^\infty(\Omega \times [0,T])}$, such that
     $$ \mathbb{P}(\tau \le t) \le C t, \qquad t \in [0,T]. $$

2.3. Magnus Expansion Formula

The general expansion formula for the stochastic Magnus expansion is given by:

$$ Y_t = \sum_{n=0}^{\infty} Y_t^{(n)} \qquad \text{with} \qquad Y_t^{(n)} := \sum_{r=0}^{n} Y_t^{(r,n-r)}, $$

where the general term Y(r,nr) is an Itô-process of the form:

$$ Y_t^{(r,n-r)} = \int_0^t \mu_s^{r,n-r}\, ds + \int_0^t \sigma_s^{r,n-r,j}\, dW_s^j, \qquad n \in \mathbb{N}_0, \quad r = 0, \ldots, n. $$

The coefficients $\sigma^{r,n-r,j}$ and $\mu^{r,n-r}$ are defined recursively in terms of the lower-order expansion terms: in analogy with the deterministic Magnus generator, each order is built from Bernoulli-weighted iterated commutators of the previous terms $Y^{(\cdot,\cdot)}$ with the coefficient processes $A^{(j)}$ and $B$, via auxiliary operators $S$ playing the role of the matrices $S_n^{(j)}$ above, together with additional correction terms arising from the second-order (quadratic covariation) part of Itô's formula. The explicit recursion is given in Kamm, Pagliarani & Pascucci 2020.

3. Applications

Since the 1960s, the Magnus expansion has been successfully applied as a perturbative tool in numerous areas of physics and chemistry, from atomic and molecular physics to nuclear magnetic resonance[3] and quantum electrodynamics. It has also been used since 1998 as a tool to construct practical algorithms for the numerical integration of matrix linear differential equations. As they inherit from the Magnus expansion the preservation of qualitative traits of the problem, the corresponding schemes are prototypical examples of geometric numerical integrators.

References

  1. Kamm, Pagliarani & Pascucci 2020
  2. Kamm, Pagliarani & Pascucci 2020, Theorem 1.1
  3. Haeberlen, U.; Waugh, J. S. (1968). "Coherent Averaging Effects in Magnetic Resonance". Phys. Rev. 175 (2): 453–467. doi:10.1103/PhysRev.175.453. Bibcode: 1968PhRv..175..453H.