Magnus Expansion

In mathematics and physics, the Magnus expansion, named after Wilhelm Magnus (1907–1990), provides an exponential representation of the solution of a first-order homogeneous linear differential equation for a linear operator. In particular, it furnishes the fundamental matrix of a system of linear ordinary differential equations of order n with varying coefficients. The exponent is aggregated as an infinite series, whose terms involve multiple integrals and nested commutators.


1. The Deterministic Case

1.1. Magnus Approach and Its Interpretation

Given the n × n coefficient matrix A(t), one wishes to solve the initial-value problem associated with the linear ordinary differential equation

$$
Y'(t) = A(t)\,Y(t), \qquad Y(t_0) = Y_0,
$$

for the unknown n-dimensional vector function Y(t).

When n = 1, the solution simply reads

$$
Y(t) = \exp\!\left( \int_{t_0}^{t} A(s)\, ds \right) Y_0 .
$$

This is still valid for n > 1 if the matrix A(t) satisfies $A(t_1)\,A(t_2) = A(t_2)\,A(t_1)$ for any pair of values $t_1$ and $t_2$. In particular, this is the case if the matrix A is independent of t. In the general case, however, the expression above is no longer the solution of the problem.
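To make the failure concrete, the following sketch compares the naive exponential with a high-accuracy numerical solution for a hypothetical non-commuting choice $A(t) = X + tZ$ (the matrices and names here are illustrative assumptions, not from the original text); the two answers differ by an O(1) amount.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

# Hypothetical non-commuting coefficient matrix: A(t) = X + t*Z with [X, Z] != 0.
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
A = lambda t: X + t * Z

T = 1.0

# Reference: integrate Y' = A(t) Y, Y(0) = I, to high accuracy.
rhs = lambda t, y: (A(t) @ y.reshape(2, 2)).ravel()
Y_ref = solve_ivp(rhs, (0.0, T), np.eye(2).ravel(),
                  rtol=1e-12, atol=1e-12).y[:, -1].reshape(2, 2)

# Naive guess exp(int_0^T A(s) ds); the integral is X*T + Z*T^2/2 in closed form.
Y_naive = expm(X * T + Z * T**2 / 2)

print(np.linalg.norm(Y_ref - Y_naive))  # clearly nonzero: A(t1), A(t2) do not commute
```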

The approach introduced by Magnus to solve the matrix initial-value problem is to express the solution by means of the exponential of a certain n × n matrix function Ω(t, t0):

$$
Y(t) = \exp\bigl( \Omega(t, t_0) \bigr)\, Y_0 ,
$$

which is subsequently constructed as a series expansion:

$$
\Omega(t) = \sum_{k=1}^{\infty} \Omega_k(t) ,
$$

where, for simplicity, it is customary to write Ω(t) for Ω(t, t0) and to take t0 = 0.

Magnus appreciated that, since $\bigl(\tfrac{d}{dt} e^{\Omega}\bigr) e^{-\Omega} = A(t)$, using a Poincaré–Hausdorff matrix identity, he could relate the time derivative of Ω to the generating function of the Bernoulli numbers and the adjoint endomorphism of Ω,

$$
\frac{d\Omega}{dt} = \frac{\operatorname{ad}_{\Omega}}{e^{\operatorname{ad}_{\Omega}} - 1}\, A = \sum_{k=0}^{\infty} \frac{B_k}{k!}\, \operatorname{ad}_{\Omega}^{k} A ,
$$

to solve for Ω recursively in terms of A "in a continuous analog of the CBH expansion", as outlined in a subsequent section.
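A direct way to see this equation at work is to truncate the Bernoulli series on its right-hand side and integrate the resulting ODE for Ω numerically. The sketch below (reusing the hypothetical $A(t) = X + tZ$ from the previous sketch) does exactly that, and checks that $e^{\Omega(T)}$ reproduces the solution of $Y' = A(t)Y$.

```python
import numpy as np
from math import factorial
from scipy.linalg import expm
from scipy.integrate import solve_ivp

# Hypothetical non-commuting A(t) = X + t*Z, as in the previous sketch.
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
A = lambda t: X + t * Z

# Bernoulli numbers B_0..B_6, with the convention B_1 = -1/2.
B = [1.0, -0.5, 1.0 / 6.0, 0.0, -1.0 / 30.0, 0.0, 1.0 / 42.0]

def ad_pow(omega, a, k):
    """Iterated commutator ad_Omega^k(a) = [Omega, [Omega, ... [Omega, a]]]."""
    for _ in range(k):
        a = omega @ a - a @ omega
    return a

def domega_dt(t, w):
    """Truncation of dOmega/dt = sum_k (B_k / k!) ad_Omega^k A(t)."""
    omega, a = w.reshape(2, 2), A(t)
    rhs = sum(B[k] / factorial(k) * ad_pow(omega, a, k) for k in range(len(B)))
    return rhs.ravel()

T = 1.0
omega_T = solve_ivp(domega_dt, (0.0, T), np.zeros(4),
                    rtol=1e-12, atol=1e-12).y[:, -1].reshape(2, 2)

# Reference solution of Y' = A(t) Y, Y(0) = I.
Y_ref = solve_ivp(lambda t, y: (A(t) @ y.reshape(2, 2)).ravel(),
                  (0.0, T), np.eye(2).ravel(),
                  rtol=1e-12, atol=1e-12).y[:, -1].reshape(2, 2)

print(np.linalg.norm(expm(omega_T) - Y_ref))  # small truncation/integration error
```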

The equation above constitutes the Magnus expansion, or Magnus series, for the solution of the matrix linear initial-value problem. The first four terms of this series read

$$
\begin{align}
\Omega_1(t) &= \int_0^t A(t_1)\, dt_1 , \\
\Omega_2(t) &= \frac{1}{2} \int_0^t dt_1 \int_0^{t_1} dt_2\, [A(t_1), A(t_2)] , \\
\Omega_3(t) &= \frac{1}{6} \int_0^t dt_1 \int_0^{t_1} dt_2 \int_0^{t_2} dt_3\, \bigl( [A(t_1), [A(t_2), A(t_3)]] + [A(t_3), [A(t_2), A(t_1)]] \bigr) , \\
\Omega_4(t) &= \frac{1}{12} \int_0^t dt_1 \int_0^{t_1} dt_2 \int_0^{t_2} dt_3 \int_0^{t_3} dt_4\, \bigl( [[[A(t_1), A(t_2)], A(t_3)], A(t_4)] + [A(t_1), [[A(t_2), A(t_3)], A(t_4)]] \\
&\qquad\qquad + [A(t_1), [A(t_2), [A(t_3), A(t_4)]]] + [A(t_2), [A(t_3), [A(t_4), A(t_1)]]] \bigr) ,
\end{align}
$$

where $[A, B] \equiv AB - BA$ is the matrix commutator of A and B.

These equations may be interpreted as follows: Ω1(t) coincides exactly with the exponent in the scalar (n = 1) case, but this equation cannot give the whole solution. If one insists on having an exponential representation (Lie group), the exponent needs to be corrected. The rest of the Magnus series provides that correction systematically: Ω or parts of it lie in the Lie algebra of the Lie group of the solution.

In applications, one can rarely sum the Magnus series exactly, and one has to truncate it to get approximate solutions. The main advantage of the Magnus proposal is that the truncated series very often shares important qualitative properties with the exact solution, in contrast with other conventional perturbation theories. For instance, in classical mechanics the symplectic character of the time evolution is preserved at every order of approximation. Similarly, the unitary character of the time-evolution operator in quantum mechanics is also preserved (in contrast, e.g., to the Dyson series solving the same problem).
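The preservation of unitarity is easy to verify numerically. In the sketch below (a hypothetical Hermitian Hamiltonian $H(t)$, so $A(t) = -\mathrm{i}H(t)$ is anti-Hermitian), every truncation of Ω remains anti-Hermitian and its exponential is exactly unitary, while the first-order Dyson approximation is not.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical Hamiltonian H(t) = sx + t*sz, so A(t) = -i H(t) is anti-Hermitian.
sx = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
sz = np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)

T = 1.0
# First Magnus term Omega_1 = int_0^T A(s) ds = -i (sx*T + sz*T^2/2), in closed form.
Omega1 = -1j * (sx * T + sz * T**2 / 2)

U_magnus = expm(Omega1)          # exponential of an anti-Hermitian matrix: unitary
U_dyson = np.eye(2) + Omega1     # first-order Dyson series: I + int_0^T A(s) ds

unitarity_defect = lambda U: np.linalg.norm(U.conj().T @ U - np.eye(2))
print(unitarity_defect(U_magnus))  # ~ machine precision
print(unitarity_defect(U_dyson))   # O(1): unitarity is lost
```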

1.2. Convergence of the Expansion

From a mathematical point of view, the convergence problem is the following: given a certain matrix A(t), when can the exponent Ω(t) be obtained as the sum of the Magnus series?

A sufficient condition for this series to converge for t ∈ [0,T) is

$$
\int_0^T \|A(s)\|_2 \, ds < \pi ,
$$

where $\|\cdot\|_2$ denotes a matrix norm. This result is generic, in the sense that one may construct specific matrices A(t) for which the series diverges for any t > T.
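For a concrete $A(t)$ the sufficient condition is a one-line quadrature, as in this sketch for the hypothetical $A(t) = X + tZ$ used in the earlier sketches.

```python
import numpy as np
from scipy.integrate import quad

# Spectral norm of the hypothetical A(t) = X + t*Z from the earlier sketches.
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
norm_A = lambda t: np.linalg.norm(X + t * Z, 2)

T = 1.0
integral, _ = quad(norm_A, 0.0, T)
print(integral, integral < np.pi)  # sufficient condition for convergence on [0, T)
```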

1.3. Magnus Generator

A recursive procedure to generate all the terms in the Magnus expansion utilizes the matrices $S_n^{(j)}$, defined recursively through

$$
S_n^{(j)} = \sum_{m=1}^{n-j} \left[ \Omega_m, S_{n-m}^{(j-1)} \right], \qquad 2 \le j \le n-1 ,
$$
$$
S_n^{(1)} = \left[ \Omega_{n-1}, A \right], \qquad S_n^{(n-1)} = \operatorname{ad}_{\Omega_1}^{\,n-1}(A) ,
$$

which then furnish

$$
\Omega_1 = \int_0^t A(\tau)\, d\tau ,
$$
$$
\Omega_n = \sum_{j=1}^{n-1} \frac{B_j}{j!} \int_0^t S_n^{(j)}(\tau)\, d\tau , \qquad n \ge 2 .
$$

Here $\operatorname{ad}_{\Omega}^{k}$ is a shorthand for an iterated commutator (see adjoint endomorphism):

$$
\operatorname{ad}_{\Omega}^{0} A = A , \qquad \operatorname{ad}_{\Omega}^{k+1} A = \left[ \Omega, \operatorname{ad}_{\Omega}^{k} A \right] ,
$$

while $B_j$ are the Bernoulli numbers, with $B_1 = -1/2$.

Finally, when this recursion is worked out explicitly, it is possible to express Ωn(t) as a linear combination of n-fold integrals of n − 1 nested commutators involving n matrices A:

$$
\Omega_n(t) = \sum_{j=1}^{n-1} \frac{B_j}{j!} \sum_{\substack{k_1 + \cdots + k_j = n-1 \\ k_1 \ge 1, \ldots, k_j \ge 1}} \int_0^t \operatorname{ad}_{\Omega_{k_1}(\tau)}\, \operatorname{ad}_{\Omega_{k_2}(\tau)} \cdots \operatorname{ad}_{\Omega_{k_j}(\tau)}\, A(\tau) \; d\tau , \qquad n \ge 2 ,
$$

which becomes increasingly intricate with n.
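The recursion above lends itself to a grid-based numerical evaluation: once $\Omega_1, \ldots, \Omega_{n-1}$ are known on a time grid, each $S_n^{(j)}$ is a pointwise expression in them, and $\Omega_n$ follows by one further quadrature. The sketch below (trapezoidal rule, hypothetical $A(t) = X + tZ$ as before) sums the first four terms and compares $e^{\Omega_1 + \cdots + \Omega_4}$ against a high-accuracy reference.

```python
import numpy as np
from math import factorial
from scipy.linalg import expm
from scipy.integrate import solve_ivp

# Hypothetical A(t) = X + t*Z, sampled on a uniform grid.
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
N, T = 2001, 1.0
ts = np.linspace(0.0, T, N)
A = np.array([X + t * Z for t in ts])        # shape (N, 2, 2)

comm = lambda P, Q: P @ Q - Q @ P            # batched matrix commutator

def cumtrapz(F, ts):
    """Cumulative trapezoidal integral of a matrix-valued grid function."""
    out = np.zeros_like(F)
    steps = np.diff(ts)[:, None, None]
    out[1:] = np.cumsum(0.5 * (F[1:] + F[:-1]) * steps, axis=0)
    return out

Bern = {1: -0.5, 2: 1.0 / 6.0, 3: 0.0}       # Bernoulli numbers, B_1 = -1/2

n_max = 4
Omega = {1: cumtrapz(A, ts)}                 # Omega_1(t) = int_0^t A(tau) dtau
S = {}                                       # S[(n, j)], pointwise on the grid
for n in range(2, n_max + 1):
    S[(n, 1)] = comm(Omega[n - 1], A)
    for j in range(2, n):
        S[(n, j)] = sum(comm(Omega[m], S[(n - m, j - 1)])
                        for m in range(1, n - j + 1))
    Omega[n] = sum(Bern[j] / factorial(j) * cumtrapz(S[(n, j)], ts)
                   for j in range(1, n))

# Compare exp(Omega_1 + ... + Omega_4)(T) with a high-accuracy reference.
Om_T = sum(Omega[n][-1] for n in range(1, n_max + 1))
Y_ref = solve_ivp(lambda t, y: ((X + t * Z) @ y.reshape(2, 2)).ravel(),
                  (0.0, T), np.eye(2).ravel(), rtol=1e-12, atol=1e-12
                  ).y[:, -1].reshape(2, 2)
print(np.linalg.norm(expm(Om_T) - Y_ref))    # small: dominated by the Omega_5 term
```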

2. The Stochastic Case

2.1. Extension To Stochastic Ordinary Differential Equations

For the extension to the stochastic case, let $(W_t)_{t \in [0,T]}$ be a $q$-dimensional Brownian motion, $q \in \mathbb{N}_{>0}$, on the probability space $(\Omega, \mathcal{F}, \mathbb{P})$ with finite time horizon $T > 0$ and natural filtration. Now consider the linear matrix-valued stochastic Itô differential equation (with Einstein's summation convention over the index $j$)

$$
dX_t = B_t X_t \, dt + A_t^{(j)} X_t \, dW_t^j , \qquad X_0 = I_d , \quad d \in \mathbb{N}_{>0} ,
$$

where $B, A^{(1)}, \ldots, A^{(q)}$ are progressively measurable, bounded, $d \times d$-matrix-valued stochastic processes and $I_d$ is the identity matrix. Following the same approach as in the deterministic case, with alterations due to the stochastic setting,[1] the corresponding matrix logarithm turns out to be an Itô process whose first two expansion orders are given by $Y_t^{(1)} = Y_t^{(1,0)} + Y_t^{(0,1)}$ and $Y_t^{(2)} = Y_t^{(2,0)} + Y_t^{(1,1)} + Y_t^{(0,2)}$, where, with Einstein's summation convention over $i$ and $j$,

$$
\begin{align}
Y_t^{(0,0)} &= 0 , \\
Y_t^{(1,0)} &= \int_0^t B_s \, ds , \\
Y_t^{(0,1)} &= \int_0^t A_s^{(j)} \, dW_s^j , \\
Y_t^{(2,0)} &= \frac{1}{2} \int_0^t \Bigl[ B_s , \int_0^s B_r \, dr \Bigr] \, ds , \\
Y_t^{(1,1)} &= \frac{1}{2} \int_0^t \Bigl[ B_s , \int_0^s A_r^{(j)} \, dW_r^j \Bigr] \, ds + \frac{1}{2} \int_0^t \Bigl[ A_s^{(j)} , \int_0^s B_r \, dr \Bigr] \, dW_s^j , \\
Y_t^{(0,2)} &= -\frac{1}{2} \int_0^t \bigl( A_s^{(j)} \bigr)^2 \, ds + \frac{1}{2} \int_0^t \Bigl[ A_s^{(i)} , \int_0^s A_r^{(j)} \, dW_r^j \Bigr] \, dW_s^i .
\end{align}
$$
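As an illustrative check (an assumption-laden sketch, not taken from the source): for constant matrices $B$ and $A$ with a single Brownian motion, the terms above collapse to $Y_T \approx BT + AW_T - \tfrac{1}{2}A^2 T + [A,B]\bigl(\tfrac{1}{2}T W_T - \int_0^T W_s\,ds\bigr)$, and the exponential of this second-order logarithm can be compared pathwise with an Euler–Maruyama solution.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

# Hypothetical constant coefficients, single Brownian motion (q = 1).
B = np.array([[0.0, 1.0], [0.0, 0.0]])
A = np.array([[0.2, 0.0], [0.0, -0.1]])
comm = lambda P, Q: P @ Q - Q @ P

T, N = 0.5, 100_000
dt = T / N
dW = rng.normal(0.0, np.sqrt(dt), N)
W = np.concatenate(([0.0], np.cumsum(dW)))   # Brownian path on the grid

# Euler-Maruyama reference for dX = B X dt + A X dW, X_0 = I.
Xt = np.eye(2)
for k in range(N):
    Xt = Xt + (B @ Xt) * dt + (A @ Xt) * dW[k]

# Second-order stochastic Magnus logarithm, specialized to constant B, A:
#   Y ~ B*T + A*W_T - (1/2) A^2 T + [A, B] * (T*W_T/2 - int_0^T W_s ds).
W_T = W[-1]
int_W = np.sum(0.5 * (W[1:] + W[:-1])) * dt  # trapezoidal int_0^T W_s ds
Y2 = B * T + A * W_T - 0.5 * (A @ A) * T + comm(A, B) * (0.5 * T * W_T - int_W)

print(np.linalg.norm(expm(Y2) - Xt))         # small pathwise discrepancy
```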

2.2. Convergence of the Expansion

In the stochastic setting, convergence is now subject to a stopping time $\tau$; a first convergence result is given by:[2]

Under the previous assumptions on the coefficients, there exists a strong solution $X = (X_t)_{t \in [0,T]}$, as well as a strictly positive stopping time $\tau \le T$, such that:

  1. $X_t$ has a real logarithm $Y_t$ up to time $\tau$, i.e.
     $$ X_t = e^{Y_t} , \qquad 0 \le t < \tau ; $$
  2. the following representation holds $\mathbb{P}$-almost surely:
     $$ Y_t = \sum_{n=0}^{\infty} Y_t^{(n)} , \qquad 0 \le t < \tau , $$
     where $Y^{(n)}$ is the $n$-th term in the stochastic Magnus expansion as defined below in the subsection Magnus expansion formula;
  3. there exists a positive constant $C$, dependent only on $\|A^{(1)}\|_T, \ldots, \|A^{(q)}\|_T, \|B\|_T, T, d$, with $\|A\|_T := \bigl\| \|A_t\|_F \bigr\|_{L^\infty(\Omega \times [0,T])}$, such that
     $$ \mathbb{P}(\tau \le t) \le C t , \qquad t \in [0,T] . $$

2.3. Magnus Expansion Formula

The general expansion formula for the stochastic Magnus expansion is given by:

$$
Y_t = \sum_{n=0}^{\infty} Y_t^{(n)} \qquad \text{with} \qquad Y_t^{(n)} := \sum_{r=0}^{n} Y_t^{(r,\,n-r)} ,
$$

where the general term $Y^{(r,\,n-r)}$ is an Itô process of the form

$$
Y_t^{(r,\,n-r)} = \int_0^t \mu_s^{r,\,n-r} \, ds + \int_0^t \sigma_s^{r,\,n-r,\,j} \, dW_s^j , \qquad n \in \mathbb{N}_0 , \ r = 0, \ldots, n .
$$

The coefficients $\mu^{r,\,n-r}$ and $\sigma^{r,\,n-r,\,j}$ are defined recursively in terms of the lower-order terms of the expansion, through iterated-commutator operators $S$ analogous to the deterministic $S_n^{(j)}$, together with the Itô correction terms already visible at second order above; the explicit recursion is given in Kamm, Pagliarani & Pascucci 2020.[1]

3. Applications

Since the 1960s, the Magnus expansion has been successfully applied as a perturbative tool in numerous areas of physics and chemistry, from atomic and molecular physics to nuclear magnetic resonance[3] and quantum electrodynamics. It has also been used since 1998 as a tool to construct practical algorithms for the numerical integration of matrix linear differential equations. As they inherit from the Magnus expansion the preservation of qualitative traits of the problem, the corresponding schemes are prototypical examples of geometric numerical integrators.
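As a sketch of such a scheme, the following implements the standard fourth-order Magnus integrator built on two Gauss–Legendre nodes (one commutator per step); the coefficient matrix is again the hypothetical $A(t) = X + tZ$ from the earlier sketches.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

# Hypothetical A(t) = X + t*Z, as before.
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
A = lambda t: X + t * Z

def magnus4_step(Y, t, h):
    """One step of the two-node Gauss-Legendre Magnus integrator (order 4)."""
    c1, c2 = 0.5 - np.sqrt(3) / 6, 0.5 + np.sqrt(3) / 6
    A1, A2 = A(t + c1 * h), A(t + c2 * h)
    Omega = 0.5 * h * (A1 + A2) + (np.sqrt(3) / 12) * h**2 * (A2 @ A1 - A1 @ A2)
    return expm(Omega) @ Y

T, n_steps = 1.0, 50
h = T / n_steps
Y = np.eye(2)
for k in range(n_steps):
    Y = magnus4_step(Y, k * h, h)

# High-accuracy reference for the global error.
Y_ref = solve_ivp(lambda t, y: (A(t) @ y.reshape(2, 2)).ravel(),
                  (0.0, T), np.eye(2).ravel(), rtol=1e-13, atol=1e-13
                  ).y[:, -1].reshape(2, 2)
print(np.linalg.norm(Y - Y_ref))  # O(h^4) global error; exponent stays in the Lie algebra
```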

References

  1. Kamm, Pagliarani & Pascucci 2020
  2. Kamm, Pagliarani & Pascucci 2020, Theorem 1.1
  3. Haeberlen, U.; Waugh, J. S. (1968). "Coherent Averaging Effects in Magnetic Resonance". Phys. Rev. 175 (2): 453–467. doi:10.1103/PhysRev.175.453. Bibcode:1968PhRv..175..453H.