Matrix function

In mathematics, a matrix function is a function which maps a matrix to another matrix.

Extending scalar functions to matrix functions

There are several techniques for lifting a real function to a square matrix function such that interesting properties are maintained. All of the following techniques yield the same matrix function, but the domains on which the function is defined may differ.

Power series

If the real function f has the Taylor expansion

$$ f(x) = f(0) + f'(0)\,x + \frac{f''(0)}{2!}\,x^2 + \cdots $$

then a matrix function can be defined by substituting x by a matrix: the powers become matrix powers, the additions become matrix sums and the multiplications become scaling operations. If the real series converges for |x| < r, then the corresponding matrix series converges for a matrix argument A whenever ‖A‖ < r for some submultiplicative matrix norm, i.e. one which satisfies ‖AB‖ ≤ ‖A‖ ‖B‖.
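As a minimal numerical sketch of this definition (not part of the standard treatment; the example matrix and the truncation order are arbitrary choices), the truncated series for the exponential can be compared against scipy.linalg.expm:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical example matrix; for exp the series converges for every A.
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

def exp_taylor(A, terms=30):
    """Approximate exp(A) by the truncated power series sum_k A^k / k!."""
    n = A.shape[0]
    result = np.eye(n)
    term = np.eye(n)
    for k in range(1, terms):
        term = term @ A / k          # builds A^k / k! incrementally
        result = result + term
    return result

print(np.allclose(exp_taylor(A), expm(A)))   # expected: True
```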

Diagonalizable matrices

If the matrix A is diagonalizable, the problem reduces to applying the function to each eigenvalue. That is to say, we can find an invertible matrix P and a diagonal matrix D such that A = PDP⁻¹. Applying the power series definition to this decomposition, we find that f(A) is given by

$$ f(A) = P \begin{bmatrix} f(d_1) & & \\ & \ddots & \\ & & f(d_n) \end{bmatrix} P^{-1}, $$

where d₁, …, dₙ denote the diagonal entries of D.

For example, suppose one is seeking A! = Γ(A + 1) for a diagonalizable matrix A. Writing A = PDP⁻¹, application of the formula then simply yields

$$ A! = P \begin{bmatrix} \Gamma(d_1 + 1) & & \\ & \ddots & \\ & & \Gamma(d_n + 1) \end{bmatrix} P^{-1}. $$

Likewise, any other scalar function, such as a power x ↦ xᵏ, can be evaluated at A in the same way.
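A short sketch of this recipe in code, assuming an arbitrarily chosen 2×2 diagonalizable matrix; scipy.linalg.funm serves as an independent cross-check:

```python
import numpy as np
from scipy.linalg import funm
from scipy.special import gamma

# Arbitrarily chosen diagonalizable matrix (illustrative only).
A = np.array([[1.0, 3.0],
              [2.0, 1.0]])

# Diagonalize: A = P D P^{-1}, with the eigenvalues on the diagonal of D.
d, P = np.linalg.eig(A)

def f(x):
    return gamma(x + 1)              # scalar function x -> x! = Gamma(x + 1)

# Apply f to the eigenvalues and transform back.
fA = P @ np.diag(f(d)) @ np.linalg.inv(P)

# Independent check using SciPy's general matrix-function routine.
print(np.allclose(fA, funm(A, f)))   # expected: True
```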

Jordan decomposition

Main article: Jordan normal form

All matrices, whether they are diagonalizable or not, have a Jordan normal form A = TJT⁻¹, where the matrix J consists of Jordan blocks. Consider these blocks separately and apply the power series to each Jordan block:

$$ f\!\left(\begin{bmatrix} \lambda & 1 & & \\ & \lambda & \ddots & \\ & & \ddots & 1 \\ & & & \lambda \end{bmatrix}\right) = \begin{bmatrix} \frac{f(\lambda)}{0!} & \frac{f'(\lambda)}{1!} & \cdots & \frac{f^{(n-1)}(\lambda)}{(n-1)!} \\ & \frac{f(\lambda)}{0!} & \ddots & \vdots \\ & & \ddots & \frac{f'(\lambda)}{1!} \\ & & & \frac{f(\lambda)}{0!} \end{bmatrix}. $$

This definition can be used to extend the domain of the matrix function beyond the set of matrices whose spectral radius is smaller than the radius of convergence of the power series. Note that there is also a connection to divided differences.
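The formula for a single block can be spot-checked numerically; the sketch below assumes f = exp and a hypothetical 3×3 Jordan block, so the k-th superdiagonal carries e^λ / k!:

```python
import numpy as np
from scipy.linalg import expm

lam = 2.0
# Hypothetical 3x3 Jordan block with eigenvalue lam.
J = np.array([[lam, 1.0, 0.0],
              [0.0, lam, 1.0],
              [0.0, 0.0, lam]])

# For f = exp: f^{(k)}(lam) / k! = exp(lam) / k! on the k-th superdiagonal.
e = np.exp(lam)
fJ = np.array([[e,   e,   e / 2.0],
               [0.0, e,   e],
               [0.0, 0.0, e]])

print(np.allclose(fJ, expm(J)))      # expected: True
```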

A related notion is the Jordan–Chevalley decomposition which expresses a matrix as a sum of a diagonalizable and a nilpotent part.

Hermitian matrices

A Hermitian matrix has all real eigenvalues and can always be diagonalized by a unitary matrix P, according to the spectral theorem. In this case, the Jordan definition is natural. Moreover, this definition allows one to extend standard inequalities for real functions:

If f(a) ≤ g(a) for all eigenvalues a of A, then f(A) ⪯ g(A). (As a convention, X ⪯ Y means that Y − X is a positive-semidefinite matrix.) The proof follows directly from the definition.
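A small numerical illustration of this inequality, assuming a random real symmetric matrix and the scalar bound x ≤ eˣ − 1, which holds for every real x:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = (M + M.T) / 2                    # random real symmetric (Hermitian) matrix

# f(x) = x and g(x) = exp(x) - 1 satisfy f <= g on all of R,
# so A should satisfy A <= expm(A) - I in the semidefinite order.
G = expm(A) - np.eye(4)
eigvals = np.linalg.eigvalsh(G - A)  # eigenvalues of g(A) - f(A)
print(eigvals.min() >= -1e-10)       # expected: True (PSD up to round-off)

# Spectral construction of g(A) via unitary diagonalization, as in the text.
w, U = np.linalg.eigh(A)
G2 = U @ np.diag(np.exp(w) - 1) @ U.T
print(np.allclose(G, G2))            # expected: True
```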

Cauchy integral

Cauchy's integral formula from complex analysis can also be used to generalize scalar functions to matrix functions. Cauchy's integral formula states that for any analytic function f defined on a set D ⊂ ℂ, one has

$$ f(x) = \frac{1}{2\pi i} \oint_C \frac{f(z)}{z - x}\, \mathrm{d}z, $$

where C is a closed curve inside the domain D enclosing x.

Now, replace x by a matrix A and consider a path C inside D that encloses all eigenvalues of A. One possibility to achieve this is to let C be a circle around the origin with radius larger than ‖A‖ for an arbitrary matrix norm ‖•‖. Then, f(A) is definable by

$$ f(A) = \frac{1}{2\pi i} \oint_C f(z)\, (z I - A)^{-1}\, \mathrm{d}z. $$

This integral can readily be evaluated numerically using the trapezium rule, which converges exponentially in this case: the number of correct digits roughly doubles when the number of nodes is doubled. In routine cases, this is bypassed by Sylvester's formula.
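A minimal sketch of this quadrature, assuming f = exp and an arbitrarily chosen 2×2 matrix; equally spaced nodes on a circle enclosing the spectrum make the equal-weight sum exactly the trapezium rule for this periodic integrand:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 2.0],
              [0.0, 3.0]])           # hypothetical example; eigenvalues 1 and 3
n = A.shape[0]

r = 2.0 * np.linalg.norm(A, 2)       # circle radius larger than the 2-norm of A
N = 64                               # number of trapezium-rule nodes
theta = 2.0 * np.pi * np.arange(N) / N
z = r * np.exp(1j * theta)           # nodes on the contour C

# f(A) = (1 / 2*pi*i) * integral_C f(z) (zI - A)^{-1} dz with dz = i*z*dtheta,
# so the equal-weight (trapezium) rule reduces to an average over the nodes.
F = sum(np.exp(zk) * zk * np.linalg.inv(zk * np.eye(n) - A) for zk in z) / N

print(np.allclose(F.real, expm(A)))  # expected: True
```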

This idea applied to bounded linear operators on a Banach space, which can be seen as infinite matrices, leads to the holomorphic functional calculus.

Matrix perturbations

The above Taylor power series allows the scalar a to be replaced by the matrix A. However, this is not true in general when expanding in terms of A(η) = A + ηB about η = 0 unless [A, B] = 0. A counterexample is f(x) = x³, which has a finite-length Taylor series. We compute this in two ways. The distributive law gives

$$ f(A + \eta B) = (A + \eta B)^3 = A^3 + \eta\,(A^2 B + A B A + B A^2) + \eta^2\,(A B^2 + B A B + B^2 A) + \eta^3 B^3, $$

while the scalar Taylor expansion, with the scalars replaced by matrices at the end, gives

$$ f(a + \eta b) = a^3 + 3 a^2 (\eta b) + 3 a (\eta b)^2 + (\eta b)^3 \;\longrightarrow\; A^3 + 3 A^2 (\eta B) + 3 A (\eta B)^2 + (\eta B)^3. $$

The scalar expression assumes commutativity while the matrix expression does not, and thus they cannot be equated directly unless [A, B] = 0. For some f(x) this can be dealt with using the same method as for the scalar Taylor series. For example, take f(x) = 1/x: if A⁻¹ exists, then f(A + ηB) = f(I + ηA⁻¹B) f(A). The expansion of the first term then follows the power series given above,

$$ f\!\left(I + \eta A^{-1} B\right) = \sum_{n=0}^{\infty} \left(-\eta A^{-1} B\right)^n = I - \eta A^{-1} B + \eta^2 \left(A^{-1} B\right)^2 - \cdots $$

The convergence criteria of the power series then apply, requiring ‖ηA⁻¹B‖ to be sufficiently small under the appropriate matrix norm. For more general problems, which cannot be rewritten in such a way that the two matrices commute, the ordering of matrix products produced by repeated application of the Leibniz rule must be tracked.
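A short numerical sketch of this expansion for f(x) = 1/x, assuming arbitrarily chosen matrices for which ‖ηA⁻¹B‖ is small:

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.eye(3) + 0.1 * rng.standard_normal((3, 3))   # well-conditioned A
B = rng.standard_normal((3, 3))
eta = 0.01                                          # small perturbation parameter

Ainv = np.linalg.inv(A)
X = -eta * Ainv @ B                                 # X = -eta * A^{-1} B

# (A + eta*B)^{-1} = (I + eta*A^{-1}B)^{-1} A^{-1} = (sum_n X^n) A^{-1}
series = np.eye(3)
term = np.eye(3)
for _ in range(10):
    term = term @ X
    series = series + term

approx = series @ Ainv
exact = np.linalg.inv(A + eta * B)
print(np.linalg.norm(approx - exact))               # expected: tiny (near machine precision)
```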

Examples

Familiar examples of matrix functions built in this way include the matrix polynomial, the matrix square root, the matrix exponential, and the matrix logarithm.

Classes of matrix functions

Using the semidefinite ordering (X ⪯ Y ⇔ Y − X is positive-semidefinite and X ≺ Y ⇔ Y − X is positive definite), some of the classes of scalar functions can be extended to matrix functions of Hermitian matrices.[1]

Operator monotone

A function f is called operator monotone if and only if

$$ 0 \prec A \preceq H \;\Longrightarrow\; f(A) \preceq f(H) $$

for all self-adjoint matrices A, H with spectra in the domain of f. This is analogous to a monotone function in the scalar case.

Operator concave/convex

A function f is called operator concave if and only if

$$ \tau\, f(A) + (1 - \tau)\, f(H) \preceq f\!\left(\tau A + (1 - \tau) H\right) $$

for all self-adjoint matrices A, H with spectra in the domain of f and τ ∈ [0, 1]. This definition is analogous to a concave scalar function. An operator convex function can be defined by switching ⪯ to ⪰ in the definition above.

Examples

The matrix logarithm is both operator monotone and operator concave; the matrix square is operator convex; the matrix exponential is none of these. Loewner's theorem states that a function on an open interval is operator monotone if and only if it has an analytic extension to the upper and lower complex half-planes so that the upper half-plane is mapped to itself.[1]
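These properties can be spot-checked numerically; the sketch below uses randomly generated positive definite matrices and is an illustration, not a proof:

```python
import numpy as np
from scipy.linalg import logm

rng = np.random.default_rng(2)
n = 4
M = rng.standard_normal((n, n))
C = rng.standard_normal((n, n))

A = M @ M.T + np.eye(n)              # random positive definite A
H = A + C @ C.T                      # H >= A in the semidefinite order

# Operator monotonicity of log: 0 < A <= H should give log(A) <= log(H).
diff = logm(H) - logm(A)
print(np.linalg.eigvalsh(diff).min() >= -1e-8)        # expected: True

# Operator convexity of the square: for f(x) = x^2 and t in [0, 1],
# f(t*A + (1-t)*H) <= t*f(A) + (1-t)*f(H) in the semidefinite order.
t = 0.5
mix = t * A + (1 - t) * H
lhs = mix @ mix
rhs = t * A @ A + (1 - t) * H @ H
print(np.linalg.eigvalsh(rhs - lhs).min() >= -1e-8)   # expected: True
```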

Notes

  1. Bhatia, R. (1997). Matrix Analysis. Graduate Texts in Mathematics, vol. 169. Springer.

