eigenvalue problem


The general eigenvalue problem

Suppose we have a vector space V and a linear operator A ∈ End(V). Then the eigenvalue problem is this:

For what values λ does the equation

Ax=λx

have a nonzero solution x? For such a λ, what are all the solution vectors x?

Values λ admitting a solution are called eigenvalues; nonzero solutions x are called eigenvectors.

The question may be rephrased as a question about the linear operator (A - λI), where I is the identity on V. Since λI is invertible whenever λ is nonzero, one might expect that (A - λI) should be invertible for "most" λ. As usual, when dealing with infinite-dimensional spaces, the situation is more complicated.

A special situation arises when V has an inner product under which A is self-adjoint. In this case, A has a discrete set of eigenvalues, and if $x_{\lambda_1}$ and $x_{\lambda_2}$ are eigenvectors corresponding to distinct eigenvalues $\lambda_1 \neq \lambda_2$, then $x_{\lambda_1}$ and $x_{\lambda_2}$ are orthogonal. In fact, since the inner product makes V into a normed linear space, one can find an orthonormal basis for V consisting entirely of eigenvectors of A.
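
To see why eigenvectors for distinct eigenvalues are orthogonal, recall that the eigenvalues of a self-adjoint operator are real and that self-adjointness means ⟨Ax, y⟩ = ⟨x, Ay⟩; then

$$\lambda_1 \langle x_{\lambda_1}, x_{\lambda_2} \rangle = \langle A x_{\lambda_1}, x_{\lambda_2} \rangle = \langle x_{\lambda_1}, A x_{\lambda_2} \rangle = \lambda_2 \langle x_{\lambda_1}, x_{\lambda_2} \rangle,$$

and since $\lambda_1 \neq \lambda_2$, the inner product $\langle x_{\lambda_1}, x_{\lambda_2} \rangle$ must vanish.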

Differential eigenvalue problems

Many problems in physics and elsewhere lead to differential eigenvalue problems, that is, problems where the vector space is some space of differentiable functions and where the linear operator involves multiplication by functions and taking derivatives. Such problems arise from the method of separation of variables, for example. One well-studied class of eigenvalue problems is the class of Sturm-Liouville problems, which always lead to self-adjoint operators. The sequences of eigenvectors obtained are therefore orthogonal under a suitable inner product.

An example of a Sturm-Liouville problem is this: Find a function f(x) satisfying

f′′(x)=-λf(x)

and

f(0)=f(1)=0.

Observe that for most values of λ, the only solution is f(x)=0. If λ=(nπ)² for some positive integer n, though, $\sin(\sqrt{\lambda}\,x) = \sin(n\pi x)$ is a solution. Observe that if n ≠ m, then

$$\int_0^1 \sin(n\pi x)\,\sin(m\pi x)\,dx = 0.$$

Moreover, recalling the properties of Fourier series, we see that any function satisfying the boundary conditions can be written as an infinite linear combination of eigenvectors of this problem.
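
As an illustrative sketch (the helper name overlap is arbitrary), the orthogonality relation above can be checked numerically:

import numpy as np
from scipy.integrate import quad

# Inner product of two eigenfunctions sin(n*pi*x) and sin(m*pi*x) on [0, 1].
def overlap(n, m):
    value, _ = quad(lambda x: np.sin(n * np.pi * x) * np.sin(m * np.pi * x), 0.0, 1.0)
    return value

print(overlap(2, 3))   # approximately 0: distinct eigenvalues, orthogonal eigenfunctions
print(overlap(2, 2))   # approximately 0.5: the squared norm of sin(2*pi*x)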

Many of the families of special functions that turn up throughout applied mathematics do so precisely because they are an orthogonal family of eigenvectors for a Sturm-Liouville problem. For example, the trigonometric functions sine and cosine and the Bessel functions both arise in this way.

Matrix eigenvalue problems

Matrix eigenvalue problems arise in a number of different situations. The eigenvalues of a matrix describe its behaviour in a coordinate-independent way; theorems about diagonalization allow efficient computation of matrix powers, for example. As a result, matrix eigenvalues are useful in statistics, for instance in analyzing Markov chains and in the fundamental theorem of demography.
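
As a small illustrative sketch (the transition matrix is made up for the example), the long-run distribution of a Markov chain is an eigenvector of its transition matrix for the eigenvalue 1:

import numpy as np

# Column-stochastic transition matrix of a hypothetical two-state Markov chain.
P = np.array([[0.9, 0.2],
              [0.1, 0.8]])

eigenvalues, eigenvectors = np.linalg.eig(P)
k = np.argmin(np.abs(eigenvalues - 1.0))   # index of the eigenvalue closest to 1
stationary = eigenvectors[:, k].real
stationary /= stationary.sum()             # normalize to a probability vector

print(stationary)       # approximately [0.667, 0.333]
print(P @ stationary)   # unchanged by a further step of the chain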

Matrix eigenvalue problems also arise as the discretization of differential eigenvalue problems.
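
For instance, as a rough sketch, discretizing the earlier problem f''(x) = -λf(x), f(0) = f(1) = 0, with central differences yields a matrix eigenvalue problem whose smallest eigenvalues approximate (nπ)²:

import numpy as np

n = 200              # number of interior grid points
h = 1.0 / (n + 1)    # grid spacing on [0, 1]

# Central-difference approximation of -d^2/dx^2 with zero boundary values:
# a tridiagonal matrix with 2/h^2 on the diagonal and -1/h^2 next to it.
T = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

eigenvalues = np.sort(np.linalg.eigvalsh(T))
print(eigenvalues[:3])                       # close to pi^2, (2 pi)^2, (3 pi)^2
print([(k * np.pi)**2 for k in (1, 2, 3)])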

An example of where a matrix eigenvalue problem arises is the determination of the main axes of a second-order surface $Q = x^T A x = 1$ (defined by a symmetric matrix A). The task is to find the places where the normal

$$\nabla Q = \left( \frac{\partial Q}{\partial x_1}, \ldots, \frac{\partial Q}{\partial x_n} \right) = 2Ax$$

is parallel to the vector x, i.e., Ax = λx.

A solution x of the above equation with $x^T A x = 1$ has the squared distance $x^T x = d^2$ from the origin. Since $x^T A x = \lambda x^T x = 1$, we get $d^2 = 1/\lambda$. The main axes are therefore $a_i = 1/\sqrt{\lambda_i}$ for $i = 1, \ldots, n$.
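
A short sketch of this computation for an ellipse in the plane, with an arbitrarily chosen matrix:

import numpy as np

# Symmetric positive definite matrix defining the surface x^T A x = 1.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

eigenvalues, eigenvectors = np.linalg.eigh(A)   # eigh: symmetric eigenvalue problem
axis_lengths = 1.0 / np.sqrt(eigenvalues)       # a_i = 1 / sqrt(lambda_i)

for length, direction in zip(axis_lengths, eigenvectors.T):
    x = length * direction                 # endpoint of a main axis
    print(length, direction, x @ A @ x)    # last value is 1: the point lies on the surface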

The matrix eigenvalue problem can be written as (A - λI)x = 0. A non-trivial solution to this system of n linear homogeneous equations exists if and only if the determinant vanishes:

$$\det(A - \lambda I) = \begin{vmatrix}
a_{11} - \lambda & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} - \lambda & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn} - \lambda
\end{vmatrix} = 0.$$

This nth-degree polynomial in λ is called the characteristic polynomial. Its roots λ are called the eigenvalues, and the corresponding vectors x the eigenvectors. In the example, x is a right eigenvector for λ; a left eigenvector y is defined by $y^T A = \mu y^T$.
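
A brief sketch connecting the two formulations for a small, arbitrarily chosen matrix:

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

coefficients = np.poly(A)        # characteristic polynomial coefficients, leading term first
print(coefficients)              # [ 1. -5.  5.], i.e. lambda^2 - 5*lambda + 5
print(np.roots(coefficients))    # its roots
print(np.linalg.eigvals(A))      # the same values, computed directly as eigenvalues

Note that forming the characteristic polynomial and finding its roots is a poor numerical strategy for larger matrices; the iterative methods of the next section are preferred.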

Numerical eigenvalue problems

Frequently, one wishes to solve the eigenvalue problem approximately (generally on a computer). While one can do this using generic matrix methods such as Gaussian elimination, LU factorization, and others, these suffer from roundoff error when applied to eigenvalue problems. Other methods are necessary. For example, a QR-based method is a far more suitable tool ([Golub89]); it works as follows. Assume that the n×n matrix A is diagonalizable. The QR iteration is given by

A_0 = A
for k = 0, 1, 2, ...
    A_k =: Q_k R_k
    A_{k+1} := R_k Q_k
end

At each step, the matrix Q_k is orthogonal and R_k is upper triangular.

Note that

$$A_{k+1} = (Q_0 \cdots Q_k)^T A \, Q_0 \cdots Q_k.$$
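
A direct transcription of this iteration in Python, as an illustrative sketch (the fixed iteration count stands in for a proper convergence test):

import numpy as np

def qr_iteration(A, steps=200):
    """Run the unshifted QR iteration and return the final iterate A_k."""
    Ak = np.array(A, dtype=float)
    for _ in range(steps):
        Q, R = np.linalg.qr(Ak)   # A_k = Q_k R_k
        Ak = R @ Q                # A_{k+1} = R_k Q_k, similar to A_k
    return Ak

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

Ak = qr_iteration(A)
print(np.sort(np.diag(Ak)))     # approximations to the eigenvalues of A
print(np.linalg.eigvalsh(A))    # reference values (A is symmetric here)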

For a full matrix, the QR iteration requires O(n³) flops per step. This is prohibitively expensive, so we first reduce A to an upper Hessenberg matrix H using an orthogonal similarity transformation:

$$U^T A U = H$$

(H is upper Hessenberg if $h_{ij} = 0$ for $i > j+1$). We will use Householder transformations to achieve this. Note that if A is symmetric then H is symmetric, and hence tridiagonal.
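
A sketch of this reduction using SciPy's built-in Hessenberg routine, which applies Householder transformations internally; the matrix is chosen arbitrarily and taken symmetric to illustrate the tridiagonal case:

import numpy as np
from scipy.linalg import hessenberg

A = np.array([[4.0, 1.0, 2.0, 0.5],
              [1.0, 3.0, 0.0, 1.0],
              [2.0, 0.0, 2.0, 1.0],
              [0.5, 1.0, 1.0, 5.0]])

H, U = hessenberg(A, calc_q=True)    # A = U H U^T with U orthogonal
print(np.allclose(U.T @ A @ U, H))   # True: H is an orthogonal similarity of A
print(np.allclose(H, H.T))           # True here: A is symmetric, so H is tridiagonal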

The eigenvalues of A are found by applying the QR decomposition iteratively to H. These two matrices have the same eigenvalues, as they are similar. In particular: $H = H_1$ is decomposed into $H_1 = Q_1 R_1$, then $H_2$ is computed as $H_2 = R_1 Q_1$. $H_2$ is similar to $H_1$ because $H_2 = R_1 Q_1 = Q_1^{-1} H_1 Q_1$, and is decomposed into $H_2 = Q_2 R_2$. Then $H_3$ is formed, $H_3 = R_2 Q_2$, and so on. In this way a sequence of $H_i$'s (all with the same eigenvalues) is generated, which finally converges (for conditions, see [Golub89]) to

$$\begin{pmatrix}
\lambda_1 & * & * & \cdots & * & * \\
0 & \lambda_2 & * & \cdots & * & * \\
0 & 0 & \lambda_3 & \cdots & * & * \\
\vdots & & & \ddots & & \vdots \\
0 & 0 & 0 & \cdots & \lambda_{n-1} & * \\
0 & 0 & 0 & \cdots & 0 & \lambda_n
\end{pmatrix}$$

for the Hessenberg and

$$\begin{pmatrix}
\lambda_1 & 0 & 0 & \cdots & 0 & 0 \\
0 & \lambda_2 & 0 & \cdots & 0 & 0 \\
0 & 0 & \lambda_3 & \cdots & 0 & 0 \\
\vdots & & & \ddots & & \vdots \\
0 & 0 & 0 & \cdots & \lambda_{n-1} & 0 \\
0 & 0 & 0 & \cdots & 0 & \lambda_n
\end{pmatrix}$$

for the tridiagonal.
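
Putting the two stages together in a rough sketch (production routines add shifts and deflation; see [Golub89]):

import numpy as np
from scipy.linalg import hessenberg

A = np.array([[4.0, 1.0, 2.0, 0.5],
              [1.0, 3.0, 0.0, 1.0],
              [2.0, 0.0, 2.0, 1.0],
              [0.5, 1.0, 1.0, 5.0]])

H = hessenberg(A)            # tridiagonal here, since A is symmetric
for _ in range(500):         # unshifted QR iteration on H
    Q, R = np.linalg.qr(H)
    H = R @ Q

print(np.sort(np.diag(H)))      # the diagonal converges to the eigenvalues
print(np.linalg.eigvalsh(A))    # reference values (ascending order)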

References

DAB

Originally from The Data Analysis Briefbook (http://rkb.home.cern.ch/rkb/titleA.html)

Golub89

Gene H. Golub and Charles F. Van Loan: Matrix Computations, 2nd edn., The Johns Hopkins University Press, 1989.
