eigenvalue problem
The general eigenvalue problem
Suppose we have a vector space $V$ and a linear operator $A\colon V \to V$. Then the eigenvalue problem is this:
For what values $\lambda$ does the equation
$$Ax = \lambda x$$
have a nonzero solution $x \in V$? For such a $\lambda$, what are all the solution vectors $x$?
Values $\lambda$ admitting a nonzero solution are called eigenvalues; the corresponding nonzero solutions $x$ are called eigenvectors.
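For a small concrete illustration (a standard $2\times 2$ example), take
$$A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}.$$
Then $A\begin{pmatrix}1\\1\end{pmatrix} = 3\begin{pmatrix}1\\1\end{pmatrix}$ and $A\begin{pmatrix}1\\-1\end{pmatrix} = 1\cdot\begin{pmatrix}1\\-1\end{pmatrix}$, so the eigenvalues are $3$ and $1$, with eigenvectors $(1,1)^{\mathrm{T}}$ and $(1,-1)^{\mathrm{T}}$ respectively.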
The question may be rephrased as a question about the linear operator $A - \lambda I$, where $I$ is the identity operator on $V$: $\lambda$ is an eigenvalue exactly when $A - \lambda I$ fails to be injective. Since $A - \lambda I$ is invertible whenever $\det(A - \lambda I)$ is nonzero, one might expect that $A - \lambda I$ should be invertible for "most" $\lambda$. As usual, when dealing with infinite-dimensional spaces, the situation is more complicated.
A special situation arises when $V$ has an inner product under which $A$ is self-adjoint. In this case, $A$ has a discrete set of eigenvalues, and if $x_1$ and $x_2$ are eigenvectors corresponding to distinct eigenvalues, then $x_1$ and $x_2$ are orthogonal. In fact, since the inner product makes $V$ into a normed linear space, one can find an orthonormal basis for $V$ consisting entirely of eigenvectors of $A$.
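In the finite-dimensional real symmetric case this is easy to check numerically. The following sketch (assuming NumPy is available; the matrix is an arbitrary illustrative choice) verifies that the eigenvectors form an orthonormal set:

    import numpy as np

    # A real symmetric matrix is self-adjoint with respect to the standard inner product.
    A = np.array([[2.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])

    # eigh is specialized to symmetric/Hermitian matrices and returns an
    # orthonormal set of eigenvectors (the columns of V).
    eigenvalues, V = np.linalg.eigh(A)

    # V^T V should be the identity: the eigenvectors form an orthonormal basis.
    print(np.allclose(V.T @ V, np.eye(3)))   # True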
Differential eigenvalue problems
Many problems in physics and elsewhere lead to differential eigenvalue problems, that is, problems where the vector space $V$ is some space of differentiable functions and the linear operator $A$ involves multiplication by functions and taking derivatives. Such problems arise, for example, from the method of separation of variables. One well-studied class of eigenvalue problems is the class of Sturm-Liouville problems, which always lead to self-adjoint operators. The sequences of eigenvectors obtained are therefore orthogonal under a suitable inner product.
An example of a Sturm-Liouville problem is this: Find a function $y(x)$ satisfying
$$y'' + \lambda y = 0$$
and the boundary conditions
$$y(0) = y(\pi) = 0.$$
Observe that for most values of $\lambda$, the only solution is $y = 0$. If $\lambda = n^2$ for some positive integer $n$, though, $y = \sin(nx)$ is a solution. Observe that if $m \neq n$, then
$$\int_0^\pi \sin(mx)\sin(nx)\,dx = 0.$$
Moreover, recalling the properties of Fourier series, we see that any suitable function satisfying the boundary conditions can be written as an infinite linear combination of eigenvectors of this problem, that is, as a series in the functions $\sin(nx)$.
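The orthogonality relation above is also easy to check numerically; a minimal sketch (assuming SciPy is available, with $m$ and $n$ chosen arbitrarily):

    import numpy as np
    from scipy.integrate import quad

    m, n = 2, 5  # distinct positive integers

    # Inner product of the eigenfunctions sin(mx) and sin(nx) on [0, pi].
    value, _ = quad(lambda x: np.sin(m * x) * np.sin(n * x), 0.0, np.pi)
    print(abs(value) < 1e-10)   # True: the eigenfunctions are orthogonal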
Many of the families of special functions that turn up throughout applied mathematics do so precisely because they form an orthogonal family of eigenvectors for a Sturm-Liouville problem. For example, the trigonometric functions sine and cosine, as well as the Bessel functions, arise in this way.
Matrix eigenvalue problems
Matrix eigenvalue problems arise in a number of different situations. The eigenvalues of a matrix describe its behaviour in a coordinate-independent way; for example, theorems about diagonalization allow matrix powers to be computed efficiently. As a result, matrix eigenvalues are useful in statistics, for instance in analyzing Markov chains and in the fundamental theorem of demography.
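As a sketch of the matrix-power remark (assuming NumPy; the transition matrix below is a made-up two-state Markov chain), diagonalization $A = PDP^{-1}$ gives $A^k = PD^kP^{-1}$, so only the diagonal entries need to be raised to the $k$-th power:

    import numpy as np

    # Column-stochastic transition matrix of a hypothetical 2-state Markov chain.
    A = np.array([[0.9, 0.2],
                  [0.1, 0.8]])

    eigenvalues, P = np.linalg.eig(A)

    k = 50
    # A^k = P D^k P^{-1}; raising D to a power only requires powering its diagonal.
    A_power = P @ np.diag(eigenvalues**k) @ np.linalg.inv(P)

    print(np.allclose(A_power, np.linalg.matrix_power(A, k)))   # True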
Matrix eigenvalue problems also arise as the discretization of differential eigenvalue problems.
An example of where a matrix eigenvalue problem arises is the determination of the main axes of a second order surface $x^{\mathrm{T}}Ax = 1$ (defined by a symmetric matrix $A$). The task is to find the places where the normal
$$\nabla(x^{\mathrm{T}}Ax) = 2Ax$$
is parallel to the vector $x$, i.e. $Ax = \lambda x$.
A solution $x$ of the above equation with $x^{\mathrm{T}}Ax = 1$ has the squared distance $x^{\mathrm{T}}x$ from the origin. Therefore, $\lambda\, x^{\mathrm{T}}x = x^{\mathrm{T}}Ax = 1$ and $x^{\mathrm{T}}x = 1/\lambda$. The main axes are $1/\sqrt{\lambda_i}$.
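A small numerical sketch of this (assuming NumPy; the symmetric matrix is an arbitrary example defining an ellipse $x^{\mathrm{T}}Ax = 1$):

    import numpy as np

    # Symmetric positive definite matrix defining the surface x^T A x = 1.
    A = np.array([[2.0, 1.0],
                  [1.0, 3.0]])

    eigenvalues, _ = np.linalg.eigh(A)

    # Along each main axis, x^T x = 1/lambda, so the semi-axis lengths are 1/sqrt(lambda).
    semi_axes = 1.0 / np.sqrt(eigenvalues)
    print(semi_axes)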
The matrix eigenvalue problem can be written as
$$(A - \lambda I)x = 0.$$
A non-trivial solution to this system of linear homogeneous equations exists if and only if the determinant
$$\det(A - \lambda I) = 0.$$
This $n$th degree polynomial in $\lambda$ is called the characteristic polynomial. Its roots $\lambda_i$ are called the eigenvalues, and the corresponding vectors $x_i$ are called eigenvectors. In the example above, $x$ is a right eigenvector for $\lambda$; a left eigenvector $y$ is defined by $y^{\mathrm{T}}A = \lambda y^{\mathrm{T}}$.
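A sketch of these definitions in code (assuming NumPy; the matrix is again an arbitrary example):

    import numpy as np

    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])

    # Coefficients of det(A - lambda I), highest degree first: lambda^2 - 7 lambda + 10.
    char_poly = np.poly(A)
    print(np.roots(char_poly))          # eigenvalues: 5 and 2

    # Right eigenvectors: A x = lambda x (columns of X).
    eigenvalues, X = np.linalg.eig(A)

    # Left eigenvectors y^T A = lambda y^T are right eigenvectors of A^T.
    eigenvalues_left, Y = np.linalg.eig(A.T)
    y, lam = Y[:, 0], eigenvalues_left[0]
    print(np.allclose(y @ A, lam * y))  # True: y is a left eigenvector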
Numerical eigenvalue problems
Frequently, one wishes to solve the eigenvalue problem approximately (generally on a computer). While one can do this using generic matrix methods such as Gaussian elimination, $LU$ factorization, and others, these have problems due to roundoff error when applied to eigenvalue problems. Other methods are necessary. For example, a $QR$-based method is a much more adequate tool ([Golub89]); it works as follows. Assume that $A$ is diagonalizable. The iteration is given by
$$A_0 = A$$
for $k = 1, 2, \ldots$
    $Q_k R_k = A_{k-1}$  ($QR$ factorization)
    $A_k = R_k Q_k$
end
At each step, the matrix $Q_k$ is orthogonal and $R_k$ is upper triangular.
Note that
$$A_k = R_k Q_k = Q_k^{\mathrm{T}} (Q_k R_k) Q_k = Q_k^{\mathrm{T}} A_{k-1} Q_k,$$
so each $A_k$ is orthogonally similar to $A$ and has the same eigenvalues.
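A minimal sketch of the basic (unshifted) iteration, assuming NumPy; practical implementations add shifts and deflation, as discussed in [Golub89]:

    import numpy as np

    def qr_iteration(A, steps=200):
        """Repeatedly factor A_{k-1} = Q_k R_k and form A_k = R_k Q_k."""
        Ak = np.array(A, dtype=float)
        for _ in range(steps):
            Q, R = np.linalg.qr(Ak)
            Ak = R @ Q          # similar to A: A_k = Q_k^T A_{k-1} Q_k
        return Ak

    A = np.array([[4.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])

    Ak = qr_iteration(A)
    # For this symmetric example the diagonal converges to the eigenvalues.
    print(np.sort(np.diag(Ak)))
    print(np.sort(np.linalg.eigvalsh(A)))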
For a full matrix, the $QR$ iteration requires $O(n^3)$ flops per step. This is prohibitively expensive, so we first reduce $A$ to an upper Hessenberg matrix $H$ using an orthogonal similarity transformation:
$$Q^{\mathrm{T}} A Q = H$$
($H$ is upper Hessenberg if $h_{ij} = 0$ for $i > j+1$). We will use Householder transformations to achieve this. Note that if $A$ is symmetric then $H$ is symmetric, and hence tridiagonal.
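A sketch of this reduction step, assuming SciPy is available (scipy.linalg.hessenberg performs a Householder-based reduction; the matrix below is an arbitrary symmetric example, so its Hessenberg form is tridiagonal):

    import numpy as np
    from scipy.linalg import hessenberg

    A = np.array([[4.0, 1.0, 2.0, 3.0],
                  [1.0, 3.0, 0.0, 1.0],
                  [2.0, 0.0, 2.0, 1.0],
                  [3.0, 1.0, 1.0, 5.0]])

    # H = Q^T A Q with Q orthogonal; H is upper Hessenberg.
    H, Q = hessenberg(A, calc_q=True)

    print(np.allclose(Q.T @ A @ Q, H))       # True: orthogonal similarity
    print(np.allclose(np.tril(H, -2), 0.0))  # True: zeros below the first subdiagonal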
The eigenvalues of $H$ are then found by applying the $QR$ decomposition iteratively to $H$; all the matrices generated have the same eigenvalues, since they are similar. In particular: $H$ is decomposed into $Q_1 R_1$, then $H_1$ is computed, $H_1 = R_1 Q_1$. $H_1$ is similar to $H$ because $H_1 = R_1 Q_1 = Q_1^{\mathrm{T}} H Q_1$, and $H_1$ is decomposed into $Q_2 R_2$. Then $H_2$ is formed, $H_2 = R_2 Q_2$, and so on. In this way a sequence of matrices $H_k$ (all with the same eigenvalues) is generated, which finally converges (for conditions, see [Golub89]) to an upper triangular matrix, with the eigenvalues on the diagonal, in the Hessenberg case, and to a diagonal matrix in the tridiagonal (symmetric) case.
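Putting the two steps together, a short sketch (assuming NumPy and SciPy as above) that reduces a symmetric matrix to tridiagonal form, runs the unshifted $QR$ iteration, and compares the resulting diagonal with eigenvalues computed directly:

    import numpy as np
    from scipy.linalg import hessenberg

    A = np.array([[4.0, 1.0, 2.0],
                  [1.0, 3.0, 0.0],
                  [2.0, 0.0, 2.0]])

    H = hessenberg(A)        # A is symmetric, so H is tridiagonal
    for _ in range(500):
        Q, R = np.linalg.qr(H)
        H = R @ Q            # each step preserves the eigenvalues

    print(np.sort(np.diag(H)))             # diagonal of the (nearly) converged matrix
    print(np.sort(np.linalg.eigvalsh(A)))  # agrees with a direct computation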
References
- DAB: Originally from The Data Analysis Briefbook (http://rkb.home.cern.ch/rkb/titleA.html)
- Golub89: Gene H. Golub and Charles F. Van Loan: Matrix Computations, 2nd edn., The Johns Hopkins University Press, 1989.
Title | eigenvalue problem |
Canonical name | EigenvalueProblem |
Date of creation | 2013-03-22 12:11:30 |
Last modified on | 2013-03-22 12:11:30 |
Owner | archibal (4430) |
Last modified by | archibal (4430) |
Numerical id | 22 |
Author | archibal (4430) |
Entry type | Definition |
Classification | msc 65F15 |
Classification | msc 65-00 |
Classification | msc 15A18 |
Classification | msc 15-00 |
Related topic | Eigenvalue |
Related topic | Eigenvector |
Related topic | SimilarMatrix |
Related topic | SolvingTheWaveEquationByDBernoulli |
Related topic | TimeDependentExampleOfHeatEquation |