The general eigenvalue problem
For what values $\lambda$ does the equation
$$Av = \lambda v$$
have a nonzero solution $v$? For such a $\lambda$, what are all the solution vectors $v$?
The question may be rephrased as a question about the linear operator $A - \lambda I$, where $I$ is the identity on $V$. Since $A - \lambda I$ is invertible whenever $\det(A - \lambda I)$ is nonzero, one might expect that $A - \lambda I$ should be invertible for “most” $\lambda$. As usual, when dealing with infinite-dimensional spaces, the situation is more complicated.
A special situation arises when $V$ has an inner product under which $A$ is self-adjoint. In this case, $A$ has a discrete set of eigenvalues, and if $v_1$ and $v_2$ are eigenvectors corresponding to distinct eigenvalues, then $v_1$ and $v_2$ are orthogonal. In fact, since the inner product makes $V$ into a normed linear space, one can find an orthonormal basis for $V$ consisting entirely of eigenvectors of $A$.
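In the finite-dimensional case these facts are easy to check numerically. A minimal sketch, assuming NumPy is available (the matrix below is an arbitrary illustrative example):

```python
import numpy as np

# A symmetric (self-adjoint) matrix: its eigenvalues are real, and
# eigenvectors belonging to distinct eigenvalues are orthogonal.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# eigh is specialized for symmetric/Hermitian matrices; the columns
# of V form an orthonormal basis of eigenvectors.
w, V = np.linalg.eigh(A)

print(w)                  # eigenvalues, in ascending order
print(V[:, 0] @ V[:, 1])  # ~ 0: eigenvectors for distinct eigenvalues
print(V.T @ V)            # ~ identity: the basis is orthonormal
```

Here `V.T @ V` being (numerically) the identity is exactly the statement that the eigenvectors form an orthonormal basis.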
Differential eigenvalue problems
Many problems in physics and elsewhere lead to differential eigenvalue problems, that is, problems where the vector space is some space of differentiable functions and where the linear operator involves multiplication by functions and taking derivatives. Such problems arise, for example, from the method of separation of variables. One well-studied class of eigenvalue problems is that of Sturm–Liouville problems, which always lead to self-adjoint operators. The sequences of eigenvectors obtained are therefore orthogonal under a suitable inner product.
An example of a Sturm–Liouville problem is this: Find a function $y$ satisfying
$$y'' + \lambda y = 0, \qquad y(0) = y(\pi) = 0.$$
Observe that for most values of $\lambda$, there is only the solution $y = 0$. If $\lambda = n^2$ for some positive integer $n$, though, $y = \sin(nx)$ is a solution. Observe that if $m \neq n$, then
$$\int_0^\pi \sin(mx)\sin(nx)\,dx = 0.$$
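The orthogonality of the eigenfunctions can be checked numerically. A sketch using NumPy (assumed available), taking the classical problem $y'' + \lambda y = 0$ with $y(0) = y(\pi) = 0$, whose eigenfunctions are $\sin(nx)$:

```python
import numpy as np

# Eigenfunctions of y'' + lam*y = 0 with y(0) = y(pi) = 0 are
# y_n(x) = sin(n x), with eigenvalue lam = n^2.
x = np.linspace(0.0, np.pi, 100001)
dx = x[1] - x[0]

def inner(f, g):
    # L^2 inner product on [0, pi], approximated by a Riemann sum
    return float(np.sum(f * g) * dx)

y2, y3 = np.sin(2 * x), np.sin(3 * x)
print(inner(y2, y3))  # ~ 0: eigenfunctions of distinct eigenvalues
print(inner(y2, y2))  # ~ pi/2, so y2 is not the zero function
```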
Matrix eigenvalue problems
Matrix eigenvalue problems arise in a number of different situations. The eigenvalues of a matrix describe its behaviour in a coordinate-independent way; theorems about diagonalization allow efficient computation of matrix powers, for example. As a result, matrix eigenvalues are useful in statistics, for instance in analyzing Markov chains and in the fundamental theorem of demography.
Matrix eigenvalue problems also arise as the discretization of differential eigenvalue problems.
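Such a discretization can be made concrete. A sketch, assuming NumPy and taking the problem $-y'' = \lambda y$, $y(0) = y(\pi) = 0$ as an illustrative example: central differences on an interior grid turn the differential eigenvalue problem into a symmetric tridiagonal matrix eigenvalue problem whose smallest eigenvalues approximate the exact ones, $\lambda_n = n^2$.

```python
import numpy as np

# Discretize -y'' = lam*y on (0, pi), y(0) = y(pi) = 0, with central
# differences on N interior grid points. The second derivative becomes
# a symmetric tridiagonal matrix.
N = 200
h = np.pi / (N + 1)
A = (np.diag(2.0 * np.ones(N))
     + np.diag(-np.ones(N - 1), 1)
     + np.diag(-np.ones(N - 1), -1)) / h**2

w = np.linalg.eigvalsh(A)
print(w[:3])  # ~ [1, 4, 9]: approximations to lam_n = n^2
```

The approximation error grows with $n$, so only the lowest eigenvalues of the matrix are good approximations to those of the differential operator.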
An example of where a matrix eigenvalue problem arises is the determination of the main axes of a second order surface $x^{T}Ax = 1$ (defined by a symmetric matrix $A$). The task is to find the places where the normal
$$\nabla(x^{T}Ax) = 2Ax$$
is parallel to the vector $x$, i.e. $Ax = \lambda x$.
A solution $x$ of the above equation with $x^{T}Ax = 1$ has the squared distance $x^{T}x$ from the origin. Therefore, $1 = x^{T}Ax = \lambda x^{T}x$ and $x^{T}x = 1/\lambda$. The main axes are $1/\sqrt{\lambda_i}$.
Numerical eigenvalue problems
Frequently, one wishes to solve the eigenvalue problem approximately (generally on a computer). While one can do this using generic matrix methods such as Gaussian elimination, $LU$ factorization, and others, these have problems due to roundoff error when applied to eigenvalue problems. Other methods are necessary. For example, a $QR$-based method is a much more adequate tool ([Golub89]); it works as follows. Assume that $A$ is diagonalizable. The iteration is given by
$$A_k = Q_k R_k, \qquad A_{k+1} = R_k Q_k.$$
At each step, the matrix $Q_k$ is orthogonal and $R_k$ is upper triangular.
The eigenvalues of $A$ are found by applying the $QR$ decomposition iteratively to $A$. Successive iterates have the same eigenvalues, as they are similar. In particular: $A_1 = A$ is decomposed into $A_1 = Q_1 R_1$, then $A_2$ is computed, $A_2 = R_1 Q_1$. $A_2$ is similar to $A_1$ because $A_2 = R_1 Q_1 = Q_1^{-1} A_1 Q_1$, and $A_2$ is decomposed into $A_2 = Q_2 R_2$. Then $A_3$ is formed, $A_3 = R_2 Q_2$, etc. In this way a sequence of $A_k$'s (with the same eigenvalues) is generated, which finally converges (for conditions, see [Golub89]) to an upper triangular matrix, with the eigenvalues on the diagonal, for the Hessenberg case, and to a diagonal matrix for the tridiagonal case.
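The iteration described above can be sketched in a few lines. A bare-bones version assuming NumPy, with no shifts or Hessenberg reduction (which practical implementations use for speed); the matrix is an arbitrary symmetric example:

```python
import numpy as np

# Unshifted QR iteration: repeatedly factor A_k = Q_k R_k and form
# A_{k+1} = R_k Q_k, which is similar to A_k since
# R_k Q_k = Q_k^{-1} (Q_k R_k) Q_k.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
Ak = A.copy()
for _ in range(50):
    Q, R = np.linalg.qr(Ak)
    Ak = R @ Q

# For this symmetric matrix the iterates converge to a diagonal
# matrix carrying the eigenvalues of A.
print(np.diag(Ak))
print(np.linalg.eigvalsh(A))
```

Each step is an orthogonal similarity transform, so the spectrum is preserved exactly; only the off-diagonal entries are driven to zero.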
Originally from The Data Analysis Briefbook (http://rkb.home.cern.ch/rkb/titleA.html)
[Golub89] Gene H. Golub and Charles F. van Loan: Matrix Computations, 2nd edn., The Johns Hopkins University Press, 1989.
Date of creation: 2013-03-22 12:11:30
Last modified on: 2013-03-22 12:11:30
Last modified by: archibal (4430)