eigenvalue problem
The general eigenvalue problem
Suppose we have a vector space $V$ and a linear operator $A\in \mathrm{End}(V)$. Then the eigenvalue problem is this:
For what values $\lambda $ does the equation
$$Ax=\lambda x$$ 
have a nonzero solution $x$? For such a $\lambda $, what are all the solution vectors $x$?
Values $\lambda $ admitting a solution are called eigenvalues; nonzero solutions $x$ are called eigenvectors.
The question may be rephrased as a question about the linear operator $(A-\lambda I)$, where $I$ is the identity operator on $V$. Since $\lambda I$ is invertible whenever $\lambda $ is nonzero, one might expect that $(A-\lambda I)$ should be invertible for “most” $\lambda $. As usual, when dealing with infinite-dimensional spaces, the situation is more complicated.
A special situation arises when $V$ has an inner product under which $A$ is self-adjoint. In this case (for instance, when $A$ is a compact self-adjoint operator), $A$ has a discrete set of eigenvalues, and if ${x}_{{\lambda}_{1}}$ and ${x}_{{\lambda}_{2}}$ are eigenvectors corresponding to distinct eigenvalues, then ${x}_{{\lambda}_{1}}$ and ${x}_{{\lambda}_{2}}$ are orthogonal. In fact, since the inner product makes $V$ into a normed linear space, one can find an orthonormal basis for $V$ consisting entirely of eigenvectors of $A$.
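As a finite-dimensional illustration (a sketch using NumPy; the symmetric matrix below is an arbitrary example), the eigenvectors of a real symmetric matrix returned by a symmetric eigensolver form an orthonormal basis:

```python
import numpy as np

# A symmetric (hence self-adjoint) matrix; arbitrary example values.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# eigh exploits symmetry: real eigenvalues, orthonormal eigenvectors.
eigenvalues, eigenvectors = np.linalg.eigh(A)

# Eigenvectors for distinct eigenvalues are orthogonal: V^T V = I.
assert np.allclose(eigenvectors.T @ eigenvectors, np.eye(3))
# Each column satisfies A x = lambda x.
assert np.allclose(A @ eigenvectors, eigenvectors * eigenvalues)
```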
Differential eigenvalue problems
Many problems in physics and elsewhere lead to differential eigenvalue problems, that is, problems where the vector space is some space of differentiable functions and where the linear operator involves multiplication by functions and taking derivatives. Such problems arise from the method of separation of variables, for example. One well-studied class of eigenvalue problems is the Sturm–Liouville problems, which always lead to self-adjoint operators. The sequences of eigenvectors obtained are therefore orthogonal under a suitable inner product.
An example of a Sturm–Liouville problem is this: Find a function $f(x)$ satisfying
$${f}^{\prime \prime}(x)=\lambda f(x)$$ 
and
$$f(0)=f(1)=0.$$ 
Observe that for most values of $\lambda $, there is only the solution $f(x)=0$. If $\lambda =-{(n\pi )}^{2}$ for some positive integer $n$, though, $\mathrm{sin}(\sqrt{-\lambda}\,x)=\mathrm{sin}(n\pi x)$ is a solution. Observe that if $n\ne m$, then
$${\int}_{0}^{1}\mathrm{sin}(n\pi x)\mathrm{sin}(m\pi x)\mathit{d}x=0.$$ 
Moreover, recalling the properties of Fourier series, we see that any function satisfying the boundary conditions can be written as an infinite linear combination of eigenvectors of this problem.
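The orthogonality relation above can be checked numerically. The following sketch (assuming NumPy; the small range of $n$ and $m$ and the grid resolution are illustrative choices) approximates the integrals by the midpoint rule:

```python
import numpy as np

# Midpoint-rule quadrature for the integrals of sin(n*pi*x) sin(m*pi*x) on [0, 1].
x = np.linspace(0.0, 1.0, 20001)
midpoints = (x[:-1] + x[1:]) / 2.0
dx = x[1] - x[0]

for n in range(1, 4):
    for m in range(1, 4):
        integral = np.sum(np.sin(n*np.pi*midpoints) * np.sin(m*np.pi*midpoints)) * dx
        expected = 0.5 if n == m else 0.0   # the integral of sin^2(n*pi*x) over [0,1] is 1/2
        assert abs(integral - expected) < 1e-6
```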
Many of the families of special functions that turn up throughout applied mathematics do so precisely because they form an orthogonal family of eigenvectors for a Sturm–Liouville problem. For example, the trigonometric functions sine and cosine and the Bessel functions all arise in this way.
Matrix eigenvalue problems
Matrix eigenvalue problems arise in a number of different situations. The eigenvalues of a matrix describe its behaviour in a coordinate-independent way; theorems about diagonalization allow efficient computation of matrix powers, for example. As a result, matrix eigenvalues are useful in statistics, for example in analyzing Markov chains and in the fundamental theorem of demography.
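For instance, the stationary distribution of a finite Markov chain is a left eigenvector of its transition matrix for the eigenvalue $1$. A sketch (assuming NumPy; the two-state chain is an invented example):

```python
import numpy as np

# Row-stochastic transition matrix of a two-state Markov chain (invented example).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

# Left eigenvectors of P are right eigenvectors of P transpose.
eigenvalues, V = np.linalg.eig(P.T)
k = np.argmin(np.abs(eigenvalues - 1.0))   # pick the eigenvalue 1
pi = np.real(V[:, k])
pi /= pi.sum()                             # normalize to a probability vector

assert np.allclose(pi @ P, pi)             # stationarity: pi P = pi
assert np.allclose(pi, [5/6, 1/6])
```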
Matrix eigenvalue problems also arise as the discretization of differential eigenvalue problems.
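For the boundary value problem $f''=\lambda f$, $f(0)=f(1)=0$ discussed above, a central-difference discretization produces a tridiagonal matrix whose extreme eigenvalues approximate $-(n\pi)^2$. A sketch (assuming NumPy; the grid size is an arbitrary choice):

```python
import numpy as np

# Discretize f'' = lambda f on (0, 1) with f(0) = f(1) = 0 using central
# differences on N interior points; grid spacing h = 1/(N+1).
N = 200
h = 1.0 / (N + 1)
# Tridiagonal second-difference matrix: (f_{i-1} - 2 f_i + f_{i+1}) / h^2.
D2 = (np.diag(-2.0 * np.ones(N)) +
      np.diag(np.ones(N - 1), 1) +
      np.diag(np.ones(N - 1), -1)) / h**2

eigs = np.linalg.eigvalsh(D2)[::-1]   # largest (least negative) first

# The leading discrete eigenvalues approximate -(n*pi)^2, n = 1, 2, 3, ...
for n in range(1, 4):
    assert abs(eigs[n - 1] - (-(n*np.pi)**2)) < 1.0
```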
An example of where a matrix eigenvalue problem arises is the determination of the main axes of a second-order surface $Q={x}^{T}Ax=1$ (defined by a symmetric matrix $A$). The task is to find the places where the normal
$$\nabla (Q)=(\frac{\partial Q}{\partial {x}_{1}},\mathrm{\cdots},\frac{\partial Q}{\partial {x}_{n}})=2Ax$$ 
is parallel to the vector $x$, i.e. $Ax=\lambda x$.
A solution $x$ of the above equation with ${x}^{T}Ax=1$ has squared distance ${x}^{T}x={d}^{2}$ from the origin. Therefore, $\lambda {x}^{T}x=1$ and ${d}^{2}=1/\lambda $. The main axes are ${a}_{i}=1/\sqrt{{\lambda}_{i}}$ $(i=1,\mathrm{\dots},n)$.
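A concrete sketch of this computation (assuming NumPy; the ellipse $2x_1^2+2x_1x_2+2x_2^2=1$ is an invented example):

```python
import numpy as np

# Main axes of the ellipse x^T A x = 1 for a symmetric A (invented example).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues = np.linalg.eigvalsh(A)   # ascending: lambda_1 = 1, lambda_2 = 3
axes = 1.0 / np.sqrt(eigenvalues)     # semi-axis lengths a_i = 1/sqrt(lambda_i)

assert np.allclose(eigenvalues, [1.0, 3.0])
assert np.allclose(axes, [1.0, 1.0/np.sqrt(3.0)])
```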
The matrix eigenvalue problem can be written as $(A-\lambda I)x=0$. A nontrivial solution to this system of $n$ linear homogeneous equations exists if and only if the determinant
$$\det (A-\lambda I)=\begin{vmatrix}{a}_{11}-\lambda & {a}_{12}& \cdots & {a}_{1n}\\ {a}_{21}& {a}_{22}-\lambda & \cdots & {a}_{2n}\\ \vdots & \vdots & \ddots & \vdots \\ {a}_{n1}& {a}_{n2}& \cdots & {a}_{nn}-\lambda \end{vmatrix}=0$$ 
The left-hand side is an $n$th-degree polynomial in $\lambda $, called the characteristic polynomial. Its roots $\lambda $ are the eigenvalues, and the corresponding vectors $x$ the eigenvectors. In the equation above, $x$ is a right eigenvector for $\lambda $; a left eigenvector $y$ is defined by ${y}^{T}A=\mu {y}^{T}$.
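A small numerical check of these definitions (assuming NumPy; the $2\times 2$ matrix is an invented example): the roots of the characteristic polynomial agree with the eigenvalues from a direct solver, and a left eigenvector of $A$ is a right eigenvector of $A^T$:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])   # invented example; eigenvalues 2 and 5

# Coefficients of the characteristic polynomial det(lambda I - A).
char_poly = np.poly(A)
roots = np.sort(np.roots(char_poly).real)
eigenvalues = np.sort(np.linalg.eigvals(A).real)
assert np.allclose(roots, eigenvalues)

# A left eigenvector y satisfies y^T A = lambda y^T, i.e. it is a
# right eigenvector of A transpose.
lam, Y = np.linalg.eig(A.T)
y = Y[:, np.argmax(lam.real)]          # left eigenvector for the larger eigenvalue
assert np.allclose(y @ A, 5.0 * y)
```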
Numerical eigenvalue problems
Frequently, one wishes to solve the eigenvalue problem approximately (generally on a computer). While one can attempt this using generic matrix methods such as Gaussian elimination and $LU$ factorization, these suffer badly from round-off error when applied to eigenvalue problems. Other methods are necessary. For example, a $QR$-based method is a much better-suited tool ([Golub89]); it works as follows. Assume that $A\in {\mathbb{R}}^{n\times n}$ is diagonalizable. The $QR$ iteration is given by
${A}_{0}=A$
for $k=0,1,2,\mathrm{\dots}$
${A}_{k}=:{Q}_{k}{R}_{k}$
${A}_{k+1}:={R}_{k}{Q}_{k}$
end
At each step, the matrix ${Q}_{k}$ is orthogonal and ${R}_{k}$ is upper triangular.
Note that
$${A}_{k+1}={({Q}_{0}\mathrm{\cdots}{Q}_{k})}^{\mathrm{T}}A{Q}_{0}\mathrm{\cdots}{Q}_{k}.$$ 
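A minimal sketch of this unshifted $QR$ iteration (assuming NumPy; the fixed step count and the test matrix are illustrative choices, and practical codes use shifts and deflation instead, see [Golub89]):

```python
import numpy as np

def qr_iteration(A, steps=200):
    """Unshifted QR iteration: A_k =: Q_k R_k, then A_{k+1} := R_k Q_k."""
    Ak = np.array(A, dtype=float)
    for _ in range(steps):
        Q, R = np.linalg.qr(Ak)   # QR factorization of the current iterate
        Ak = R @ Q                # similar to Ak: R Q = Q^T Ak Q
    return Ak

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])        # symmetric test matrix (invented example)
T = qr_iteration(A)

# For this symmetric example the iterates converge to a diagonal matrix
# whose entries are the eigenvalues of A.
assert np.allclose(np.sort(np.diag(T)), np.sort(np.linalg.eigvalsh(A)))
assert np.allclose(np.tril(T, -1), 0.0, atol=1e-8)
```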
For a full matrix, the $QR$ iteration requires $O({n}^{3})$ flops per step. This is prohibitively expensive, so we first reduce $A$ to an upper Hessenberg matrix, $H$, using an orthogonal similarity transformation:
$${U}^{\mathrm{T}}AU=H$$ 
($H$ is upper Hessenberg if ${h}_{ij}=0$ for $i>j+1$). We will use Householder transformations to achieve this. Note that if $A$ is symmetric then $H$ is symmetric, and hence tridiagonal.
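The Householder reduction to Hessenberg form can be sketched as follows (assuming NumPy; the helper name `hessenberg` and the random test matrix are invented for illustration):

```python
import numpy as np

def hessenberg(A):
    """Reduce A to upper Hessenberg form H with U^T A U = H,
    using Householder reflections applied from both sides."""
    H = np.array(A, dtype=float)
    n = H.shape[0]
    U = np.eye(n)
    for k in range(n - 2):
        # Build a reflection zeroing H[k+2:, k].
        x = H[k+1:, k]
        v = x.copy()
        v[0] += np.copysign(np.linalg.norm(x), x[0])
        norm_v = np.linalg.norm(v)
        if norm_v == 0.0:
            continue
        v /= norm_v
        # Apply P = I - 2 v v^T from the left (rows) and right (columns).
        H[k+1:, :] -= 2.0 * np.outer(v, v @ H[k+1:, :])
        H[:, k+1:] -= 2.0 * np.outer(H[:, k+1:] @ v, v)
        U[:, k+1:] -= 2.0 * np.outer(U[:, k+1:] @ v, v)   # accumulate U
    return H, U

A = np.random.default_rng(0).standard_normal((5, 5))
H, U = hessenberg(A)
assert np.allclose(U.T @ A @ U, H)        # orthogonal similarity transform
assert np.allclose(np.tril(H, -2), 0.0)   # upper Hessenberg: h_ij = 0 for i > j+1
```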
The eigenvalues of $A$ are found by iteratively applying the $QR$ decomposition to $H$; these two matrices have the same eigenvalues, as they are similar. In particular: $H={H}_{1}$ is decomposed into ${H}_{1}={Q}_{1}{R}_{1}$, then ${H}_{2}$ is computed as ${H}_{2}={R}_{1}{Q}_{1}$. ${H}_{2}$ is similar to ${H}_{1}$ because ${H}_{2}={R}_{1}{Q}_{1}={Q}_{1}^{-1}{H}_{1}{Q}_{1}$, and is decomposed into ${H}_{2}={Q}_{2}{R}_{2}$. Then ${H}_{3}$ is formed, ${H}_{3}={R}_{2}{Q}_{2}$, etc. In this way a sequence of ${H}_{i}$’s (all with the same eigenvalues) is generated, which finally converges to (for conditions, see [Golub89])
$$\begin{pmatrix}{\lambda}_{1}& *& *& \cdots & *& *\\ 0& {\lambda}_{2}& *& \cdots & *& *\\ 0& 0& {\lambda}_{3}& \cdots & *& *\\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0& 0& 0& \cdots & {\lambda}_{n-1}& *\\ 0& 0& 0& \cdots & 0& {\lambda}_{n}\end{pmatrix}$$ 
for the Hessenberg and
$$\begin{pmatrix}{\lambda}_{1}& 0& 0& \cdots & 0& 0\\ 0& {\lambda}_{2}& 0& \cdots & 0& 0\\ 0& 0& {\lambda}_{3}& \cdots & 0& 0\\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0& 0& 0& \cdots & {\lambda}_{n-1}& 0\\ 0& 0& 0& \cdots & 0& {\lambda}_{n}\end{pmatrix}$$ 
for the tridiagonal.
References
[DAB] Originally from The Data Analysis Briefbook (http://rkb.home.cern.ch/rkb/titleA.html)
[Golub89] Gene H. Golub and Charles F. Van Loan: Matrix Computations, 2nd edn., The Johns Hopkins University Press, 1989.
Title: eigenvalue problem
Canonical name: EigenvalueProblem
Date of creation: 2013-03-22 12:11:30
Last modified on: 2013-03-22 12:11:30
Owner: archibal (4430)
Last modified by: archibal (4430)
Numerical id: 22
Author: archibal (4430)
Entry type: Definition
Classification: msc 65F15
Classification: msc 65-00
Classification: msc 15A18
Classification: msc 15-00
Related topic: Eigenvalue
Related topic: Eigenvector
Related topic: SimilarMatrix
Related topic: SolvingTheWaveEquationByDBernoulli
Related topic: TimeDependentExampleOfHeatEquation