conjugate gradient algorithm
The conjugate gradient algorithm is used to solve the quadratic minimization problem:
$$\min_{x}\left(\frac{1}{2}{x}^{T}Qx-{b}^{T}x\right)$$ 
or equivalently to solve the linear system $Qx-b=0$, where $Q$ is a given $n\times n$ symmetric positive definite matrix and $b$ is a given $n$-vector.
The algorithm requires at most $n$ iterations, starting from an arbitrary initial guess ${x}_{0}$ (often ${x}_{0}=0$ is used). We will use the following notation:

•
$i$ — iteration number;

•
${x}_{i}$ — solution approximation;

•
${d}_{i}$ — search direction;

•
${r}_{i}$ — residual, which is defined as $b-Q{x}_{i}$.
Algorithm

1.
Initialization. Let ${x}_{0}=0$ (or another starting point). Let ${r}_{0}={d}_{0}=b-Q{x}_{0}$. (The initial search direction is set to minus the gradient of the quadratic function being minimized, evaluated at the starting point.)

2.
For $i=0$ to $n-1$ compute
$$\alpha =\frac{{r}_{i}^{T}{d}_{i}}{{d}_{i}^{T}Q{d}_{i}}$$ $${x}_{i+1}={x}_{i}+\alpha {d}_{i}$$ $${r}_{i+1}={r}_{i}-\alpha Q{d}_{i}$$ If ${r}_{i+1}=0$ or $i=n-1$, stop: the solution has been found. Otherwise, continue:
$$\beta =-\frac{{r}_{i+1}^{T}Q{d}_{i}}{{d}_{i}^{T}Q{d}_{i}}$$ $${d}_{i+1}={r}_{i+1}+\beta {d}_{i}$$ (In each iteration the solution estimate is set to the previous estimate plus a multiple of the previous search direction. The next search direction is then set to the residual, which equals minus the gradient, plus a multiple of the previous search direction.)
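The iteration above can be sketched in pure Python as follows (the helper names `dot`, `matvec`, and `conjugate_gradient` are illustrative, not from the original entry; a minimal sketch assuming $Q$ is given as a list of rows):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(Q, v):
    # matrix-vector product, Q given as a list of rows
    return [dot(row, v) for row in Q]

def conjugate_gradient(Q, b, tol=1e-12):
    n = len(b)
    x = [0.0] * n        # x_0 = 0
    r = list(b)          # r_0 = b - Q x_0 = b
    d = list(r)          # d_0 = r_0
    for i in range(n):
        Qd = matvec(Q, d)
        alpha = dot(r, d) / dot(d, Qd)
        x = [xi + alpha * di for xi, di in zip(x, d)]
        r = [ri - alpha * qi for ri, qi in zip(r, Qd)]
        if dot(r, r) < tol or i == n - 1:
            break        # residual is (numerically) zero, or n steps done
        beta = -dot(r, Qd) / dot(d, Qd)
        d = [ri + beta * di for ri, di in zip(r, d)]
    return x
```

For instance, `conjugate_gradient([[2.0, 1.0], [1.0, 2.0]], [3.0, 3.0])` returns the solution of that $2\times 2$ system, $(1, 1)$, in a single iteration.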
Discussion
The conjugate gradient method was developed in 1952 by Hestenes and Stiefel as an improvement to the steepest descent method. Whereas steepest descent only approaches the solution asymptotically, the conjugate gradient method finds the solution in at most $n$ iterations (assuming no roundoff error).
Why the name? The search directions ${d}_{i}$ are conjugate in the sense that ${d}_{j}^{T}Q{d}_{i}=0$ for $j\ne i$. In addition, these directions are computed from (but are not equal to) the gradient.
The conjugate gradient method has been generalized to the case where the function being minimized is only approximately quadratic. In that case the explicit formula for $\alpha$ given above is replaced by a line search procedure: various values of $\alpha$ are tried, and the value that leads to the smallest value of the objective is chosen. Well-known generalized c.g. methods include the Fletcher-Reeves method and the Polak-Ribière method.
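A sketch of the Fletcher-Reeves variant is given below. The backtracking line search with an Armijo-style sufficient-decrease test is one possible choice of line search, not prescribed by the original entry, and the function name and parameters are illustrative:

```python
def fletcher_reeves(f, grad, x, iters=100, tol=1e-12):
    g = grad(x)
    d = [-gi for gi in g]                      # initial direction: minus gradient
    for _ in range(iters):
        gg = sum(gi * gi for gi in g)
        if gg < tol:                           # gradient (numerically) zero: done
            break
        # Backtracking line search: halve alpha until a sufficient
        # decrease (Armijo condition) is achieved.
        slope = sum(gi * di for gi, di in zip(g, d))
        alpha, fx = 1.0, f(x)
        while f([xi + alpha * di for xi, di in zip(x, d)]) > fx + 1e-4 * alpha * slope:
            alpha *= 0.5
            if alpha < 1e-12:
                break
        x = [xi + alpha * di for xi, di in zip(x, d)]
        g_new = grad(x)
        # Fletcher-Reeves beta: ratio of squared gradient norms
        beta = sum(gi * gi for gi in g_new) / gg
        d = [-gi + beta * di for gi, di in zip(g_new, d)]
        g = g_new
    return x
```

On an exactly quadratic objective this reduces to behavior close to the algorithm above; on general smooth functions it is only guaranteed to make progress, not to terminate in $n$ steps.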
Example
Solve
$$\left(\begin{array}{cc}\hfill 1& \hfill 2\\ \hfill 2& \hfill 1\end{array}\right)x=\left(\begin{array}{c}\hfill 3\\ \hfill 0\end{array}\right)$$ 
We have
$${x}_{0}=\left(\begin{array}{c}\hfill 0\\ \hfill 0\end{array}\right)$$ 
then
$${x}_{1}=\left(\begin{array}{c}\hfill 3\\ \hfill 0\end{array}\right)$$ 
and finally
$${x}_{2}=\left(\begin{array}{c}\hfill -1\\ \hfill 2\end{array}\right)$$ 
which is the solution.
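The two iterations of the example can be reproduced numerically with the update formulas from the algorithm section; the script below also checks that the two search directions are $Q$-conjugate (a sanity check added here, not part of the original entry):

```python
Q = [[1.0, 2.0], [2.0, 1.0]]
b = [3.0, 0.0]

x = [0.0, 0.0]
r = b[:]                  # r_0 = b - Q x_0 = b
d = r[:]                  # d_0 = r_0
directions = []           # record d_0, d_1 to verify conjugacy

for i in range(2):
    Qd = [sum(q * dj for q, dj in zip(row, d)) for row in Q]
    dQd = sum(dj * qj for dj, qj in zip(d, Qd))
    alpha = sum(rj * dj for rj, dj in zip(r, d)) / dQd
    x = [xj + alpha * dj for xj, dj in zip(x, d)]
    r = [rj - alpha * qj for rj, qj in zip(r, Qd)]
    directions.append(d[:])
    beta = -sum(rj * qj for rj, qj in zip(r, Qd)) / dQd
    d = [rj + beta * dj for rj, dj in zip(r, d)]

# x is now [-1.0, 2.0], matching x_2 above
```

Tracing by hand: the first iteration gives $\alpha = 1$ and ${x}_{1}=(3,0)^{T}$; the second gives $\alpha = -1/3$ and ${x}_{2}=(-1,2)^{T}$, with ${d}_{0}^{T}Q{d}_{1}=0$ as required.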
References
Luenberger: Introduction to Linear and Nonlinear Programming, Addison-Wesley, 1973.
Jonathan Richard Shewchuk: An Introduction to the Conjugate Gradient Method Without the Agonizing Pain, August 1994. http://www2.cs.cmu.edu/~jrs/jrspapers.html [A detailed derivation of the method from first principles.]
Press et al.: Numerical Recipes in C, Cambridge University Press, 1995 [Section 10.6 contains an implementation of the generalized conjugate gradient method of Polak and Ribière.]
Title: conjugate gradient algorithm
Canonical name: ConjugateGradientAlgorithm
Date of creation: 2013-03-22 14:58:54
Last modified on: 2013-03-22 14:58:54
Owner: aplant (12431)
Last modified by: aplant (12431)
Numerical id: 16
Author: aplant (12431)
Entry type: Algorithm
Classification: msc 15A06
Classification: msc 90C20
Synonym: method of conjugate gradients