decomposition of orthogonal operators as rotations and reflections
Theorem 1.
Let $V$ be an $n$-dimensional real inner product space. Then every orthogonal operator $T$ on $V$ can be decomposed into a series of two-dimensional rotations and one-dimensional reflections on mutually orthogonal subspaces of $V$.
We first explain the general idea behind the proof. Consider a rotation $R$ by an angle $\theta$ in a two-dimensional space. From its orthonormal basis representation
$$\begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix},$$
we find that the characteristic polynomial of $R$ is $t^2 - 2t\cos\theta + 1$, with (complex) roots $e^{\pm i\theta}$. (Not surprising, since multiplication by $e^{i\theta}$ in the complex plane is rotation by $\theta$.) Thus, given the characteristic polynomial of $R$, we can almost recover its rotation angle. (Because the complex roots occur in conjugate pairs, the information about the sign of $\theta$ is lost. This too is not a surprise, because the sign of $\theta$, i.e. whether the rotation is clockwise or counterclockwise, depends on the orientation of the basis vectors.)
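This relationship between the characteristic polynomial and the rotation angle can be checked numerically. The following sketch (not part of the original entry) computes the roots of $t^2 - 2t\cos\theta + 1$ directly from the quadratic formula and compares them with $e^{\pm i\theta}$; the angle $\theta = 0.7$ is an arbitrary choice for illustration.

```python
import cmath
import math

def rotation_char_poly_roots(theta):
    """Roots of t^2 - 2*cos(theta)*t + 1, the characteristic
    polynomial of a 2-D rotation by theta."""
    b = -2 * math.cos(theta)      # coefficient of t
    disc = cmath.sqrt(b * b - 4)  # discriminant of t^2 + b*t + 1
    return ((-b + disc) / 2, (-b - disc) / 2)

theta = 0.7  # arbitrary test angle
r1, r2 = rotation_char_poly_roots(theta)

# The roots coincide with e^{i theta} and e^{-i theta}.
print(abs(r1 - cmath.exp(1j * theta)) < 1e-12)
print(abs(r2 - cmath.exp(-1j * theta)) < 1e-12)
```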
In the case of a reflection $S$, the eigenvalues of $S$ are $1$ and $-1$; so again, the characteristic polynomial of $S$ provides some information about $S$.
So in $n$ dimensions, we are also going to look at the complex eigenvalues and eigenvectors of $T$ to recover information about the rotations represented by $T$.
But there is one technical point: $T$ is a transformation on a real vector space, so it does not really have "complex eigenvalues and eigenvectors". To make this concept rigorous, we must consider the complexification $T^{\mathbb{C}}$ of $T$, the operator defined by $T^{\mathbb{C}}(x+iy) = Tx + iTy$ on the vector space $V^{\mathbb{C}}$ consisting of elements of the form $x+iy$, for $x, y \in V$. (For more details, see the entry on complexification (http://planetmath.org/ComplexificationOfVectorSpace).)
Lemma 1.
For any linear operator $T\colon V \to V$, there exists a one- or two-dimensional subspace $W$ which is invariant under $T$.
Proof.
Consider $T^{\mathbb{C}}$ and its characteristic polynomial. By the Fundamental Theorem of Algebra, the characteristic polynomial has a complex root $\lambda = \alpha + i\beta$. Then there is an eigenvector $x + iy \ne 0$ with eigenvalue $\lambda$. We have
$$Tx + iTy = T^{\mathbb{C}}(x+iy) = \lambda(x+iy) = (\alpha x - \beta y) + i(\beta x + \alpha y).$$
Equating real and imaginary components, we see that
$$Tx = \alpha x - \beta y \in W, \qquad Ty = \beta x + \alpha y \in W,$$
where $W = \operatorname{span}\{x, y\}$. $W$ is two-dimensional if $x$ and $y$ are linearly independent; otherwise it is one-dimensional. (In fact, the space $W$ constructed is two-dimensional if and only if the eigenvalue $\lambda$ is not purely real; compare with remark (i) after the proof of Theorem 1.) And we have $T(W) \subseteq W$ as claimed. ∎

Actually, Lemma 1 has more uses than just proving Theorem 1. For example, if $\dot{x} = Ax$ is a linear differential equation, and the constant coefficient matrix $A$ has only simple eigenvalues, then it is a consequence of Lemma 1 that the differential equation decomposes into a series of disjoint one-variable and two-variable equations. The solutions are then readily understood: they are always of the form of an exponential multiplied by a sinusoid, and linear combinations thereof. The sinusoids are present whenever there are non-real eigenvalues.
Proof of Theorem 1.
We recursively factor $T$; formally, the proof will be by induction on the dimension $n$.
The case $n = 1$ is trivial. We have $\det T = \pm 1$; if $\det T = 1$ then $T$ is the identity; otherwise $Tx = -x$ is a reflection.
For larger $n$, by Lemma 1 there exists a $T$-invariant subspace $W$ of dimension one or two. The orthogonal complement $W^{\perp}$ of $W$ is $T$-invariant as well, because for all $x \in W^{\perp}$ and $y \in W$,
$$\langle Tx, y \rangle = \langle x, T^{-1}y \rangle = 0,$$
the first equality because $T$ preserves the inner product, and the second because $T^{-1}(W) = W$.
Let $T_W$ be the operator that acts as $T$ on $W$ and as the identity on $W^{\perp}$. Similarly, let $T_{W^{\perp}}$ be the operator that acts as $T$ on $W^{\perp}$ and as the identity on $W$. Then $T = T_W \circ T_{W^{\perp}}$. $T_W$ restricted to $W$ is orthogonal, and since $W$ is one- or two-dimensional, $T_W$ must therefore be a rotation or a reflection (or the identity) in a line or plane.
$T_{W^{\perp}}$ restricted to $W^{\perp}$ is also orthogonal. $W^{\perp}$ has dimension less than $n$, so by the induction hypothesis we can continue to factor it into operators acting on mutually orthogonal subspaces of $W^{\perp}$. These subspaces will of course also be orthogonal to $W$.
The proof is now complete, except that we did not rule out $T_W$ being a reflection even when $W$ is two-dimensional. But of course, if $T_W$ is a reflection in two dimensions, then it can be factored as a reflection on a one-dimensional subspace $\operatorname{span}\{v\}$ composed with the identity on $\operatorname{span}\{v\}^{\perp}$. ∎
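The factorization $T = T_W \circ T_{W^{\perp}}$ used in the proof can be illustrated with a concrete matrix example (a minimal sketch, not from the original entry, with hypothetical angle $\theta = 0.5$): in $\mathbb{R}^3$, take $W = \operatorname{span}\{e_1, e_2\}$, let $T_W$ rotate in that plane, and let $T_{W^{\perp}}$ reflect the $e_3$ axis. Because the two factors act on orthogonal subspaces, they commute.

```python
import math

def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

theta = 0.5  # hypothetical rotation angle
c, s = math.cos(theta), math.sin(theta)

# T_W: rotation by theta in W = span{e1, e2}, identity on W^perp
T_W = [[c, -s, 0.0],
       [s,  c, 0.0],
       [0.0, 0.0, 1.0]]

# T_Wperp: reflection on W^perp = span{e3}, identity on W
T_Wperp = [[1.0, 0.0, 0.0],
           [0.0, 1.0, 0.0],
           [0.0, 0.0, -1.0]]

# Factors acting on orthogonal subspaces commute:
left = matmul(T_W, T_Wperp)
right = matmul(T_Wperp, T_W)
print(all(abs(left[i][j] - right[i][j]) < 1e-12
          for i in range(3) for j in range(3)))
```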
Actually, in the third paragraph of the proof, we implicitly assumed that every orthogonal operator on a two-dimensional space is either a rotation or a reflection. This is a well-known fact, but it does need to be formally proven:
Theorem 2.
If $V$ is a twodimensional real inner product space, then every orthogonal operator $T$ is either a rotation or reflection.
Proof.
Fix any orthonormal basis $\{e_1, e_2\}$ for $V$. Since $T$ is orthogonal, $\lVert Te_1 \rVert = \lVert e_1 \rVert = 1$, i.e. $Te_1$ is a unit vector in the plane, so there exists an angle $\theta$ (unique modulo $2\pi$) such that $Te_1 = \cos\theta\, e_1 + \sin\theta\, e_2$. Similarly, $Te_2$ is a unit vector, and since $e_1$ and $e_2$ are orthogonal, so are $Te_1$ and $Te_2$. It follows that $Te_2$ must be either $-\sin\theta\, e_1 + \cos\theta\, e_2$ or $\sin\theta\, e_1 - \cos\theta\, e_2$. Putting all this together, the matrix for $T$ is:
$$\begin{bmatrix} \cos\theta & \mp\sin\theta \\ \sin\theta & \pm\cos\theta \end{bmatrix}.$$
The first solution for $Te_2$ corresponds to a rotation matrix (and $\det T = 1$); the second solution for $Te_2$ corresponds to a reflection matrix (http://planetmath.org/DerivationOf2DReflectionMatrix) (and $\det T = -1$). ∎
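The determinant criterion from the proof translates directly into a small classifier for $2 \times 2$ orthogonal matrices. The sketch below (an illustration, with a hypothetical angle and function name) checks that the columns are orthonormal and then reads off rotation vs. reflection from the sign of the determinant.

```python
import math

def classify_2d_orthogonal(M, tol=1e-9):
    """Classify a 2x2 orthogonal matrix as a rotation (det = +1)
    or a reflection (det = -1), per Theorem 2."""
    # sanity check: first column is a unit vector, columns are orthogonal
    assert abs(M[0][0]**2 + M[1][0]**2 - 1) < tol
    assert abs(M[0][0]*M[0][1] + M[1][0]*M[1][1]) < tol
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return "rotation" if abs(det - 1) < tol else "reflection"

theta = 0.9  # arbitrary test angle
c, s = math.cos(theta), math.sin(theta)
print(classify_2d_orthogonal([[c, -s], [s, c]]))   # first solution for Te2
print(classify_2d_orthogonal([[c, s], [s, -c]]))   # second solution for Te2
```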
1 Remarks

i.
Observe that the two equations for $Tx$ and $Ty$ appearing in the proof of Lemma 1 do specify a rotation by the angle $\pm\theta$ when $\lambda = \alpha + i\beta = e^{i\theta}$ and $x, y$ are orthonormal. So by examining the complex eigenvalues and eigenvectors of $T^{\mathbb{C}}$, we can reconstruct the rotation.
This construction can be used to give an alternate proof of Theorem 1; we sketch it below:
The complexified space $V^{\mathbb{C}}$ has an inner product structure inherited from $V$ (again, see complexification (http://planetmath.org/ComplexificationOfVectorSpace) for details). Let $U = T^{\mathbb{C}}$. Since $T$ is orthogonal, $U$ is unitary, and hence normal ($U^{*}U = UU^{*}$). There exists an orthonormal basis of eigenvectors for $U$ (the Schur decomposition (http://planetmath.org/CorollaryOfSchurDecomposition)). Let $z = x + iy$ be any one of these eigenvectors with a complex, non-real eigenvalue $\lambda$. Then $\lvert\lambda\rvert = 1$ because $U$ is unitary, and $\overline{z} = x - iy$ is another eigenvector, with eigenvalue $\overline{\lambda}$. Using the $V^{\mathbb{C}}$ inner product formula, the vectors $x/\sqrt{2}$ and $y/\sqrt{2}$ can be shown to be orthonormal. Then the proof of Lemma 1 shows that $T$ acts as a rotation in the plane $\operatorname{span}\{x, y\}$. All such planes obtained will be orthogonal to each other.
To summarize, the orthogonal subplanes of rotation are found by grouping conjugate pairs of complex eigenvectors. If one actually needs to determine the planes of rotation explicitly (for dimensions $n \ge 4$), it is probably better to work directly with the complexified matrix than to factor the matrix over the reals.

ii.
The decomposition of $T$ is not unique. However, it is always possible to obtain a decomposition which contains at most one reflection, because any two one-dimensional reflections can always be combined into a two-dimensional rotation. In any case, the parity of the number of reflections in a decomposition of $T$ is invariant, because $\det T = (-1)^{r}$, where $r$ is the number of reflections.
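The claim that two reflections combine into a rotation can be verified numerically in the plane. In this sketch (with hypothetical line angles $a$ and $b$), reflecting across lines at angles $a$ and $b$ composes to the rotation by $2(a-b)$, with determinant $+1$.

```python
import math

def matmul2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def reflection(phi):
    """Reflection across the line through the origin at angle phi."""
    c, s = math.cos(2 * phi), math.sin(2 * phi)
    return [[c, s], [s, -c]]

a, b = 1.1, 0.3  # hypothetical line angles
R = matmul2(reflection(a), reflection(b))

# The product is the rotation by 2*(a - b): determinant +1, no mirror left.
det = R[0][0] * R[1][1] - R[0][1] * R[1][0]
print(abs(det - 1) < 1e-12)
print(abs(R[0][0] - math.cos(2 * (a - b))) < 1e-12)
```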

iii.
In the decomposition, the component rotations and reflections all commute because they act on orthogonal subspaces.

iv.
If we take a basis for $V$ describing the mutually orthogonal subspaces in Theorem 1, the matrix for $T$ looks like:
$$\begin{bmatrix}
\cos\theta_1 & -\sin\theta_1 \\
\sin\theta_1 & \cos\theta_1 \\
 & & \ddots \\
 & & & \cos\theta_k & -\sin\theta_k \\
 & & & \sin\theta_k & \cos\theta_k \\
 & & & & & \pm 1 \\
 & & & & & & +1 \\
 & & & & & & & \ddots \\
 & & & & & & & & +1
\end{bmatrix}$$
where $\theta_1, \dots, \theta_k$ are the rotation angles, one for each orthogonal subplane, and the $\pm 1$ in the middle is the reflective component (if present). The rest of the entries in the matrix are zero.
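The block-diagonal form above is easy to assemble and check programmatically. The following sketch (hypothetical helper name and test angles) builds the matrix from a list of rotation angles, an optional $-1$ reflective entry, and trailing $+1$ entries, then verifies $M^{\mathsf T} M = I$.

```python
import math

def block_diag_orthogonal(angles, reflect, extra_ones):
    """Assemble the block-diagonal matrix of remark (iv): one 2x2
    rotation block per angle, an optional -1 (the reflection), and
    trailing +1 entries."""
    n = 2 * len(angles) + (1 if reflect else 0) + extra_ones
    M = [[0.0] * n for _ in range(n)]
    i = 0
    for t in angles:
        c, s = math.cos(t), math.sin(t)
        M[i][i], M[i][i + 1] = c, -s
        M[i + 1][i], M[i + 1][i + 1] = s, c
        i += 2
    if reflect:
        M[i][i] = -1.0
        i += 1
    for _ in range(extra_ones):
        M[i][i] = 1.0
        i += 1
    return M

M = block_diag_orthogonal([0.4, 2.0], reflect=True, extra_ones=1)
n = len(M)
# Check M^T M = I, i.e. M is orthogonal.
print(all(abs(sum(M[k][i] * M[k][j] for k in range(n)) - (i == j)) < 1e-12
          for i in range(n) for j in range(n)))
```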

v.
Sometimes any orthogonal operator $T$ with $\det T = 1$ is called a rotation, even though strictly speaking it is actually a series of rotations (each on different "axes"). Similarly, when $\det T = -1$, $T$ may be called a reflection, even though again it is not always a single (one-dimensional) reflection.
In this language, a rotation composed with a rotation will always be a rotation; a rotation composed with a reflection is a reflection; and two reflections composed together will always be a rotation.

vi.
In $\mathbb{R}^3$, an orthogonal operator with positive determinant is necessarily a rotation about one axis, which is left fixed (except when the operator is the identity). This follows simply because there is no way to fit more than one orthogonal subplane into three-dimensional space.
A composition of two rotations in $\mathbb{R}^3$ is then again a rotation. On the other hand, it is not at all obvious what relation the axis of rotation of the composition bears to the original two axes of rotation.
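Although the axis of the composite rotation has no simple relation to the original axes, it can be extracted numerically. In this sketch (hypothetical angles $0.8$ and $0.5$), the antisymmetric part of a rotation matrix $R$ yields a vector parallel to the fixed axis (valid when the rotation angle is neither $0$ nor $\pi$), and we verify that $R$ leaves that vector fixed.

```python
import math

def matmul3(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rot_z(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def rot_x(t):
    c, s = math.cos(t), math.sin(t)
    return [[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]]

# Compose a rotation about the z-axis with one about the x-axis.
R = matmul3(rot_x(0.8), rot_z(0.5))

# Axis from the antisymmetric part of R (proportional to 2*sin(angle)*axis):
axis = (R[2][1] - R[1][2], R[0][2] - R[2][0], R[1][0] - R[0][1])

# Verify the axis is fixed: R applied to axis returns axis.
Rv = [sum(R[i][j] * axis[j] for j in range(3)) for i in range(3)]
print(all(abs(Rv[i] - axis[i]) < 1e-12 for i in range(3)))
```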
For an explicit formula for a rotation matrix in ${\mathbb{R}}^{3}$ that does not require manual calculation of the basis vectors for the rotation subplane, see Rodrigues’ rotation formula.

vii.
In $\mathbb{R}^n$, reflections can be carried out by first embedding $\mathbb{R}^n$ into $\mathbb{R}^{n+1}$ and then rotating $\mathbb{R}^{n+1}$. (Here, the words "rotation" and "reflection" are taken in the extended sense of (v).) For example, in the plane, a right hand can be rotated in $\mathbb{R}^3$ into a left hand.
To be specific, suppose we embed $\mathbb{R}^n$ in $\mathbb{R}^{n+1}$ as the first $n$ coordinates. Then we gain an extra degree of freedom in the last coordinate of $\mathbb{R}^{n+1}$ (with coordinate vector $e_{n+1}$). Given an orthogonal operator $T\colon \mathbb{R}^n \to \mathbb{R}^n$ with $\det T = -1$, we can extend it to an operator $T'\colon \mathbb{R}^{n+1} \to \mathbb{R}^{n+1}$ by having it act as $T$ on the first $n$ coordinates, and setting $T'(e_{n+1}) = -e_{n+1}$. Since $\det T' = -\det T = 1$, our new $T'$ will be a rotation (the extra angle of rotation being $\pi$) that reflects sets lying in the $\mathbb{R}^n$ plane.
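The extension step can be sketched concretely for $n = 2$ (hypothetical helper names; the reflection angle $\phi$ is arbitrary): a planar reflection with $\det = -1$ is extended by sending $e_3 \mapsto -e_3$, and the result has determinant $+1$, i.e. it is a rotation of $\mathbb{R}^3$.

```python
import math

def extend_with_flip(T2):
    """Extend a 2x2 orthogonal matrix T to 3x3 by sending e3 to -e3.
    If det T = -1 (a reflection), the extension has det +1: a rotation."""
    return [[T2[0][0], T2[0][1], 0.0],
            [T2[1][0], T2[1][1], 0.0],
            [0.0, 0.0, -1.0]]

def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

# A reflection in the plane (det = -1), across the line at angle phi.
phi = 0.6
S = [[math.cos(2 * phi), math.sin(2 * phi)],
     [math.sin(2 * phi), -math.cos(2 * phi)]]

T3 = extend_with_flip(S)
print(abs(det3(T3) - 1.0) < 1e-12)  # the extension is a rotation
```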
Title  decomposition of orthogonal operators as rotations and reflections 
Canonical name  DecompositionOfOrthogonalOperatorsAsRotationsAndReflections 
Date of creation  20130322 15:24:11 
Last modified on  20130322 15:24:11 
Owner  stevecheng (10074) 
Last modified by  stevecheng (10074) 
Numerical id  15 
Author  stevecheng (10074) 
Entry type  Theorem 
Classification  msc 15A04 
Related topic  RotationMatrix 
Related topic  OrthogonalMatrices 
Related topic  DimensionOfTheSpecialOrthogonalGroup 
Related topic  RodriguesRotationFormula 
Related topic  DerivationOfRotationMatrixUsingPolarCoordinates 
Related topic  DerivationOf2DReflectionMatrix 