decomposition of orthogonal operators as rotations and reflections
Theorem 1.
Let V be an n-dimensional real inner product space (0 < n < ∞). Then every orthogonal
operator T on V
can be decomposed into a series of two-dimensional rotations
and one-dimensional reflections
on mutually orthogonal subspaces of V.
We first explain the general idea behind the proof. Consider a rotation R of angle θ in a two-dimensional space. From its orthonormal basis representation
$\begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix},$
we find that the characteristic polynomial of R is t^2 − 2t·cosθ + 1,
with (complex) roots e^{±iθ}.
(Not surprising, since multiplication by e^{iθ} in the complex plane is rotation by θ.)
Thus, given the characteristic polynomial of R, we can almost recover its rotation angle. (Because the complex roots occur in conjugate pairs, the information about the sign of θ is lost. This, too, is not a surprise, because the sign of θ, i.e. whether the rotation is clockwise or counterclockwise, depends on the orientation of the basis vectors.)
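To make this concrete, here is a minimal NumPy check (the helper name rotation2d is ours, not from the text): the eigenvalues of the rotation matrix are e^{±iθ}, and their real part recovers cosθ, hence θ up to sign.

```python
import numpy as np

def rotation2d(theta):
    """Matrix of a rotation by theta in an orthonormal basis."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

theta = 0.7
R = rotation2d(theta)
lam = np.linalg.eigvals(R)   # the conjugate pair e^{+i theta}, e^{-i theta}
assert np.allclose(np.sort_complex(lam),
                   np.sort_complex(np.array([np.exp(1j*theta), np.exp(-1j*theta)])))
# The angle is recovered only up to sign: cos(theta) = Re(lambda).
assert np.isclose(np.arccos(lam[0].real), theta)
```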
In the case of a reflection S, the eigenvalues of S are -1 and 1; so again, the characteristic polynomial of S will provide some information on S.
So in n dimensions, we are also going to look at the complex eigenvalues and eigenvectors of T to recover information about the rotations represented by T.
But there is one technical point — T is a transformation on a real vector
space, so it does not really have “complex eigenvalues and eigenvectors”.
To make this concept rigorous, we must consider the complexification Tℂ of T,
the operator defined by Tℂ(x+iy)=Tx+iTy
in the vector space Vℂ consisting of elements of the form x+iy, for x,y∈V.
(For more details, see the entry on complexification (http://planetmath.org/ComplexificationOfVectorSpace).)
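Concretely, NumPy realizes this complexification automatically when a real matrix multiplies a complex vector (a small sanity check, not from the original entry):

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((3, 3))      # a real operator on V = R^3
x = rng.standard_normal(3)
y = rng.standard_normal(3)

# T_C(x + iy) = Tx + iTy: real matrices act componentwise on
# the real and imaginary parts of a complex vector.
assert np.allclose(T @ (x + 1j*y), T @ x + 1j*(T @ y))
```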
Lemma 1.
For any linear operator T:V→V, there exists a one- or two-dimensional subspace W which is invariant
under T.
Proof.
Consider Tℂ and its characteristic polynomial.
By the Fundamental Theorem of Algebra, the characteristic polynomial has a complex root λ=α+iβ. Then there is an eigenvector x+iy≠0 with eigenvalue λ. We have
Tx + iTy = Tℂ(x+iy) = λ(x+iy) = (αx − βy) + i(βx + αy).
Equating real and imaginary components, we see that
Tx = αx − βy ∈ W
Ty = βx + αy ∈ W,
where W = span{x, y}. W is two-dimensional if x and y are linearly independent; otherwise it is one-dimensional. (In fact, the space W constructed is two-dimensional if and only if the eigenvalue λ is not real; compare with remark (i) after the proof of Theorem 1.) And we have T(W) ⊆ W as claimed.
∎
Actually, Lemma 1 has more uses than just proving Theorem 1. For example, if ẋ = Ax is a linear differential equation, and the constant coefficient matrix A has only simple eigenvalues, then it is a consequence of Lemma 1 that the differential equation decomposes into a series of disjoint one-variable and two-variable equations. The solutions are then readily understood: they are always of the form of an exponential multiplied by a sinusoid, and linear combinations thereof. The sinusoids will be present whenever there are non-real eigenvalues.
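Before moving on, the construction in the proof of Lemma 1 is easy to trace numerically; the sketch below (a NumPy illustration, with variable names of our choosing) extracts the invariant plane W from a complex eigenvector of a real matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.standard_normal((4, 4))            # a real operator; generically has non-real eigenvalues

lam, vecs = np.linalg.eig(T)               # eigen-data of the complexification T_C
j = int(np.argmax(np.abs(lam.imag)))       # pick lambda = alpha + i beta with beta != 0
x, y = vecs[:, j].real, vecs[:, j].imag    # basis for W = span{x, y}
alpha, beta = lam[j].real, lam[j].imag

# Exactly the two equations from the proof: T maps W into W.
assert np.allclose(T @ x, alpha*x - beta*y)
assert np.allclose(T @ y, beta*x + alpha*y)
```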
Proof of Theorem 1.
We recursively factor T; formally, the proof will be by induction on the dimension n.
The case n=1 is trivial. We have detT=±1; if detT=1 then T is the identity; otherwise Tx=-x is a reflection.
For larger n, by Lemma 1 there exists a one- or two-dimensional T-invariant subspace W. The orthogonal complement W⊥ of W is T-invariant also. (Note first that T(W) = W, since T is injective and W is finite-dimensional; hence T⁻¹w ∈ W for every w ∈ W.) For all u ∈ W⊥ and w ∈ W,

⟨Tu, w⟩ = ⟨Tu, T(T⁻¹w)⟩ = ⟨u, T⁻¹w⟩   (because T preserves inner products)
        = 0                           (because T⁻¹w ∈ W and u ∈ W⊥),

so Tu ∈ W⊥.
Let T₁ be the operator that acts as T on W and as the identity on W⊥. Similarly, let T₂ be the operator that acts as T on W⊥ and as the identity on W. Then T = T₁T₂. T restricted to W is orthogonal, and since W is one- or two-dimensional, T₁ must therefore be a rotation or reflection (or the identity) in a line or plane.
T restricted to W⊥ is also orthogonal. W⊥ has dimension n−1 or n−2, so by the induction hypothesis we can continue to factor T₂ into rotations and reflections acting on subspaces of W⊥ that are mutually orthogonal. These subspaces will of course also be orthogonal to W.
The proof is now complete, except that we did not rule out T₁ being a reflection even when W is two-dimensional. But of course, if T₁ is a reflection in two dimensions, then it can be factored as a reflection on a one-dimensional subspace of W composed with the identity on the orthogonal complement of that subspace in W.
∎
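Theorem 1 can also be observed numerically: for an orthogonal matrix, the real Schur form is block diagonal, with 2×2 rotation blocks and diagonal entries ±1. A minimal sketch, using SciPy's schur and ortho_group:

```python
import numpy as np
from scipy.linalg import schur
from scipy.stats import ortho_group

T = ortho_group.rvs(dim=5, random_state=2)   # a random 5x5 orthogonal matrix
S, Q = schur(T, output='real')               # T = Q S Q^T, with Q orthogonal

# Since T is orthogonal (hence normal), S is block diagonal:
# 2x2 rotation blocks for conjugate eigenvalue pairs, and 1x1 blocks of +-1.
# The columns of Q span the mutually orthogonal subspaces of Theorem 1.
print(np.round(S, 3))
assert np.allclose(Q @ S @ Q.T, T)
```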
Actually, in the proof of Theorem 1, we implicitly assumed that every orthogonal operator on a two-dimensional space is either a rotation or a reflection. This is a well-known fact, but it does need to be formally proven:
Theorem 2.
If W is a two-dimensional real inner product space, then every orthogonal operator T on W is either a rotation or a reflection.
Proof.
Fix any orthonormal basis e₁, e₂ for W.
Since T is orthogonal, ∥Te₁∥ = ∥e₁∥ = 1, i.e. Te₁ is a unit vector in the plane, so there exists an angle θ (unique modulo 2π)
such that Te₁ = cosθ·e₁ + sinθ·e₂.
Similarly, Te₂ is a unit vector, but since e₁ and e₂ are orthogonal, so
are Te₁ and Te₂. Writing Te₂ = cosφ·e₁ + sinφ·e₂, it is found that the solution to ⟨Te₁, Te₂⟩ = cos(θ − φ) = 0 must be either φ = θ + π/2 or φ = θ − π/2. Putting all this together, the matrix for T is either

$\begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}$ or $\begin{bmatrix} \cos\theta & \sin\theta \\ \sin\theta & -\cos\theta \end{bmatrix}$.

The first solution for φ corresponds to a rotation matrix (and detT = 1);
the second solution for φ corresponds to a
reflection matrix (http://planetmath.org/DerivationOf2DReflectionMatrix) (and detT = −1).
∎
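Computationally, the case split in this proof amounts to checking the sign of the determinant; a minimal sketch (the helper name classify2d is ours):

```python
import numpy as np

def classify2d(T):
    """Classify a 2x2 orthogonal matrix as in Theorem 2 (sketch)."""
    assert np.allclose(T.T @ T, np.eye(2)), "T must be orthogonal"
    theta = np.arctan2(T[1, 0], T[0, 0])    # Te1 = (cos theta, sin theta)
    kind = 'rotation' if np.linalg.det(T) > 0 else 'reflection'
    return kind, theta

print(classify2d(np.array([[0.0, -1.0], [1.0, 0.0]])))   # ('rotation', pi/2)
print(classify2d(np.array([[1.0, 0.0], [0.0, -1.0]])))   # ('reflection', 0.0)
```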
1 Remarks
i. Observe that the two equations for Tx and Ty appearing in the proof of Lemma 1 do specify a rotation of angle θ when x and y are orthonormal and λ = cosθ + i·sinθ (up to the sign ambiguity noted earlier). So by examining the complex eigenvalues and eigenvectors of T, we can reconstruct the rotation.
This construction can be used to give an alternate proof of Theorem 1; we sketch it below:
The complexified space Vℂ has an inner product structure inherited from V (again, see complexification (http://planetmath.org/ComplexificationOfVectorSpace) for details). Let U = Tℂ. Since T is orthogonal, U is unitary, and hence normal (U*U = UU*). There exists an orthonormal basis of eigenvectors for U (by the Schur decomposition (http://planetmath.org/CorollaryOfSchurDecomposition)). Let x + iy be any one of these eigenvectors with a complex, non-real eigenvalue λ. Then |λ| = 1 because U is unitary, and x − iy is another eigenvector, with eigenvalue λ̄. Using the inner product formula on Vℂ, the vectors √2·x and √2·y can be shown to be orthonormal. Then the proof of Lemma 1 shows that T acts as a rotation in the plane W = span{x, y}. All such planes obtained will be orthogonal to each other.
To summarize, the orthogonal subplanes of rotation are found by grouping conjugate pairs of complex eigenvectors. If one actually needs to determine the planes of rotation explicitly (for dimensions greater than three), then it is probably better to work directly with the complexified matrix, rather than to factor the matrix over the reals.
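For example (a small sketch along the lines of the grouping just described; ortho_group is from SciPy), the rotation angles can be read off as the arguments of the complex eigenvalues:

```python
import numpy as np
from scipy.stats import ortho_group

T = ortho_group.rvs(dim=6, random_state=3)
lam = np.linalg.eigvals(T)
assert np.allclose(np.abs(lam), 1.0)   # T_C is unitary: eigenvalues on the unit circle

# One rotation angle per conjugate pair e^{+-i theta}; take the
# representatives in the upper half plane.
angles = np.angle(lam[lam.imag > 1e-9])
print(np.sort(angles))                 # the theta_j of the rotation subplanes
```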
ii. The decomposition of T is not unique. However, it is always possible to obtain a decomposition which contains at most one reflection, because any two one-dimensional reflections can always be combined into a two-dimensional rotation. In any case, the parity of the number of reflections in a decomposition of T is invariant, because that parity is determined by detT = (−1)^k, where k is the number of reflections.
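For instance, composing reflections across the two coordinate axes of a plane yields the rotation by π:

$\begin{bmatrix} -1 & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} = \begin{bmatrix} -1 & 0 \\ 0 & -1 \end{bmatrix} = \begin{bmatrix} \cos\pi & -\sin\pi \\ \sin\pi & \cos\pi \end{bmatrix}.$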
iii. In the decomposition, the component rotations and reflections all commute, because they act on mutually orthogonal subspaces.
iv. If we take a basis for V describing the mutually orthogonal subspaces in Theorem 1, the matrix for T looks like:

$\begin{bmatrix} \cos\theta_1 & -\sin\theta_1 & & & & \\ \sin\theta_1 & \cos\theta_1 & & & & \\ & & \ddots & & & \\ & & & -1 & & \\ & & & & 1 & \\ & & & & & \ddots \end{bmatrix}$

where θ₁, θ₂, … are the rotation angles, one for each orthogonal subplane, and the −1 in the middle is the reflective component (if present). The rest of the entries in the matrix are zero.
v. Sometimes any orthogonal operator T with detT = 1 is called a rotation, even though strictly speaking it is actually a series of rotations (each on different “axes”). Similarly, when detT = −1, T may be called a reflection, even though again it is not always a single (one-dimensional) reflection.
In this language, a rotation composed with a rotation will always be a rotation; a rotation composed with a reflection is a reflection; and two reflections composed together will always be a rotation.
vi. In ℝ³, an orthogonal operator with positive determinant is necessarily a rotation about one axis which is left fixed (except when the operator is the identity). This follows simply because there is no way to fit more than one orthogonal subplane into three-dimensional space.
A composition of two rotations in ℝ³ would then be a rotation too. On the other hand, it is not at all obvious what relation the axis of rotation of the composition has with the original two axes of rotation.
For an explicit formula for a rotation matrix in ℝ³ that does not require manual calculation of the basis vectors for the rotation subplane, see Rodrigues’ rotation formula.
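A direct transcription of that formula, R = I + (sinθ)K + (1 − cosθ)K² with K the cross-product matrix of the unit axis (a sketch; consult the linked entry for the derivation):

```python
import numpy as np

def rodrigues(axis, theta):
    """Rotation by theta about the line spanned by `axis`, via
    R = I + sin(theta) K + (1 - cos(theta)) K^2."""
    k = np.asarray(axis, dtype=float)
    k = k / np.linalg.norm(k)
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])     # K v = k x v (cross product)
    return np.eye(3) + np.sin(theta)*K + (1 - np.cos(theta))*(K @ K)

R = rodrigues([0, 0, 1], np.pi/2)          # quarter turn about the z-axis
assert np.allclose(R @ [1, 0, 0], [0, 1, 0])
assert np.isclose(np.linalg.det(R), 1.0)   # det +1: a rotation, as in remark (vi)
```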
vii. In ℝⁿ, reflections can be carried out by first embedding ℝⁿ into ℝⁿ⁺¹ and then rotating within ℝⁿ⁺¹. (Here, the words “rotation” and “reflection” are taken in their extended sense of (v).) For example, a right hand in the plane ℝ² can be rotated in ℝ³ into a left hand.
To be specific, suppose we embed ℝⁿ in ℝⁿ⁺¹ as the first n coordinates. Then we gain an extra degree of freedom in the last coordinate of ℝⁿ⁺¹ (with coordinate vector e_{n+1}). Given an orthogonal operator T on ℝⁿ with detT = −1, we can extend it to an operator T′ on ℝⁿ⁺¹ by having it act as T on the first n coordinates, and setting T′e_{n+1} = −e_{n+1}. Since detT′ = (detT)·(−1) = 1, our new T′ will be a rotation (the extra angle of rotation will be by π) that reflects sets lying in the original ℝⁿ.
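A numerical rendering of this construction (block_diag is from SciPy; the example reflection is our choice):

```python
import numpy as np
from scipy.linalg import block_diag

T = np.array([[1.0, 0.0], [0.0, -1.0]])    # a reflection in R^2: det T = -1
T_ext = block_diag(T, -1.0)                # act as T on R^2 and send e3 to -e3

assert np.isclose(np.linalg.det(T_ext), 1.0)        # det T' = -det T = +1
assert np.allclose(T_ext @ [1, 0, 0], [1, 0, 0])    # a rotation by pi about the e1 axis
```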