decomposition of orthogonal operators as rotations and reflections
Theorem 1.
Let $V$ be an $n$-dimensional real inner product space ($1 \le n < \infty$). Then every orthogonal operator $T$ on $V$ can be decomposed into a series of two-dimensional rotations and one-dimensional reflections on mutually orthogonal subspaces of $V$.
We first explain the general idea behind the proof. Consider a rotation $R$ of angle $\theta$ in a two-dimensional space. From its orthonormal basis representation

$$R = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix},$$

we find that the characteristic polynomial of $R$ is $\lambda^2 - 2\cos\theta\,\lambda + 1$, with (complex) roots $\lambda = \cos\theta \pm i\sin\theta = e^{\pm i\theta}$. (Not surprising, since multiplication by $e^{i\theta}$ in the complex plane is rotation by $\theta$.) Thus, given the characteristic polynomial of $R$, we can almost recover its rotation angle.11Because the complex roots occur in conjugate pairs, the information about the sign of $\theta$ is lost. This, too, is not a surprise, because the sign of $\theta$, i.e. whether the rotation is clockwise or counterclockwise, depends on the orientation of the basis vectors.
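To spell out the computation:

$$\det(R - \lambda I) = (\cos\theta - \lambda)^2 + \sin^2\theta = \lambda^2 - 2\cos\theta\,\lambda + 1,$$

and the quadratic formula gives $\lambda = \cos\theta \pm \sqrt{\cos^2\theta - 1} = \cos\theta \pm i\sin\theta = e^{\pm i\theta}$.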
In the case of a reflection $F$, the eigenvalues of $F$ are $1$ and $-1$; so again, the characteristic polynomial of $F$ will provide some information on $F$.
So in $n$ dimensions, we are also going to look at the complex eigenvalues and eigenvectors of $T$ to recover information about the rotations represented by $T$.
But there is one technical point: $T$ is a transformation on a real vector space, so it does not really have “complex eigenvalues and eigenvectors”. To make this concept rigorous, we must consider the complexification of $T$, the operator $T^{\mathbb{C}}$ defined by $T^{\mathbb{C}}(u + iv) = Tu + iTv$ on the vector space $V^{\mathbb{C}}$ consisting of elements of the form $u + iv$, for $u, v \in V$. (For more details, see the entry on complexification (http://planetmath.org/ComplexificationOfVectorSpace).)
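As a quick illustration, take $V = \mathbb{R}^2$ and $T = R$, the rotation above. Then

$$R^{\mathbb{C}}(e_1 - ie_2) = Re_1 - iRe_2 = (\cos\theta + i\sin\theta)\,e_1 + (\sin\theta - i\cos\theta)\,e_2 = e^{i\theta}(e_1 - ie_2),$$

so $e_1 - ie_2$ is a genuine eigenvector of $R^{\mathbb{C}}$ with eigenvalue $e^{i\theta}$, even though $R$ itself has no real eigenvectors (for $\theta \not\equiv 0 \pmod{\pi}$).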
Lemma 1.
For any linear operator $T$ on $V$, there exists a one- or two-dimensional subspace $W$ of $V$ which is invariant under $T$.
Proof.
Consider $T^{\mathbb{C}}$ and its characteristic polynomial. By the Fundamental Theorem of Algebra, the characteristic polynomial has a complex root $\lambda = \alpha + i\beta$, with $\alpha, \beta \in \mathbb{R}$. Then there is an eigenvector $u + iv \neq 0$ (with $u, v \in V$) corresponding to the eigenvalue $\lambda$. We have

$$T^{\mathbb{C}}(u + iv) = (\alpha + i\beta)(u + iv) = (\alpha u - \beta v) + i(\beta u + \alpha v).$$

Equating real and imaginary components, we see that

$$Tu = \alpha u - \beta v, \qquad Tv = \beta u + \alpha v,$$

where $W = \operatorname{span}\{u, v\}$. $W$ is two-dimensional if $u$ and $v$ are linearly independent; otherwise it is one-dimensional22In fact, the space $W$ constructed is two-dimensional if and only if the eigenvalue $\lambda$ is not purely real. Compare with remark (i) after the proof of Theorem 1. Actually, Lemma 1 has more uses than just proving Theorem 1. For example, if $x' = Ax$ is a linear differential equation, and the constant coefficient matrix $A$ has only simple eigenvalues, then it is a consequence of Lemma 1 that the differential equation decomposes into a series of disjoint one-variable and two-variable equations. The solutions are then readily understood: they are always of the form of an exponential multiplied by a sinusoid, and linear combinations thereof. The sinusoids are present whenever there are non-real eigenvalues.. And we have $T(W) \subseteq W$, as claimed. ∎
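To make the footnote’s remark concrete: on a two-dimensional invariant subspace, in the basis $\{u, v\}$ found in the proof, the differential equation $x' = Ax$ reduces to the block

$$A = \begin{pmatrix} \alpha & \beta \\ -\beta & \alpha \end{pmatrix}, \qquad x(t) = e^{At}x(0) = e^{\alpha t} \begin{pmatrix} \cos\beta t & \sin\beta t \\ -\sin\beta t & \cos\beta t \end{pmatrix} x(0),$$

an exponential multiplied by a sinusoid, as claimed.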
Proof of Theorem 1.
We recursively factor $T$; formally, the proof will be by induction on the dimension $n$. The base case $n = 1$ is immediate: an orthogonal operator on a one-dimensional space is either the identity or the reflection $x \mapsto -x$.
For larger $n$, by Lemma 1 there exists a one- or two-dimensional $T$-invariant subspace $W$. The orthogonal complement $W^{\perp}$ of $W$ is $T$-invariant also, because for all $w \in W$ and $v \in W^{\perp}$,

$$\langle Tv, w \rangle = \langle Tv, T(T^{-1}w) \rangle = \langle v, T^{-1}w \rangle = 0,$$

because $T$ preserves inner products, and because $T^{-1}w \in W$ (the invertible $T$ maps the finite-dimensional invariant subspace $W$ onto itself), so that $v \perp T^{-1}w$.
Let $T_1$ be the operator that acts as $T$ on $W$ and is the identity on $W^{\perp}$. Similarly, let $T_2$ be the operator that acts as $T$ on $W^{\perp}$ and is the identity on $W$. Then $T = T_1 T_2$. $T_1$ restricted to $W$ is orthogonal, and since $W$ is one- or two-dimensional, $T_1$ must therefore be a rotation or a reflection (or the identity) in a line or plane.
$T_2$ restricted to $W^{\perp}$ is also orthogonal. $W^{\perp}$ has dimension $n-1$ or $n-2$, so by the induction hypothesis we can continue to factor it into operators acting on subspaces of $W^{\perp}$ that are mutually orthogonal. These subspaces will of course also be orthogonal to $W$.
The proof is now complete, except that we did not rule out $T_1$ being a reflection even when $W$ is two-dimensional. But of course, if $T_1$ is a reflection in two dimensions, then it can be factored as a reflection on a one-dimensional subspace of $W$ (the direction that is negated) composed with the identity on the orthogonal line in $W$ (the direction that is fixed). ∎
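For a concrete instance of Theorem 1, consider the cyclic permutation of coordinates in $\mathbb{R}^3$:

$$T = \begin{pmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}.$$

It is orthogonal, its characteristic polynomial $\lambda^3 - 1$ has roots $1$ and $e^{\pm 2\pi i/3}$, and indeed $T$ is a single rotation by $2\pi/3$ on the plane orthogonal to the fixed line spanned by $(1,1,1)$.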
Actually, in the third paragraph of the proof, we implicitly assumed that every orthogonal operator on a two-dimensional space is either a rotation or a reflection. This is a well-known fact, but it does need to be proven formally:
Theorem 2.
If $W$ is a two-dimensional real inner product space, then every orthogonal operator $R$ on $W$ is either a rotation or a reflection.
Proof.
Fix any orthonormal basis $\{e_1, e_2\}$ for $W$. Since $R$ is orthogonal, $\lVert Re_1 \rVert = \lVert e_1 \rVert = 1$, i.e. $Re_1$ is a unit vector in the plane, so there exists an angle $\theta$ (unique modulo $2\pi$) such that $Re_1 = \cos\theta\, e_1 + \sin\theta\, e_2$. Similarly, $Re_2$ is a unit vector, but since $e_1$ and $e_2$ are orthogonal, so are $Re_1$ and $Re_2$. It follows that $Re_2$ must be either $-\sin\theta\, e_1 + \cos\theta\, e_2$ or $\sin\theta\, e_1 - \cos\theta\, e_2$. Putting all this together, the matrix for $R$ is

$$\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \quad \text{or} \quad \begin{pmatrix} \cos\theta & \sin\theta \\ \sin\theta & -\cos\theta \end{pmatrix}.$$

The first solution for $Re_2$ corresponds to a rotation matrix (and $\det R = 1$); the second solution for $Re_2$ corresponds to a reflection matrix (http://planetmath.org/DerivationOf2DReflectionMatrix) (and $\det R = -1$). ∎
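For the record, the determinants are computed directly:

$$\det\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} = \cos^2\theta + \sin^2\theta = 1, \qquad \det\begin{pmatrix} \cos\theta & \sin\theta \\ \sin\theta & -\cos\theta \end{pmatrix} = -\cos^2\theta - \sin^2\theta = -1.$$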
1 Remarks
i.
Observe that the two equations for $Tu$ and $Tv$ appearing in the proof of Lemma 1 do specify a rotation of angle $\theta$ when $u$ and $v$ are orthonormal and $\lambda = \alpha + i\beta = \cos\theta + i\sin\theta$. So by examining the complex eigenvalues and eigenvectors of $T^{\mathbb{C}}$, we can reconstruct the rotation.
This construction can be used to give an alternate proof of Theorem 1; we sketch it below:
The complexified space $V^{\mathbb{C}}$ has an inner product structure inherited from $V$ (again, see complexification (http://planetmath.org/ComplexificationOfVectorSpace) for details). Let $U = T^{\mathbb{C}}$. Since $T$ is orthogonal, $U$ is unitary, and hence normal ($UU^* = U^*U$). There exists an orthonormal basis of eigenvectors for $U$ (the Schur decomposition (http://planetmath.org/CorollaryOfSchurDecomposition)). Let $w = u + iv$ be any one of these eigenvectors with a complex, non-real eigenvalue $\lambda$. Then $\lvert\lambda\rvert = 1$ because $U$ is unitary, and $\bar{w} = u - iv$ is another eigenvector with eigenvalue $\bar{\lambda}$. Using the inner product formula, the vectors $\sqrt{2}\,u$ and $\sqrt{2}\,v$ can be shown to be orthonormal. Then the proof of Lemma 1 shows that $T$ acts as a rotation on the plane $W = \operatorname{span}\{u, v\}$. All such planes obtained will be orthogonal to each other.
To summarize, the mutually orthogonal subplanes of rotation are found by grouping conjugate pairs of complex eigenvectors. If one actually needs to determine the planes of rotation explicitly (in dimensions $n \geq 3$), then it is probably better to work directly with the complexified matrix than to factor the matrix over the reals.
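Here is a minimal numerical sketch of that approach, assuming NumPy; the function name rotation_planes and the tolerance handling are our own, and the conjugate-pairing step assumes the non-real eigenvalues of $T$ are simple (for repeated angles, a Schur-based method is more robust):

import numpy as np

def rotation_planes(T, tol=1e-9):
    # Recover rotation angles/planes of a real orthogonal matrix T from
    # its complex eigendecomposition, as described in this remark.
    lam, w = np.linalg.eig(T)              # complex eigenvalues/eigenvectors
    planes, fixed, reflected = [], [], []
    used = np.zeros(len(lam), dtype=bool)
    for k in range(len(lam)):
        if used[k]:
            continue
        used[k] = True
        if abs(lam[k].imag) < tol:
            # real eigenvalue: +1 gives a fixed axis, -1 a reflected axis
            x = np.real(w[:, k])
            (fixed if lam[k].real > 0 else reflected).append(x / np.linalg.norm(x))
        else:
            # find the conjugate eigenvalue; w[:, k] = u + iv with
            # ||u|| = ||v|| = 1/sqrt(2), so sqrt(2)u, sqrt(2)v are orthonormal
            j = next(i for i in range(len(lam))
                     if not used[i] and abs(lam[i] - np.conj(lam[k])) < tol)
            used[j] = True
            u = np.sqrt(2) * np.real(w[:, k])
            v = np.sqrt(2) * np.imag(w[:, k])
            planes.append((np.angle(lam[k]), u, v))
    return planes, fixed, reflected

Applied to a rotation matrix in $\mathbb{R}^3$, for example, this returns the rotation angle together with an orthonormal basis of the rotation plane, and the fixed axis separately.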
ii.
The decomposition of $T$ is not unique. However, it is always possible to obtain a decomposition which contains at most one reflection, because any two one-dimensional reflections can always be combined into a two-dimensional rotation (namely, a rotation by $\pi$ on the plane spanned by the two reflected directions). In any case, the parity of the number of reflections in a decomposition of $T$ is invariant, because the parity is determined by $\det T = (-1)^{\text{number of reflections}}$.
iii.
In the decomposition, the component rotations and reflections all commute because they act on orthogonal subspaces.
iv.
If we take a basis of $V$ describing the mutually orthogonal subspaces in Theorem 1, the matrix for $T$ looks like:

$$\begin{pmatrix}
\cos\theta_1 & -\sin\theta_1 & & & & \\
\sin\theta_1 & \cos\theta_1 & & & & \\
& & \ddots & & & \\
& & & -1 & & \\
& & & & 1 & \\
& & & & & \ddots
\end{pmatrix}$$

where $\theta_1, \theta_2, \dots$ are the rotation angles, one for each orthogonal subplane, the $-1$ in the middle is the reflective component (if present), and the trailing diagonal of $1$s corresponds to directions left fixed. The rest of the entries in the matrix are zero.
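For example, in $\mathbb{R}^4$ the decomposition may consist of two independent rotations with no fixed axis at all:

$$T = \begin{pmatrix} \cos\theta_1 & -\sin\theta_1 & 0 & 0 \\ \sin\theta_1 & \cos\theta_1 & 0 & 0 \\ 0 & 0 & \cos\theta_2 & -\sin\theta_2 \\ 0 & 0 & \sin\theta_2 & \cos\theta_2 \end{pmatrix},$$

which rotates the $e_1 e_2$-plane by $\theta_1$ and the $e_3 e_4$-plane by $\theta_2$.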
v.
Sometimes any orthogonal operator $T$ with $\det T = 1$ is called a rotation, even though strictly speaking it is actually a series of rotations (each on different “axes”). Similarly, when $\det T = -1$, $T$ may be called a reflection, even though again it is not always a single (one-dimensional) reflection.
In this language, a rotation composed with a rotation will always be a rotation; a rotation composed with a reflection is a reflection; and two reflections composed together will always be a rotation.
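These composition rules are just multiplicativity of the determinant: $\det(T_1 T_2) = \det T_1 \cdot \det T_2$, so $(+1)(+1) = +1$, $(+1)(-1) = -1$, and $(-1)(-1) = +1$.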
vi.
In $\mathbb{R}^3$, an orthogonal operator with positive determinant is necessarily a rotation about one axis which is left fixed (except when the operator is the identity). This follows simply because there is no way to fit more than one orthogonal subplane into three-dimensional space.
A composition of two rotations in $\mathbb{R}^3$ would then be a rotation too. On the other hand, it is not at all obvious what relation the axis of rotation of the composition has with the original two axes of rotation.
For an explicit formula for a rotation matrix in $\mathbb{R}^3$ that does not require manual calculation of the basis vectors for the rotation subplane, see Rodrigues’ rotation formula.
vii.
In $\mathbb{R}^n$, reflections can be carried out by first embedding $\mathbb{R}^n$ into $\mathbb{R}^{n+1}$ and then rotating $\mathbb{R}^{n+1}$. (Here, the words “rotation” and “reflection” are taken in their extended sense of (v).) For example, a right hand in the plane $\mathbb{R}^2$ can be rotated in $\mathbb{R}^3$ into a left hand.
To be specific, suppose we embed $\mathbb{R}^n$ in $\mathbb{R}^{n+1}$ as the first $n$ coordinates. Then we gain an extra degree of freedom in the last coordinate of $\mathbb{R}^{n+1}$ (with coordinate vector $e_{n+1}$). Given an orthogonal operator $T$ on $\mathbb{R}^n$ with $\det T = -1$, we can extend it to an operator $T'$ on $\mathbb{R}^{n+1}$ by having it act as $T$ on the lower $n$ coordinates, and setting $T'e_{n+1} = -e_{n+1}$. Since $\det T' = (-1)\det T = 1$, our new $T'$ will be a rotation (the extra angle of rotation will be by $\pi$) that reflects sets lying in the plane $\mathbb{R}^n$.
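For instance, with $n = 2$ and the reflection $T = \operatorname{diag}(1, -1)$, which turns a right hand in the plane into a left hand, the extension is

$$T' = \begin{pmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -1 \end{pmatrix},$$

a rotation of $\mathbb{R}^3$ by $\pi$ about the $x$-axis: carrying the hand through the third dimension along this rotation reverses it in the plane.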