# commuting matrices

We consider the properties of commuting matrices and linear transformations over a vector space $V$. Two linear transformations $\varphi_{i}:V\rightarrow V$, $i=1,2$ are said to commute if for every $v\in V$,

$\varphi_{1}(\varphi_{2}(v))=\varphi_{2}(\varphi_{1}(v)).$

If $V$ has finite dimension $n$ and we fix a basis of $V$, then we may represent the linear transformations as $n\times n$ matrices $A_{i}$, and the transformations commute if and only if their matrices commute:

$A_{1}A_{2}=A_{2}A_{1}.$
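The commuting condition is easy to test numerically. A minimal sketch (the matrix entries are hypothetical; any matrix commutes with polynomials in itself, which supplies an easy positive example):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
B = A @ A - 4.0 * A  # B = A^2 - 4A, a polynomial in A, so it commutes with A

# Test the condition A1 A2 = A2 A1.
print(np.allclose(A @ B, B @ A))  # True

# A generic pair of matrices fails the test.
C = np.array([[0.0, 1.0],
              [0.0, 0.0]])
print(np.allclose(A @ C, C @ A))  # False
```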

Simultaneous triangularisation of commuting matrices over any field can be achieved but may require an extension of the field. The reason begins to be apparent from the study of eigenvalues.
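For instance, a real rotation matrix has no real eigenvalues (for angles that are not multiples of $\pi$), so it cannot even be triangularised over $\mathbb{R}$; passing to the extension $\mathbb{C}$ supplies the eigenvalues. A quick check (the angle is a hypothetical choice):

```python
import numpy as np

theta = np.pi / 2  # rotation by 90 degrees
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

eigvals = np.linalg.eigvals(R)
print(eigvals)  # approximately [i, -i]: complex, so triangularisation needs C
```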

###### Remark 1.

Because the consequences of commuting are best expressed through eigenvectors, we work with linear transformations rather than matrices for the moment.

Recall a linear transformation $f:V\rightarrow V$ is said to leave a subspace $E\leq V$ *invariant* if $f(E)\leq E$.

###### Proposition 2.

If $\{\varphi_{i}\}_{i\in I}$ are commuting linear transformations and $E$ is an eigenspace of $\varphi_{i_{0}}$ for some $i_{0}\in I$, then for all $i\in I$, $\varphi_{i}(E)\leq E$.

###### Proof.

Let $\lambda$ be the eigenvalue of $\varphi_{i_{0}}$ on $E$. Take any $i\in I$ and $v\in E$. Then

$\varphi_{i_{0}}(\varphi_{i}(v))=\varphi_{i}(\varphi_{i_{0}}(v))=\varphi_{i}(\lambda v)=\lambda\varphi_{i}(v).$

Therefore $\varphi_{i}(v)\in E$, as $E$ is the $\lambda$-eigenspace of $\varphi_{i_{0}}$. In particular, $\varphi_{i}(E)\leq E$. ∎
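Proposition 2 can be checked numerically. In this sketch (all matrix entries are hypothetical), $A$ and $B$ are built to commute by giving them a common block structure in the basis $P$, and we verify that $B$ maps the two-dimensional $2$-eigenspace of $A$ into itself:

```python
import numpy as np

P = np.array([[1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
Pinv = np.linalg.inv(P)

A = P @ np.diag([2.0, 2.0, 5.0]) @ Pinv  # eigenvalue 2 has a 2-dim eigenspace
M = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 9.0]])
B = P @ M @ Pinv                          # commutes with A by construction
assert np.allclose(A @ B, B @ A)

# E = 2-eigenspace of A, spanned by the first two columns of P.
E = P[:, :2]
for v in E.T:
    w = B @ v                             # image of an eigenvector under B
    # w still satisfies A w = 2 w, i.e. w lies in E
    print(np.allclose(A @ w, 2.0 * w))    # True
```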

We have just shown that commuting linear transformations preserve each other’s eigenspaces. This property does not depend on a finite dimension for $V$ or a finite set of commuting transformations. However, to characterize commuting linear transformations further will require that $V$ have finite dimension.

###### Proposition 3.

Let $V$ be a finite dimensional vector space and let $\{\varphi_{i}\}_{i\in I}$ be a family of commuting diagonalizable linear transformations from $V$ to $V$. Then the $\varphi_{i}$ can be simultaneously diagonalized.

###### Proof.

If a linear transformation of a finite dimensional vector space is diagonalizable over its field, then all its eigenvalues lie in the field (in some basis the matrix is diagonal, and the eigenvalues are simply the diagonal entries).

If all the eigenvalues of a diagonalizable linear transformation are equal, then its diagonal form is scalar. If every $\varphi_{i}$ is scalar, then they are already simultaneously diagonalized.

Now suppose that some $\varphi_{i}$ is not a scalar transformation. Then it has at least two distinct eigenvalues, so each of its eigenspaces has dimension less than that of $V$.

Now we set up an induction on the dimension of $V$. When the dimension of $V$ is 1, all linear transformations are scalar. Suppose that on every vector space of dimension at most $n$, any family of commuting diagonalizable linear transformations can be simultaneously diagonalized. Consider the case $\dim V=n+1$. Either every $\varphi_{i}$ is scalar, and the family is already simultaneously diagonalized, or some $\varphi_{i_{0}}$ is not scalar, in which case its eigenspaces are proper subspaces. Since the maps commute, they leave each other's eigenspaces invariant by Proposition 2. Moreover, the restriction of each $\varphi_{i}$ to an eigenspace of $\varphi_{i_{0}}$ is again diagonalizable: the minimal polynomial of $\varphi_{i}$ is a product of distinct linear factors, and the minimal polynomial of the restriction divides it. By induction we may therefore simultaneously diagonalize the restricted maps on each eigenspace of $\varphi_{i_{0}}$. As $\varphi_{i_{0}}$ is diagonalizable, $V$ is the direct sum of its eigenspaces, so combining the bases obtained on the eigenspaces simultaneously diagonalizes every $\varphi_{i}$. ∎
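The induction above translates into a procedure: diagonalize one map, then diagonalize the restrictions of the other map on each eigenspace. A minimal sketch for a pair of matrices, specialised to real symmetric commuting matrices (an assumption that keeps the numerics simple, since symmetric matrices are always diagonalizable, by orthogonal matrices; the test matrices are hypothetical):

```python
import numpy as np

def simultaneous_diagonalize(A, B, tol=1e-8):
    """Return orthogonal P with P.T @ A @ P and P.T @ B @ P both diagonal."""
    assert np.allclose(A @ B, B @ A), "matrices must commute"
    vals, V = np.linalg.eigh(A)
    cols = []
    i = 0
    while i < len(vals):
        # Collect an entire eigenspace of A (eigenvalues equal up to tol).
        j = i
        while j < len(vals) and vals[j] - vals[i] < tol:
            j += 1
        E = V[:, i:j]                       # orthonormal basis of the eigenspace
        # B restricted to the eigenspace, written in the basis E
        # (Proposition 2 guarantees B maps the eigenspace into itself).
        _, W = np.linalg.eigh(E.T @ B @ E)
        cols.append(E @ W)                  # common eigenvectors of A and B
        i = j
    return np.hstack(cols)

# Hypothetical commuting pair built from a shared orthonormal basis Q.
Q, _ = np.linalg.qr(np.array([[1.0, 1.0, 1.0],
                              [1.0, -1.0, 0.0],
                              [1.0, 1.0, -2.0]]))
A = Q @ np.diag([1.0, 1.0, 4.0]) @ Q.T
B = Q @ np.diag([2.0, 3.0, 3.0]) @ Q.T

P = simultaneous_diagonalize(A, B)
for M in (A, B):
    D = P.T @ M @ P
    print(np.allclose(D, np.diag(np.diag(D))))  # True for both
```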

Of course it is possible to have commuting matrices which are not diagonalizable. At the other extreme are unipotent matrices, that is, matrices with all eigenvalues equal to 1. Aside from the identity matrix, unipotent matrices are never diagonalizable, yet they often commute. Here the generalized eigenspaces substitute for the usual eigenspaces.

It is generally not true that two unipotent matrices commute, even if they share the same eigenspace. For example, the set of unitriangular matrices forms a nilpotent group which is abelian only for $2\times 2$ matrices.

However, if we consider unipotent matrices of the form

$\begin{bmatrix}I_{k}&*\\ 0&I_{j}\end{bmatrix}$

we find that multiplying two such matrices adds their upper-right $k\times j$ blocks, so this family corresponds to the additive group of $k\times j$ matrices. Thus this large family of unipotent matrices commutes.
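The correspondence with addition is easy to verify directly (the `unipotent` helper and the $2\times 3$ blocks below are hypothetical illustrations):

```python
import numpy as np

def unipotent(X):
    """Build the block matrix [[I_k, X], [0, I_j]] from a k x j block X."""
    k, j = X.shape
    top = np.hstack([np.eye(k), X])
    bot = np.hstack([np.zeros((j, k)), np.eye(j)])
    return np.vstack([top, bot])

X = np.array([[1.0, 2.0, 0.0],
              [0.0, 5.0, 1.0]])
Y = np.array([[3.0, 0.0, 2.0],
              [1.0, 1.0, 4.0]])

UX, UY = unipotent(X), unipotent(Y)
print(np.allclose(UX @ UY, unipotent(X + Y)))  # True: product adds the blocks
print(np.allclose(UX @ UY, UY @ UX))           # True: the family is abelian
```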

## Mathematics Subject Classification

15A04
