simultaneous blockdiagonalization of upper triangular commuting matrices
Let ${\mathbf{e}}_{i}$ denote the (column) vector whose $i$th position is $1$ and where all other positions are $0$. Denote by $[n]$ the set $\{1,\mathrm{\dots},n\}$. Denote by ${\mathrm{M}}_{n}(\mathcal{K})$ the set of all $n\times n$ matrices over $\mathcal{K}$, and by ${\mathrm{GL}}_{n}(\mathcal{K})$ the set of all invertible elements of ${\mathrm{M}}_{n}(\mathcal{K})$. Let ${d}_{i}$ be the function which extracts the $i$th diagonal element of a matrix, i.e., ${d}_{i}(A)={\mathbf{e}}_{i}^{\mathrm{T}}A{\mathbf{e}}_{i}$.
Theorem 1.
Let $K$ be a field, let $n$ be a positive integer, and let $\sim$ be an equivalence relation on $[n]$ such that if $i\sim j$ and $i\leqslant k\leqslant j$ then $k\sim i$. Let ${A}_{1},\dots,{A}_{r}\in {\mathrm{M}}_{n}(K)$ be pairwise commuting upper triangular matrices. If these matrices and $\sim$ are related such that
$$i\sim j\quad\text{if and only if}\quad {d}_{i}({A}_{k})={d}_{j}({A}_{k})\text{ for all }k\in [r]\text{,}$$ 
then there exists a matrix $B\in{\mathrm{GL}}_{n}(K)$ such that:

1.
If ${\mathbf{e}}_{i}^{\mathrm{T}}{B}^{-1}{A}_{k}B{\mathbf{e}}_{j}\ne 0$ then $i\sim j$ and $i\leqslant j$.

2.
If $i\sim j$ then ${\mathbf{e}}_{i}^{\mathrm{T}}{B}^{-1}{A}_{k}B{\mathbf{e}}_{j}={\mathbf{e}}_{i}^{\mathrm{T}}{A}_{k}{\mathbf{e}}_{j}$.
Condition 1 says that if an element of ${B}^{-1}{A}_{k}B$ is nonzero then both its row and column indices must belong to the same equivalence class of $\sim$, i.e., the nonzero elements of ${B}^{-1}{A}_{k}B$ only occur in particular blocks (http://planetmath.org/PartitionedMatrix) along the diagonal, and these blocks correspond to equivalence classes of $\sim$. Condition 2 says that within one of these blocks, ${B}^{-1}{A}_{k}B$ is equal to ${A}_{k}$.
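To make the two conditions concrete, the following NumPy sketch (a hypothetical illustration, not part of the original article; indices are 0-based here, unlike the 1-based text) tests whether a candidate $B$ block-diagonalises a single $A_k$ in the sense of the theorem:

```python
import numpy as np

def satisfies_conditions(A, B, classes):
    """Check conditions 1 and 2 of the theorem for one matrix A.

    `classes[i]` is the label of the equivalence class of index i
    under ~ (labels are arbitrary but equal within a class)."""
    n = A.shape[0]
    M = np.linalg.inv(B) @ A @ B          # the conjugate B^{-1} A B
    for i in range(n):
        for j in range(n):
            # Condition 1: nonzero entries only in upper triangular
            # positions of the diagonal blocks determined by ~.
            if not np.isclose(M[i, j], 0):
                if classes[i] != classes[j] or i > j:
                    return False
            # Condition 2: within a block, B^{-1} A B agrees with A.
            if classes[i] == classes[j] and not np.isclose(M[i, j], A[i, j]):
                return False
    return True
```

For instance, with $B=I$ a matrix already in the required block form passes, while an off-block coupling entry fails condition 1.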
The proof of the theorem requires the following lemma.
Lemma 2.
Let a sequence ${A}_{1},\dots,{A}_{r}\in {\mathrm{M}}_{n}(K)$ of upper triangular matrices be given, and denote by $\mathrm{A}$ the unital (http://planetmath.org/unity) algebra (http://planetmath.org/Algebra) generated by these matrices. For every sequence ${\lambda}_{1},\dots,{\lambda}_{r}\in K$ of scalars there exists a matrix $C\in\mathrm{A}$ such that
$${d}_{i}(C)=\begin{cases}1&\text{if }{d}_{i}({A}_{k})={\lambda}_{k}\text{ for all }k\in [r]\text{,}\\ 0&\text{otherwise}\end{cases}$$ 
for all $i\mathrm{\in}\mathrm{[}n\mathrm{]}$.
The proof of that lemma can be found in this article (http://planetmath.org/CharacteristicMatrixOfDiagonalElementCrossSection).
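In the single-matrix case $r=1$, the lemma can be illustrated directly: a Lagrange-style product over the distinct diagonal values of $A$ is a polynomial in $A$, hence lies in the unital algebra generated by $A$, and (since diagonals of upper triangular matrices multiply entrywise) has exactly the prescribed diagonal. The following NumPy sketch is a hypothetical example, not the linked proof:

```python
import numpy as np

def cross_section(A, lam):
    """Return C, a polynomial in the upper triangular matrix A (hence in
    the unital algebra it generates), with d_i(C) = 1 where d_i(A) = lam
    and d_i(C) = 0 elsewhere.  Single-matrix case r = 1 of Lemma 2."""
    n = A.shape[0]
    C = np.eye(n)
    for mu in set(np.diag(A)):            # distinct diagonal values of A
        if mu != lam:
            C = C @ (A - mu * np.eye(n)) / (lam - mu)
    return C

A = np.array([[2., 1., 0.],
              [0., 2., 3.],
              [0., 0., 5.]])
C = cross_section(A, 2.0)                 # diagonal of C becomes (1, 1, 0)
```

Each factor $(A-\mu I)/(\lambda-\mu)$ sends the diagonal value $\mu$ to $0$ and $\lambda$ to $1$, which is what makes the product work.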
Proof of theorem.
The proof is by induction on the number of equivalence classes of $\sim$. If there is only one equivalence class then one can take $B=I$.
If there is more than one equivalence class, then let $S$ be the equivalence class that contains $n$. By Lemma 2 there exists a matrix $C$ in the unital algebra generated by ${A}_{1},\dots,{A}_{r}$ (hence necessarily upper triangular) such that ${d}_{i}(C)=1$ for all $i\in S$ and ${d}_{i}(C)=0$ for all $i\in [n]\setminus S$. Thus $C$ has a block decomposition
$$C=\left(\begin{array}{cc}\hfill {C}_{11}\hfill & \hfill {C}_{12}\hfill \\ \hfill 0\hfill & \hfill {C}_{22}\hfill \end{array}\right)$$ 
where ${C}_{22}$ is a $|S|\times |S|$ matrix that has all $1$s on the diagonal, and ${C}_{11}$ is an $(n-|S|)\times (n-|S|)$ matrix that has all $0$s on the diagonal.
Let $k\in [r]$ be arbitrary and similarly decompose
$${C}^{n}=\begin{pmatrix}{D}_{11}&{D}_{12}\\ 0&{D}_{22}\end{pmatrix}\text{,}\qquad {A}_{k}=\begin{pmatrix}{A}_{11}&{A}_{12}\\ 0&{A}_{22}\end{pmatrix}\text{.}$$ 
One can identify ${D}_{11}={({C}_{11})}^{n}$ and ${D}_{22}={({C}_{22})}^{n}$, but due to the zero diagonal of ${C}_{11}$ and the fact that the orders of these matrices are smaller than $n$, the more striking equality ${D}_{11}=0$ also holds: a strictly upper triangular matrix of order $m$ satisfies ${N}^{m}=0$. As for ${D}_{22}$, one may conclude that it is invertible, being a power of an upper triangular matrix with all $1$s on the diagonal.
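A quick numerical sanity check of the nilpotency fact behind $D_{11}=0$ (a hypothetical $3\times 3$ example, not from the article):

```python
import numpy as np

# A strictly upper triangular m x m matrix N is nilpotent with N^m = 0,
# so (C_11)^n = 0 whenever the order m = n - |S| of C_11 is below n.
N = np.array([[0., 4., 7.],
              [0., 0., 2.],
              [0., 0., 0.]])
P = np.linalg.matrix_power(N, 3)          # N^3, the zero matrix
```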
Since the algebra that ${C}^{n}$ belongs to was generated by pairwise commuting elements, it is a commutative (http://planetmath.org/Commutative) algebra, and in particular ${C}^{n}{A}_{k}={A}_{k}{C}^{n}$. In terms of the individual blocks, this becomes
$$\left(\begin{array}{cc}\hfill 0\hfill & \hfill {D}_{12}{A}_{22}\hfill \\ \hfill 0\hfill & \hfill {D}_{22}{A}_{22}\hfill \end{array}\right)=\left(\begin{array}{cc}\hfill 0\hfill & \hfill {A}_{11}{D}_{12}+{A}_{12}{D}_{22}\hfill \\ \hfill 0\hfill & \hfill {A}_{22}{D}_{22}\hfill \end{array}\right)\text{.}$$ 
Now let
$$D=\begin{pmatrix}I&{D}_{12}\\ 0&{D}_{22}\end{pmatrix}\text{, so that}\quad {D}^{-1}=\begin{pmatrix}I&-{D}_{12}{D}_{22}^{-1}\\ 0&{D}_{22}^{-1}\end{pmatrix}$$ 
and consider the matrix ${D}^{-1}{A}_{k}D$. Using the blockwise relation ${A}_{11}{D}_{12}+{A}_{12}{D}_{22}={D}_{12}{A}_{22}$ established above,
$$\begin{aligned}{D}^{-1}{A}_{k}D&={D}^{-1}\begin{pmatrix}{A}_{11}&{A}_{12}\\ 0&{A}_{22}\end{pmatrix}\begin{pmatrix}I&{D}_{12}\\ 0&{D}_{22}\end{pmatrix}={D}^{-1}\begin{pmatrix}{A}_{11}&{A}_{11}{D}_{12}+{A}_{12}{D}_{22}\\ 0&{A}_{22}{D}_{22}\end{pmatrix}\\ &=\begin{pmatrix}I&-{D}_{12}{D}_{22}^{-1}\\ 0&{D}_{22}^{-1}\end{pmatrix}\begin{pmatrix}{A}_{11}&{D}_{12}{A}_{22}\\ 0&{D}_{22}{A}_{22}\end{pmatrix}=\begin{pmatrix}{A}_{11}&{D}_{12}{A}_{22}-{D}_{12}{D}_{22}^{-1}{D}_{22}{A}_{22}\\ 0&{D}_{22}^{-1}{D}_{22}{A}_{22}\end{pmatrix}=\begin{pmatrix}{A}_{11}&0\\ 0&{A}_{22}\end{pmatrix}\end{aligned}$$ 
so that the positions with row not in $S$ and column in $S$ are all zero, as requested for ${B}^{-1}{A}_{k}B$. It should be observed that the choice of $D$ is independent of $k$, and that the same $D$ thus works for all the ${A}_{k}$.
In order to complete the proof, one applies the induction hypothesis to the restriction of $\sim$ to $[n]\setminus S$ and the corresponding submatrices of ${D}^{-1}{A}_{k}D$, which satisfy the same conditions but have one equivalence class fewer. This produces a block-diagonalising matrix ${B}^{\prime}$ for these submatrices, and thus the sought $B$ can be constructed as $D\begin{pmatrix}{B}^{\prime}&0\\ 0&I\end{pmatrix}$. ∎
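The proof is constructive, and the whole induction can be sketched as a short recursive routine. The following NumPy implementation is a floating-point sketch under the hypotheses of the theorem (in particular, each equivalence class is an interval of indices), not the author's code; it builds $C$ via the Lagrange-style cross-sections of Lemma 2:

```python
import numpy as np

def block_diagonalize(As):
    """Given pairwise commuting upper triangular matrices As, return B
    such that each B^{-1} A_k B is block diagonal, the blocks being the
    runs of equal diagonal tuples (d_i(A_1), ..., d_i(A_r))."""
    n = As[0].shape[0]
    tuples = [tuple(A[i, i] for A in As) for i in range(n)]
    if all(t == tuples[0] for t in tuples):
        return np.eye(n)                    # a single equivalence class
    # S = {m, ..., n-1}, the class containing the last index (the
    # theorem's hypothesis makes each class an interval of indices).
    m = n - 1
    while m > 0 and tuples[m - 1] == tuples[n - 1]:
        m -= 1
    # Lemma 2: C in the generated algebra with d_i(C) = 1 on S and
    # d_i(C) = 0 off S, via one cross-section per A_k.
    C = np.eye(n)
    for k, A in enumerate(As):
        lam = tuples[n - 1][k]
        for mu in set(np.diag(A)):
            if mu != lam:
                C = C @ (A - mu * np.eye(n)) / (lam - mu)
    Cn = np.linalg.matrix_power(C, n)       # its D_11 block is exactly 0
    D = np.eye(n)
    D[:m, m:] = Cn[:m, m:]                  # D_12
    D[m:, m:] = Cn[m:, m:]                  # D_22, invertible
    Dinv = np.linalg.inv(D)
    # Recurse on the leading (n-|S|) x (n-|S|) blocks of D^{-1} A_k D.
    Bp = block_diagonalize([(Dinv @ A @ D)[:m, :m] for A in As])
    B = np.eye(n)
    B[:m, :m] = Bp
    return D @ B
```

For instance, with a single matrix whose diagonal is $(1,1,5)$, the returned $B$ zeroes the entries coupling the class $\{1,2\}$ to the class $\{3\}$ while leaving the diagonal blocks unchanged, as conditions 1 and 2 require.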
Title  simultaneous blockdiagonalization of upper triangular commuting matrices 

Canonical name  SimultaneousBlockdiagonalizationOfUpperTriangularCommutingMatrices 
Date of creation  20130322 15:29:42 
Last modified on  20130322 15:29:42 
Owner  lars_h (9802) 
Last modified by  lars_h (9802) 
Numerical id  5 
Author  lars_h (9802) 
Entry type  Theorem 
Classification  msc 15A21 