# tensor transformations

The present entry employs the terminology and notation defined and described in the entry on tensor arrays and basic tensors. To keep things reasonably self-contained, we recall that the symbol $\mathrm{T}^{p,q}$ refers to the vector space of type $(p,q)$ tensor arrays, i.e. maps

 $I^{p}\times I^{q}\rightarrow\mathbb{K},$

where $I$ is some finite list of index labels, and where $\mathbb{K}$ is a field. The symbols $\varepsilon_{(i)},\varepsilon^{(i)},\;i\in I$ refer to the column and row vectors giving the natural basis of $\mathrm{T}^{1,0}$ and $\mathrm{T}^{0,1}$, respectively.

Let $I$ and $J$ be two finite lists of equal cardinality, and let

 $T:\mathbb{K}^{I}\rightarrow\mathbb{K}^{J}$

be a linear isomorphism. Every such isomorphism is uniquely represented by an invertible matrix

 $M:J\times I\rightarrow\mathbb{K}$

with entries given by

 $M^{j}_{\!\hphantom{j}i}=\left(T\varepsilon_{(i)}\right)^{j},\quad i\in I,\;j\in J.$

In other words, the action of $T$ is described by the following substitutions

 $\varepsilon_{(i)}\mapsto\sum_{j\in J}M^{j}_{\!\hphantom{j}i}\,\varepsilon_{(j)},\quad i\in I.$ (1)

Equivalently, the action of $T$ is given by matrix-multiplication of column vectors in $\mathbb{K}^{I}$ by $M$.
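The substitution rule (1) can be checked numerically: applying $T$ to the basis vector $\varepsilon_{(i)}$ must produce column $i$ of $M$. The following is a minimal sketch using numpy; the particular $3\times 3$ invertible matrix `M` is an arbitrary choice for illustration, not something from the entry.

```python
import numpy as np

# An arbitrary invertible matrix M representing T (illustrative assumption).
M = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 3.0],
              [1.0, 0.0, 1.0]])

# Natural basis column vectors epsilon_(i); column i of the identity matrix.
eps = np.eye(3)

# Rule (1): T applied to epsilon_(i) is the combination
# sum_j M^j_i epsilon_(j), i.e. exactly column i of M.
for i in range(3):
    assert np.allclose(M @ eps[:, i], M[:, i])
```

Note that matrix-multiplying an arbitrary column vector by `M` is then just the linear extension of this rule to all of $\mathbb{K}^{I}$.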

The corresponding substitution relations for the type $(0,1)$ tensors involve the inverse matrix $M^{-1}:I\times J\rightarrow\mathbb{K}$; they describe the action of the dual homomorphism of the inverse transformation, $\left(T^{-1}\right)^{*}:\left(\mathbb{K}^{I}\right)^{*}\rightarrow\left(\mathbb{K}^{J}\right)^{*}$, and take the form

 $\varepsilon^{(i)}\mapsto\sum_{j\in J}\left(M^{-1}\right)^{i}_{\!\hphantom{i}j}\,\varepsilon^{(j)},\quad i\in I.$ (2)

The substitution rules for the type $(0,1)$ tensors take this form because of the requirement that the $\varepsilon_{(i)}$ and $\varepsilon^{(i)}$ remain dual bases even after the substitution. In other words, we want the substitutions to preserve the relations

 $\varepsilon^{(i_{1})}\varepsilon_{(i_{2})}=\delta^{\,i_{1}}_{\;i_{2}},\quad i_{1},i_{2}\in I,$

where the left-hand side of the above equation features the inner product and the right-hand side the Kronecker delta. Given that the vector basis transforms as in (1), this constraint forces the substitution rules (2) for the linear form basis; they are the only ones possible.
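The duality constraint can also be verified directly: after the substitutions, the new vector basis consists of the columns of $M$ and the new covector basis of the rows of $M^{-1}$, so their pairings form the matrix $M^{-1}M$, which is the identity. A sketch in numpy, again with an arbitrarily chosen invertible `M`:

```python
import numpy as np

# Arbitrary invertible matrix (illustrative assumption).
M = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 3.0],
              [1.0, 0.0, 1.0]])
Minv = np.linalg.inv(M)

# Substituted vector basis: columns of M (rule (1)).
# Substituted covector basis: rows of M^{-1} (rule (2)).
# Entry (i1, i2) of Minv @ M is the pairing of new eps^(i1) with
# new eps_(i2); duality demands this be the Kronecker delta.
pairings = Minv @ M
assert np.allclose(pairings, np.eye(3))
```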

The classical terminology of contravariant and covariant indices is motivated by thinking in terms of substitutions. Thus, suppose we perform a linear substitution and change a vector, i.e. a type $(1,0)$ tensor, $X\in\mathbb{K}^{I}$, into a vector $Y\in\mathbb{K}^{J}$. The indexed values of the former and of the latter are related by

 $Y^{j}=\sum_{i\in I}M^{j}_{\!\hphantom{j}i}\,X^{i},\quad j\in J.$ (3)

Thus, we see that the “transformation rule” for indices is contravariant to the substitution rule (1) for basis vectors.

In modern terms, this contravariance is best described by saying that the dual space construction is a contravariant functor (see the entry on the dual homomorphism). In other words, the substitution rule for the linear forms, i.e. the type $(0,1)$ tensors, is contravariant to the substitution rule for vectors:

 $\varepsilon^{(j)}\mapsto\sum_{i\in I}M^{j}_{\!\hphantom{j}i}\,\varepsilon^{(i)},\quad j\in J,$ (4)

in full agreement with the relation shown in (2). Everything comes together, and equations (3) and (4) are seen to be one and the same, once we remark that tensor array values can be obtained by contracting with characteristic arrays. For example,

 $X^{i}=\varepsilon^{(i)}(X),\quad i\in I;\qquad Y^{j}=\varepsilon^{(j)}(Y),\quad j\in J.$
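The claim that indexed values are obtained by contracting with characteristic arrays is easy to confirm numerically: applying the covector $\varepsilon^{(i)}$ (row $i$ of the identity matrix) to a column vector extracts its $i$-th component. A small sketch, with an arbitrary vector `X` as an assumption:

```python
import numpy as np

X = np.array([1.0, -2.0, 5.0])   # an arbitrary vector (illustrative)
eps_rows = np.eye(3)             # row i is the covector eps^(i)

# Contracting eps^(i) with X recovers the indexed value X^i.
for i in range(3):
    assert eps_rows[i] @ X == X[i]
```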

Finally we must remark that the transformation rule for covariant indices involves the inverse matrix $M^{-1}$. Thus if $\alpha\in\mathrm{T}^{0,1}(I)$ is transformed to $\beta\in\mathrm{T}^{0,1}(J)$, the indices will be related by

 $\beta_{j}=\sum_{i\in I}\left(M^{-1}\right)^{i}_{\!\hphantom{i}j}\,\alpha_{i},\quad j\in J.$
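The point of the two opposite rules is that the scalar pairing $\alpha(X)$ is left invariant: transforming $X$ by $M$ (rule (3)) and $\alpha$ by $M^{-1}$ makes the matrices cancel. A minimal numerical sketch, where the matrix `M`, the vector `X`, and the linear form `alpha` are all arbitrary illustrative choices:

```python
import numpy as np

# Arbitrary invertible substitution matrix (illustrative assumption).
M = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 3.0],
              [1.0, 0.0, 1.0]])
Minv = np.linalg.inv(M)

X = np.array([1.0, -2.0, 5.0])       # vector: contravariant indices
alpha = np.array([3.0, 0.0, -1.0])   # linear form: covariant indices

Y = M @ X            # contravariant rule (3): Y^j = sum_i M^j_i X^i
beta = alpha @ Minv  # covariant rule: beta_j = sum_i (M^{-1})^i_j alpha_i

# The pairing is preserved: beta(Y) = alpha(M^{-1} M X) = alpha(X).
assert np.isclose(beta @ Y, alpha @ X)
```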

The most general transformation rule for tensor array indices is therefore the following: the indexed values of a tensor array $X\in\mathrm{T}^{p,q}(I)$ and the values of the transformed tensor array $Y\in\mathrm{T}^{p,q}(J)$ are related by

 $Y^{\!j_{1}\ldots j_{p}}_{\;l_{1}\ldots l_{q}}=\!\!\sum_{\substack{i_{1},\ldots,i_{p}\in I\\ k_{1},\ldots,k_{q}\in I}}\!\!M^{j_{1}}_{\!\hphantom{j_{1}}i_{1}}\cdots M^{j_{p}}_{\!\hphantom{j_{p}}i_{p}}\left(M^{-1}\right)^{\!k_{1}}_{\hphantom{k_{1}}l_{1}}\cdots\left(M^{-1}\right)^{\!k_{q}}_{\hphantom{k_{q}}l_{q}}X^{i_{1}\ldots i_{p}}_{k_{1}\ldots k_{q}},$

for all possible choices of indices $j_{1},\ldots,j_{p},l_{1},\ldots,l_{q}\in J$. A débauche d'indices, indeed!
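For a concrete instance of the general rule, take $p=q=1$: a type $(1,1)$ tensor array transforms as $Y^{j}_{\;l}=\sum_{i,k}M^{j}_{\;i}\left(M^{-1}\right)^{k}_{\;l}X^{i}_{\;k}$, which is the familiar similarity transform $MXM^{-1}$. The sketch below checks this with `numpy.einsum`; the matrix `M` and the tensor array `X` are arbitrary illustrative choices.

```python
import numpy as np

# Arbitrary invertible substitution matrix (illustrative assumption).
M = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 3.0],
              [1.0, 0.0, 1.0]])
Minv = np.linalg.inv(M)

# A type (1,1) tensor array X^i_k with arbitrary entries.
X = np.arange(9.0).reshape(3, 3)

# General rule specialized to p = q = 1:
# Y^j_l = sum_{i,k} M^j_i (M^{-1})^k_l X^i_k
Y = np.einsum('ji,kl,ik->jl', M, Minv, X)

# For (1,1) tensors this coincides with the similarity transform.
assert np.allclose(Y, M @ X @ Minv)
```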

Title: tensor transformations · Canonical name: TensorTransformations · Date: 2013-03-22 · Author: rmilson (146) · Type: Derivation · MSC: 15A69