# simple tensor

The tensor product (http://planetmath.org/TensorProduct) $U\otimes V$ of two vector spaces $U$ and $V$ is another vector space, characterised by being universal for bilinear maps on $U\times V$. As part of this construction, there is an operation $\otimes$ on vectors such that $\mathbf{u}\otimes\mathbf{v}\in U\otimes V$ for all $\mathbf{u}\in U$ and $\mathbf{v}\in V$, and the primary subject of this article is the image of that operation.

###### Definition 1.

The element $\mathbf{w}\in U\otimes V$ is said to be a simple tensor if there exist $\mathbf{u}\in U$ and $\mathbf{v}\in V$ such that $\mathbf{w}=\mathbf{u}\otimes\mathbf{v}$.

More generally, the element $\mathbf{w}\in W=U_{1}\otimes\cdots\otimes U_{k}$ is said to be a simple tensor (with respect to the decomposition $U_{1}\otimes\cdots\otimes U_{k}$ of $W$) if there exist $\mathbf{u}_{i}\in U_{i}$ for $i=1,\ldots,k$ such that $\mathbf{w}=\mathbf{u}_{1}\otimes\cdots\otimes\mathbf{u}_{k}$.

For this definition to be interesting, there must also be tensors which are not simple, and indeed most tensors aren’t. In order to illustrate why, it is convenient to consider the tensor product of two finite-dimensional vector spaces $U=\mathcal{K}^{m}$ and $V=\mathcal{K}^{n}$ over some field $\mathcal{K}$. In this case one can let $U\otimes V=\mathcal{K}^{m\times n}$ (the vector space of $m\times n$ matrices), since $\mathcal{K}^{m\times n}$ is isomorphic to any generic construction of $U\otimes V$ and the tensor product of two spaces is anyway only defined up to isomorphism. Furthermore, considering elements of $U$ and $V$ as column vectors, the tensor product of vectors can be defined through

 $\mathbf{u}\otimes\mathbf{v}=\mathbf{u}\cdot\mathbf{v}^{\mathrm{T}}$

where $\cdot$ denotes the product of two matrices (in this case an $m\times 1$ matrix by a $1\times n$ matrix). As a very concrete example of this,

 $\begin{pmatrix}u_{1}\\ u_{2}\\ u_{3}\end{pmatrix}\otimes\begin{pmatrix}v_{1}\\ v_{2}\\ v_{3}\\ v_{4}\end{pmatrix}=\begin{pmatrix}u_{1}v_{1}&u_{1}v_{2}&u_{1}v_{3}&u_{1}v_{4}\\ u_{2}v_{1}&u_{2}v_{2}&u_{2}v_{3}&u_{2}v_{4}\\ u_{3}v_{1}&u_{3}v_{2}&u_{3}v_{3}&u_{3}v_{4}\end{pmatrix}\text{.}$
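
In the matrix model above, the tensor product of two column vectors is just their outer product. A minimal sketch (assuming $\mathcal{K}=\mathbb{R}$ and using NumPy's `np.outer`, which computes exactly $\mathbf{u}\cdot\mathbf{v}^{\mathrm{T}}$):

```python
import numpy as np

# Matrix model of the tensor product: for column vectors u in K^m and
# v in K^n, the simple tensor u (x) v is the m x n matrix u . v^T.
u = np.array([1, 2, 3])
v = np.array([4, 5, 6, 7])

T = np.outer(u, v)  # same as u.reshape(-1, 1) @ v.reshape(1, -1)

# Entry (i, j) of the result is u_i * v_j, as in the display above.
assert T.shape == (3, 4)
assert T[1, 2] == u[1] * v[2]
```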

One reason the simple tensors in $U\otimes V$ cannot exhaust this space (provided $m,n\geqslant 2$) is that there are essentially only $m+n-1$ degrees of freedom in the choice of a simple tensor: the $m+n$ coordinates of $\mathbf{u}$ and $\mathbf{v}$, less one because $(\lambda\mathbf{u})\otimes(\lambda^{-1}\mathbf{v})=\mathbf{u}\otimes\mathbf{v}$ for every nonzero scalar $\lambda$. The space $U\otimes V$ as a whole, however, has $mn$ dimensions (http://planetmath.org/Dimension2). Hence

 $\mathcal{K}^{m}\otimes\mathcal{K}^{n}\neq\left\{\,\mathbf{u}\otimes\mathbf{v}\,\middle|\,\mathbf{u}\in\mathcal{K}^{m},\mathbf{v}\in\mathcal{K}^{n}\,\right\}\qquad\text{when $m,n\geqslant 2$.}$
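
In the matrix model this inequality is easy to witness: a matrix is a simple tensor exactly when its matrix rank is at most $1$, so any matrix of rank $2$ lies outside the set of simple tensors. A small check (assuming $\mathcal{K}=\mathbb{R}$ and NumPy):

```python
import numpy as np

# Every outer product u . v^T has matrix rank at most 1.
simple = np.outer([1, 2], [3, 4, 5])
assert np.linalg.matrix_rank(simple) == 1

# This matrix has rank 2, so it is not an outer product of two vectors,
# i.e. it is a non-simple tensor in K^2 (x) K^3.
not_simple = np.array([[1.0, 0.0, 0.0],
                       [0.0, 1.0, 0.0]])
assert np.linalg.matrix_rank(not_simple) == 2
```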

How, then, can one understand the non-simple tensors? In general, they are finite sums of simple tensors. One way to see this is from the theorem that $U\otimes V$ has a basis consisting of products of pairs of basis vectors.

###### Theorem 2 (tensor product basis (http://planetmath.org/TensorProductBasis)).

Let $U$ and $V$ be vector spaces over $\mathcal{K}$ with bases $\{\mathbf{e}_{i}\}_{i\in I}$ and $\{\mathbf{f}_{j}\}_{j\in J}$ respectively. Then $\{\mathbf{e}_{i}\otimes\mathbf{f}_{j}\}_{(i,j)\in I\times J}$ is a basis for $U\otimes V$.

Expressing some arbitrary $\mathbf{w}\in U\otimes V$ as a linear combination

 $\mathbf{w}=\sum_{r=1}^{n}\lambda_{r}(\mathbf{e}_{i_{r}}\otimes\mathbf{f}_{j_{r}})$

with respect to such a basis immediately produces the decomposition

 $\mathbf{w}=\sum_{r=1}^{n}(\lambda_{r}\mathbf{e}_{i_{r}})\otimes\mathbf{f}_{j_{r}}$

as a sum of simple tensors, but this decomposition is often far from optimally short. Let $\mathbf{e}_{1}=\left(\begin{smallmatrix}1\\ 0\end{smallmatrix}\right)\in\mathcal{K}^{2}$ and $\mathbf{e}_{2}=\left(\begin{smallmatrix}0\\ 1\end{smallmatrix}\right)\in\mathcal{K}^{2}$. The tensor $\mathbf{e}_{1}\otimes\mathbf{e}_{1}+\mathbf{e}_{2}\otimes\mathbf{e}_{2}=\left(\begin{smallmatrix}1&0\\ 0&1\end{smallmatrix}\right)$ is not simple, but as it happens the tensor $\mathbf{e}_{1}\otimes\mathbf{e}_{1}+\mathbf{e}_{1}\otimes\mathbf{e}_{2}+\mathbf{e}_{2}\otimes\mathbf{e}_{1}+\mathbf{e}_{2}\otimes\mathbf{e}_{2}=\left(\begin{smallmatrix}1&1\\ 1&1\end{smallmatrix}\right)=\left(\begin{smallmatrix}1\\ 1\end{smallmatrix}\right)\otimes\left(\begin{smallmatrix}1\\ 1\end{smallmatrix}\right)$ is simple. In general it is not trivial to find the shortest way of expressing a tensor as a sum of simple tensors, so there is a name for the length of the shortest such sum.
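
The two examples above can be verified directly in the matrix model (a sketch assuming $\mathcal{K}=\mathbb{R}$ and NumPy; matrix rank $1$ is the test for simplicity):

```python
import numpy as np

e1 = np.array([1, 0])
e2 = np.array([0, 1])

# e1 (x) e1 + e2 (x) e2 is the 2 x 2 identity matrix: rank 2, not simple.
w1 = np.outer(e1, e1) + np.outer(e2, e2)
assert np.linalg.matrix_rank(w1) == 2

# Summing all four basis products gives the all-ones matrix, which
# factors as (1,1) (x) (1,1): rank 1, hence simple.
w2 = (np.outer(e1, e1) + np.outer(e1, e2)
      + np.outer(e2, e1) + np.outer(e2, e2))
assert np.linalg.matrix_rank(w2) == 1
assert np.array_equal(w2, np.outer([1, 1], [1, 1]))
```

So the four-term basis decomposition of the all-ones matrix collapses to a single simple tensor, while no such collapse is possible for the identity.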

###### Definition 3.

The rank of a tensor $\mathbf{w}$ is the smallest natural number $n$ such that $\mathbf{w}=\mathbf{w}_{1}+\cdots+\mathbf{w}_{n}$ for some set of $n$ simple tensors $\mathbf{w}_{1}$, …, $\mathbf{w}_{n}$.

In particular, the zero tensor has rank $0$, and all other simple tensors have rank $1$.
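For order-$2$ tensors realised as matrices, the rank of Definition 3 coincides with ordinary matrix rank, so it can be checked numerically. A sketch (assuming $\mathcal{K}=\mathbb{R}$ and NumPy):

```python
import numpy as np

rng = np.random.default_rng(0)

# The zero tensor has rank 0; a nonzero simple tensor has rank 1.
assert np.linalg.matrix_rank(np.zeros((3, 4))) == 0
assert np.linalg.matrix_rank(np.outer([1.0, 2.0, 3.0], [4.0, 5.0])) == 1

# A sum of k simple tensors has rank at most k, and for generic
# (here: random Gaussian) factors the rank is exactly k.
k = 2
w = sum(np.outer(rng.standard_normal(4), rng.standard_normal(5))
        for _ in range(k))
assert np.linalg.matrix_rank(w) == k
```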

• Warning.  There is an entirely different concept which is also called ‘the rank of a tensor (http://planetmath.org/Tensor)’, namely the number of components (factors) in the tensor product forming the space in which the tensor lives. This latter ‘rank’ concept does not generalise ‘rank of a matrix (http://planetmath.org/RankLinearMapping)’. The ‘rank’ of Definition 3 does generalise ‘rank of a matrix’. (It also generalises rank of a quadratic form (http://planetmath.org/Rank5).)

One area where the distinction between simple and non-simple tensors is particularly important is in quantum mechanics, because the state space of a pair of quantum systems is in general the tensor product of the state spaces of the component systems. When the combined state is a simple tensor $\mathbf{w}=\mathbf{u}\otimes\mathbf{v}$, then that state can be understood as though one subsystem has state $\mathbf{u}$ and the other state $\mathbf{v}$, but when the combined state $\mathbf{w}$ is a non-simple tensor $\mathbf{u}_{1}\otimes\mathbf{v}_{1}+\mathbf{u}_{2}\otimes\mathbf{v}_{2}$ then the full system cannot be understood by considering the two subsystems in isolation, even if there is no apparent interaction between them. This situation is often described by saying that the two subsystems are entangled, or using phrases such as “either $U$ is in state $\mathbf{u}_{1}$ and $V$ is in state $\mathbf{v}_{1}$, or else $U$ is in state $\mathbf{u}_{2}$ and $V$ is in state $\mathbf{v}_{2}$.” Entanglement is an important part of what makes quantum systems different from probabilistic classical systems. The physical interpretations are often mind-boggling, but the mathematical meaning is no more mysterious than ‘non-simple tensor’.

Entanglement can also be a useful concept for understanding pure mathematics. One reason that the comultiplication $\Delta\colon C\longrightarrow C\otimes C$ of a coalgebra $C$ cannot simply be replaced in the definition by two maps $\Delta_{L},\Delta_{R}\colon C\longrightarrow C$ that compute the ‘left’ and ‘right’ parts of $\Delta$ is that the value of $\Delta$ may be entangled, in which case one left part $\Delta_{L}(c)$ and one right part $\Delta_{R}(c)$ cannot fully encode $\Delta(c)$.