# determinant as a multilinear mapping

Let $\mathbf{M}=(M_{ij})$ be an $n\times n$ matrix with entries in a field $K$. The matrix $\mathbf{M}$ is really the same thing as a list of $n$ column vectors of size $n$. Consequently, the determinant operation may be regarded as a mapping

 $\det:\overbrace{K^{n}\times\ldots\times K^{n}}^{n\mbox{ times}}\rightarrow K$

The determinant of a matrix $\mathbf{M}$ is then defined to be $\det(\mathbf{M}_{1},\ldots,\mathbf{M}_{n}),$ where $\mathbf{M}_{j}\in K^{n}$ denotes the $j^{\text{th}}$ column of $\mathbf{M}$.

Starting with the definition

 $\det(\mathbf{M}_{1},\ldots,\mathbf{M}_{n})=\sum_{\pi\in S_{n}}\mathrm{sgn}(\pi)\,M_{1\pi_{1}}M_{2\pi_{2}}\cdots M_{n\pi_{n}}$ (1)

the following properties are easily established:

1. the determinant is multilinear: it is linear in each column separately;

2. the determinant is anti-symmetric: it changes sign whenever two columns are exchanged;

3. the determinant of the identity matrix is $1$.

These three properties uniquely characterize the determinant, and indeed can — some would say should — be used as the definition of the determinant operation.
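As a concrete check, the Leibniz formula (1) can be implemented directly and the three properties verified numerically. The sketch below is illustrative only; the names `sign` and `det_leibniz` are not from the text, and the sign of a permutation is computed here by counting inversions, which has the same parity as the number of transpositions.

```python
from itertools import permutations

def sign(perm):
    """Sign of a permutation of 0..n-1, via the number of inversions
    (pairs i < j with perm[i] > perm[j]); same parity as the number
    of transpositions needed to sort the list."""
    inversions = sum(1 for i in range(len(perm))
                       for j in range(i + 1, len(perm))
                       if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

def det_leibniz(M):
    """Determinant of a square matrix M (given as a list of rows),
    computed as the sum over all permutations in formula (1)."""
    n = len(M)
    total = 0
    for p in permutations(range(n)):
        term = sign(p)
        for i in range(n):
            term *= M[i][p[i]]   # matches the factor M_{i, pi_i}
        total += term
    return total
```

Note that this sum has $n\cdot n!$ multiplications, so it serves only to illustrate the definition; it is not how determinants are computed in practice.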

Let us prove this. We proceed by representing elements of $K^{n}$ as linear combinations of

 $\mathbf{e}_{1}=\begin{pmatrix}1\\ 0\\ 0\\ \vdots\\ 0\end{pmatrix},\quad\mathbf{e}_{2}=\begin{pmatrix}0\\ 1\\ 0\\ \vdots\\ 0\end{pmatrix},\quad\ldots\quad\mathbf{e}_{n}=\begin{pmatrix}0\\ 0\\ 0\\ \vdots\\ 1\end{pmatrix},$

the standard basis of $K^{n}$. Let $\mathbf{M}$ be an $n\times n$ matrix. The $j^{\text{th}}$ column is represented as $\sum_{i}M_{ij}\mathbf{e}_{i}$; whence using multilinearity

 $\det(\mathbf{M})=\det\left(\sum_{i}M_{i1}\mathbf{e}_{i}\,,\;\sum_{i}M_{i2}\mathbf{e}_{i}\,,\;\ldots\;,\sum_{i}M_{in}\mathbf{e}_{i}\right)=\sum_{i_{1},\ldots,i_{n}=1}^{n}M_{i_{1}1}M_{i_{2}2}\cdots M_{i_{n}n}\det(\mathbf{e}_{i_{1}},\mathbf{e}_{i_{2}},\ldots,\mathbf{e}_{i_{n}}).$

The anti-symmetry assumption implies that the expressions $\det(\mathbf{e}_{i_{1}},\mathbf{e}_{i_{2}},\ldots,\mathbf{e}_{i_{n}})$ vanish if any two of the indices $i_{1},\ldots,i_{n}$ coincide. If all $n$ indices are distinct,

 $\det(\mathbf{e}_{i_{1}},\mathbf{e}_{i_{2}},\ldots,\mathbf{e}_{i_{n}})=\pm\det(\mathbf{e}_{1},\ldots,\mathbf{e}_{n}),$

the sign in the above expression being $+1$ if the list $(i_{1},\ldots,i_{n})$ can be rearranged into $(1,\ldots,n)$ by an even number of transpositions, and $-1$ if an odd number is required. In other words, the sign is $\mathrm{sgn}$ of the permutation $j\mapsto i_{j}$. For instance, with $n=3$,

 $\det(\mathbf{e}_{2},\mathbf{e}_{3},\mathbf{e}_{1})=-\det(\mathbf{e}_{1},\mathbf{e}_{3},\mathbf{e}_{2})=\det(\mathbf{e}_{1},\mathbf{e}_{2},\mathbf{e}_{3}),$

since two transpositions are required. Since we also assume that

 $\det(\mathbf{e}_{1},\ldots,\mathbf{e}_{n})=1,$

we now recover the original definition (1). (To be precise, the expansion yields $\sum_{\sigma\in S_{n}}\mathrm{sgn}(\sigma)M_{\sigma_{1}1}M_{\sigma_{2}2}\cdots M_{\sigma_{n}n}$; reindexing by $\pi=\sigma^{-1}$ and using $\mathrm{sgn}(\sigma^{-1})=\mathrm{sgn}(\sigma)$ gives exactly the sum in (1).)
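The case $n=2$ illustrates the whole argument in miniature. Writing the columns of a general $2\times 2$ matrix in the standard basis and expanding by multilinearity,

 $\det(\mathbf{M})=\det(M_{11}\mathbf{e}_{1}+M_{21}\mathbf{e}_{2}\,,\;M_{12}\mathbf{e}_{1}+M_{22}\mathbf{e}_{2})=M_{11}M_{22}\det(\mathbf{e}_{1},\mathbf{e}_{2})+M_{21}M_{12}\det(\mathbf{e}_{2},\mathbf{e}_{1})=M_{11}M_{22}-M_{12}M_{21},$

the two terms with a repeated basis vector having vanished by anti-symmetry, and the final step using $\det(\mathbf{e}_{1},\mathbf{e}_{2})=1$.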

Title: determinant as a multilinear mapping (DeterminantAsAMultilinearMapping) · Date: 2013-03-22 · Author: rmilson · Type: Theorem · MSC: 15A15 · Related: ExteriorAlgebra