Levi-Civita permutation symbol
Definition 1.
Let $k_{i}\in\{1,\cdots,n\}$ for all $i=1,\cdots,n$. The Levi-Civita permutation symbols $\varepsilon_{k_{1}\cdots k_{n}}$ and $\varepsilon^{k_{1}\cdots k_{n}}$ are defined as
$\varepsilon_{k_{1}\cdots k_{n}}=\varepsilon^{k_{1}\cdots k_{n}}=\begin{cases}+1&\text{when }\{l\mapsto k_{l}\}\text{ is an even permutation (of }\{1,\cdots,n\}\text{),}\\ -1&\text{when }\{l\mapsto k_{l}\}\text{ is an odd permutation,}\\ 0&\text{otherwise, i.e., when }k_{i}=k_{j}\text{ for some }i\neq j.\end{cases}$
The Levi-Civita permutation symbol is a special case of the generalized Kronecker delta symbol. Using this fact, one can write the Levi-Civita permutation symbol as the determinant of an $n\times n$ matrix consisting of traditional delta symbols. See the entry on the generalized Kronecker symbol for details.
When using the Levi-Civita permutation symbol and the generalized Kronecker delta symbol, the Einstein summation convention is usually employed. Below, we shall also use this convention.
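The definition above can be sketched in a few lines of Python (the function name `epsilon` is my own). A permutation is even or odd according to the parity of its inversion count, which gives a direct way to compute the symbol:

```python
def epsilon(*k):
    """Levi-Civita symbol: +1 / -1 when (k_1, ..., k_n) is an even / odd
    permutation of (1, ..., n), and 0 when any index repeats."""
    if len(set(k)) != len(k):
        return 0  # some k_i = k_j with i != j
    # The parity of a permutation equals the parity of its inversion count.
    inversions = sum(1 for a in range(len(k)) for b in range(a + 1, len(k))
                     if k[a] > k[b])
    return -1 if inversions % 2 else 1
```

For example, `epsilon(1, 2, 3)` and `epsilon(3, 1, 2)` give $+1$, `epsilon(2, 1, 3)` gives $-1$, and `epsilon(1, 1, 2)` gives $0$. Since only distinctness and order of the arguments matter, the same helper also works with 0-based indices.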
Properties

When $n=2$, we have for all $i,j,m,n$ in $\{1,2\}$,
$\displaystyle\varepsilon_{ij}\varepsilon^{mn}=\delta_{i}^{m}\delta_{j}^{n}-\delta_{i}^{n}\delta_{j}^{m},$ (1)
$\displaystyle\varepsilon_{ij}\varepsilon^{in}=\delta_{j}^{n},$ (2)
$\displaystyle\varepsilon_{ij}\varepsilon^{ij}=2.$ (3)
When $n=3$, we have for all $i,j,k,m,n$ in $\{1,2,3\}$,
$\displaystyle\varepsilon_{jmn}\varepsilon^{imn}=2\delta^{i}_{j},$ (4)
$\displaystyle\varepsilon_{ijk}\varepsilon^{ijk}=6.$ (5)
Let us prove these properties. The proofs are instructional since they demonstrate typical argumentation methods for manipulating the permutation symbols.
Proof. For equation 1, let us first note that both sides are antisymmetric with respect to $ij$ and $mn$. We therefore only need to consider the case $i\neq j$ and $m\neq n$. By substitution, we see that the equation holds for $\varepsilon_{12}\varepsilon^{12}$, i.e., for $i=m=1$ and $j=n=2$. (Both sides then equal one.) Since the equation is antisymmetric in $ij$ and $mn$, any other set of values for these indices can be reduced to the above case (which holds). The equation thus holds for all values of $ij$ and $mn$. Using equation 1, we have for equation 2
$\displaystyle\varepsilon_{ij}\varepsilon^{in}=\delta_{i}^{i}\delta_{j}^{n}-\delta_{i}^{n}\delta_{j}^{i}=2\delta_{j}^{n}-\delta_{j}^{n}=\delta_{j}^{n}.$
Here we used the Einstein summation convention with $i$ going from $1$ to $2$. Equation 3 follows similarly from equation 2. To establish equation 4, let us first observe that both sides vanish when $i\neq j$. Indeed, if $i\neq j$, then one cannot choose $m$ and $n$ such that both permutation symbols on the left are nonzero. Then, with $i=j$ fixed, there are only two ways to choose $m$ and $n$ from the remaining two indices. For any such choice, we have $\varepsilon_{jmn}\varepsilon^{imn}=(\varepsilon^{imn})^{2}=1$ (no summation), and the result follows. The last property follows since $3!=6$ and, for any distinct indices $i,j,k$ in $\{1,2,3\}$, we have $\varepsilon_{ijk}\varepsilon^{ijk}=1$ (no summation). $\Box$
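Since the indices range over small finite sets, equations 1 through 5 can also be checked exhaustively. A brute-force Python sketch (with small `epsilon` and `delta` helpers of my own, writing each Einstein summation out as an explicit sum):

```python
def epsilon(*k):
    # Levi-Civita symbol via the parity of the inversion count.
    if len(set(k)) != len(k):
        return 0
    inv = sum(1 for a in range(len(k)) for b in range(a + 1, len(k))
              if k[a] > k[b])
    return -1 if inv % 2 else 1

def delta(i, j):
    # Kronecker delta.
    return 1 if i == j else 0

R2, R3 = (1, 2), (1, 2, 3)

# Equation (1): no repeated indices, so no summation.
assert all(epsilon(i, j) * epsilon(m, n)
           == delta(i, m) * delta(j, n) - delta(i, n) * delta(j, m)
           for i in R2 for j in R2 for m in R2 for n in R2)

# Equation (2): sum over the repeated index i.
assert all(sum(epsilon(i, j) * epsilon(i, n) for i in R2) == delta(j, n)
           for j in R2 for n in R2)

# Equation (3): sum over both repeated indices.
assert sum(epsilon(i, j) ** 2 for i in R2 for j in R2) == 2

# Equation (4): sum over the repeated indices m and n.
assert all(sum(epsilon(j, m, n) * epsilon(i, m, n) for m in R3 for n in R3)
           == 2 * delta(i, j)
           for i in R3 for j in R3)

# Equation (5): sum over all three repeated indices.
assert sum(epsilon(i, j, k) ** 2 for i in R3 for j in R3 for k in R3) == 6
```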
Examples and Applications.

The determinant of an $n\times n$ matrix $A=(a_{{ij}})$ can be written as
$\det A=\varepsilon_{{i_{1}\cdots i_{n}}}a_{{1i_{1}}}\cdots a_{{ni_{n}}},$ where each $i_{l}$ should be summed over $1,\ldots,n$.
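As a sketch of this formula (assuming 0-based rows and columns, and an `epsilon` helper of my own that computes the symbol from the inversion count), note that only the terms where $(i_{1},\ldots,i_{n})$ is a permutation survive, so we may sum over permutations instead of all index tuples:

```python
from itertools import permutations
from math import prod

def epsilon(*k):
    # Levi-Civita symbol via the parity of the inversion count.
    if len(set(k)) != len(k):
        return 0
    inv = sum(1 for a in range(len(k)) for b in range(a + 1, len(k))
              if k[a] > k[b])
    return -1 if inv % 2 else 1

def det(A):
    """det A = epsilon_{i_1 ... i_n} a_{1 i_1} ... a_{n i_n};
    non-permutation index tuples contribute 0 and are skipped."""
    n = len(A)
    return sum(epsilon(*p) * prod(A[r][p[r]] for r in range(n))
               for p in permutations(range(n)))
```

For instance, `det([[1, 2], [3, 4]])` gives $-2$, matching $a_{11}a_{22}-a_{12}a_{21}$.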

If $A=(A^{1},A^{2},A^{3})$ and $B=(B^{1},B^{2},B^{3})$ are vectors in $\mathbbmss{R}^{3}$ (represented in some right-handed orthonormal basis), then the $i$th component of their cross product equals
$(A\times B)^{i}=\varepsilon^{ijk}A^{j}B^{k}.$ For instance, the first component of $A\times B$ is $A^{2}B^{3}-A^{3}B^{2}$. From the above expression for the cross product, it is clear that $A\times B=-B\times A$. Further, if $C=(C^{1},C^{2},C^{3})$ is a vector like $A$ and $B$, then the triple scalar product equals
$A\cdot(B\times C)=\varepsilon^{ijk}A^{i}B^{j}C^{k}.$ From this expression, it can be seen that the triple scalar product is antisymmetric when exchanging any adjacent arguments. For example, $A\cdot(B\times C)=-B\cdot(A\times C)$.

Suppose $F=(F^{1},F^{2},F^{3})$ is a vector field defined on some open set of $\mathbbmss{R}^{3}$ with Cartesian coordinates $x=(x^{1},x^{2},x^{3})$. Then the $i$th component of the curl of $F$ equals
$(\nabla\times F)^{i}(x)=\varepsilon^{ijk}\frac{\partial}{\partial x^{j}}F^{k}(x).$
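This component formula can be checked numerically. The sketch below (my own helpers; the partial derivatives are approximated by central differences with a step `h`, so the result is approximate for general fields) applies the formula to a field given as a Python function:

```python
def epsilon(*k):
    # Levi-Civita symbol via the parity of the inversion count.
    if len(set(k)) != len(k):
        return 0
    inv = sum(1 for a in range(len(k)) for b in range(a + 1, len(k))
              if k[a] > k[b])
    return -1 if inv % 2 else 1

def curl(F, x, h=1e-6):
    """(curl F)^i = epsilon^{ijk} dF^k/dx^j, with each partial derivative
    replaced by a central difference of width 2h."""
    def d(k, j):  # approximate dF^k / dx^j at the point x
        xp, xm = list(x), list(x)
        xp[j] += h
        xm[j] -= h
        return (F(xp)[k] - F(xm)[k]) / (2 * h)
    return [sum(epsilon(i, j, k) * d(k, j)
                for j in range(3) for k in range(3))
            for i in range(3)]
```

As a check, for the rotational field $F=(-x^{2},x^{1},0)$ the curl is $(0,0,2)$, which the sketch reproduces up to floating-point error.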
Mathematics Subject Classification
05A10