# Lecture notes on determinants

## 1 Introduction.

The determinant operation is an algebraic formula, involving addition and multiplication, that combines the $n^{2}$ entries of an $n\times n$ matrix of numbers into a single number. The determinant has many useful and surprising properties. In particular, the determinant “determines” whether or not a matrix is singular: if the determinant is zero, the matrix is singular; if not, the matrix is invertible.

## 2 Notation.

We can regard an $n\times n$ matrix $A$ as an entity in and of itself, as a collection of $n^{2}$ numbers arranged in a table, or as a list of column vectors:

 $A=\begin{bmatrix}a_{11}&a_{12}&\ldots&a_{1n}\\ a_{21}&a_{22}&\ldots&a_{2n}\\ \vdots&\vdots&\ddots&\vdots\\ a_{n1}&a_{n2}&\ldots&a_{nn}\end{bmatrix}=\begin{bmatrix}\mathbf{a}_{1},\mathbf{a}_{2},\ldots,\mathbf{a}_{n}\end{bmatrix},$

where

 $\mathbf{a}_{1}=\begin{bmatrix}a_{11}\\ \vdots\\ a_{n1}\end{bmatrix}=a_{11}\mathbf{e}_{1}+\cdots+a_{n1}\mathbf{e}_{n},\quad\mathbf{a}_{2}=\begin{bmatrix}a_{12}\\ \vdots\\ a_{n2}\end{bmatrix}=a_{12}\mathbf{e}_{1}+\cdots+a_{n2}\mathbf{e}_{n},\quad\text{etc.}$

Correspondingly, we employ the following notation for the determinant:

 $\det(A)=\left|\begin{matrix}a_{11}&\ldots&a_{1n}\\ \vdots&\ddots&\vdots\\ a_{n1}&\ldots&a_{nn}\end{matrix}\right|=\left|\begin{matrix}\mathbf{a}_{1},\ldots,\mathbf{a}_{n}\end{matrix}\right|.$

## 3 Defining properties.

The determinant operation obeys certain key properties. The correct application of these rules allows us to evaluate the determinant of a given square matrix. For the sake of simplicity, we describe these rules for the case of the $2\times 2$ and $3\times 3$ determinants. Determinants of larger sizes obey analogous rules.

**1. Multi-linearity.** The determinant operation is linear in each of its vector arguments.

**(a)** The determinant distributes over addition. Thus, for a $3\times 3$ matrix $A$ and a column vector $\mathbf{b}\in\mathbb{R}^{3}$, we have

 $\left|\begin{matrix}\mathbf{a}_{1}+\mathbf{b},\mathbf{a}_{2},\mathbf{a}_{3}\end{matrix}\right|=\left|\begin{matrix}\mathbf{a}_{1},\mathbf{a}_{2},\mathbf{a}_{3}\end{matrix}\right|+\left|\begin{matrix}\mathbf{b},\mathbf{a}_{2},\mathbf{a}_{3}\end{matrix}\right|,$

 $\left|\begin{matrix}\mathbf{a}_{1},\mathbf{a}_{2}+\mathbf{b},\mathbf{a}_{3}\end{matrix}\right|=\left|\begin{matrix}\mathbf{a}_{1},\mathbf{a}_{2},\mathbf{a}_{3}\end{matrix}\right|+\left|\begin{matrix}\mathbf{a}_{1},\mathbf{b},\mathbf{a}_{3}\end{matrix}\right|,$

 $\left|\begin{matrix}\mathbf{a}_{1},\mathbf{a}_{2},\mathbf{a}_{3}+\mathbf{b}\end{matrix}\right|=\left|\begin{matrix}\mathbf{a}_{1},\mathbf{a}_{2},\mathbf{a}_{3}\end{matrix}\right|+\left|\begin{matrix}\mathbf{a}_{1},\mathbf{a}_{2},\mathbf{b}\end{matrix}\right|.$

Thus, if $A,B$ are two $2\times 2$ matrices, the determinant of the sum $A+B$ will have a total of $4$ terms (think FOIL):

 $\det(A+B)=\left|\begin{matrix}\mathbf{a}_{1}+\mathbf{b}_{1},\mathbf{a}_{2}+\mathbf{b}_{2}\end{matrix}\right|=\left|\begin{matrix}\mathbf{a}_{1},\mathbf{a}_{2}\end{matrix}\right|+\left|\begin{matrix}\mathbf{a}_{1},\mathbf{b}_{2}\end{matrix}\right|+\left|\begin{matrix}\mathbf{b}_{1},\mathbf{a}_{2}\end{matrix}\right|+\left|\begin{matrix}\mathbf{b}_{1},\mathbf{b}_{2}\end{matrix}\right|=\det(A)+\left|\begin{matrix}\mathbf{a}_{1},\mathbf{b}_{2}\end{matrix}\right|+\left|\begin{matrix}\mathbf{b}_{1},\mathbf{a}_{2}\end{matrix}\right|+\det(B).$

Warning: the formula $\det(A+B)=\det(A)+\det(B)$ is most certainly wrong; it has the F and the L terms from FOIL, but is missing the O and the I terms. Remember, the determinant is not linear; it’s multi-linear!
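If you want to see the four-term expansion in action, here is a small Python sketch with made-up entries; `det2` is a helper we define that takes the two columns of a $2\times 2$ matrix as (top, bottom) pairs.

```python
def det2(c1, c2):
    # |c1, c2| = ad - bc for columns c1 = (a, c), c2 = (b, d)
    return c1[0] * c2[1] - c1[1] * c2[0]

a1, a2 = (1, 3), (2, 4)      # columns of A (made-up entries)
b1, b2 = (5, 7), (6, 8)      # columns of B

lhs = det2((a1[0] + b1[0], a1[1] + b1[1]), (a2[0] + b2[0], a2[1] + b2[1]))
rhs = det2(a1, a2) + det2(a1, b2) + det2(b1, a2) + det2(b1, b2)
print(lhs == rhs)                          # True: the four-term "FOIL" identity holds
print(lhs == det2(a1, a2) + det2(b1, b2))  # False: det(A+B) != det(A) + det(B)
```

The second comparison fails precisely because the O and I cross-terms are missing.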

**(b)** Scaling one column of a matrix scales the determinant by the same factor. Thus, for a scalar $k\in\mathbb{R}$, we have

 $\left|\begin{matrix}k\mathbf{a}_{1},\mathbf{a}_{2},\mathbf{a}_{3}\end{matrix}\right|=k\left|\begin{matrix}\mathbf{a}_{1},\mathbf{a}_{2},\mathbf{a}_{3}\end{matrix}\right|=k\det(A).$

Similarly,

 $\left|\begin{matrix}\mathbf{a}_{1},k\mathbf{a}_{2},\mathbf{a}_{3}\end{matrix}\right|=\left|\begin{matrix}\mathbf{a}_{1},\mathbf{a}_{2},k\mathbf{a}_{3}\end{matrix}\right|=k\det(A).$

Let’s see what happens to the determinant if we scale the entire matrix:

 $\det(kA)=\left|\begin{matrix}k\mathbf{a}_{1},k\mathbf{a}_{2},k\mathbf{a}_{3}\end{matrix}\right|=k\left|\begin{matrix}\mathbf{a}_{1},k\mathbf{a}_{2},k\mathbf{a}_{3}\end{matrix}\right|=k^{2}\left|\begin{matrix}\mathbf{a}_{1},\mathbf{a}_{2},k\mathbf{a}_{3}\end{matrix}\right|=k^{3}\left|\begin{matrix}\mathbf{a}_{1},\mathbf{a}_{2},\mathbf{a}_{3}\end{matrix}\right|=k^{3}\det(A).$
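Here is a quick numeric check of the scaling rules in Python (made-up entries), using the standard six-term $3\times 3$ determinant formula, derived later in these notes, as a black box:

```python
def det3(m):
    # six-term formula for a 3x3 matrix given as a list of rows
    return (m[0][0]*m[1][1]*m[2][2] + m[0][1]*m[1][2]*m[2][0] + m[0][2]*m[1][0]*m[2][1]
            - m[0][0]*m[1][2]*m[2][1] - m[0][1]*m[1][0]*m[2][2] - m[0][2]*m[1][1]*m[2][0])

A = [[2, 0, 1], [1, 3, 2], [0, 1, 4]]
k = 5
scale_col0 = [[k * row[0], row[1], row[2]] for row in A]  # scale first column only
scale_all  = [[k * x for x in row] for row in A]          # scale every entry

print(det3(scale_col0) == k * det3(A))      # True: one column scaled, det scaled by k
print(det3(scale_all) == k**3 * det3(A))    # True: whole matrix scaled, det scaled by k^3
```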
**(c)** A matrix with a zero column has a zero determinant. For example,

 $\left|\begin{matrix}\boldsymbol{0},\mathbf{a}_{2},\mathbf{a}_{3}\end{matrix}\right|=\left|\begin{matrix}0&a_{12}&a_{13}\\ 0&a_{22}&a_{23}\\ 0&a_{32}&a_{33}\end{matrix}\right|=0.$
**2. Skew-symmetry.** A multi-variable function or formula is called symmetric if it does not depend on the order of its arguments. For example, ordinary addition and multiplication are symmetric operations. A multi-variable function is said to be skew-symmetric if exchanging any two arguments changes the sign of the result. The 3-dimensional cross product is an example of a skew-symmetric operation:

 $\mathbf{u}\times\mathbf{v}=-\mathbf{v}\times\mathbf{u},\quad\mathbf{u},\mathbf{v}\in\mathbb{R}^{3}.$

Likewise, the $n\times n$ determinant is a skew-symmetric operation, albeit one with $n$ arguments.

Thus, for a $2\times 2$ matrix $A$ we have

 $\det(A)=\left|\begin{matrix}\mathbf{a}_{1},\mathbf{a}_{2}\end{matrix}\right|=-\left|\begin{matrix}\mathbf{a}_{2},\mathbf{a}_{1}\end{matrix}\right|.$

There are six possible ways to rearrange the columns of a $3\times 3$ matrix. Correspondingly, for a $3\times 3$ matrix $A$ we have

 $\det(A)=\left|\begin{matrix}\mathbf{a}_{1},\mathbf{a}_{2},\mathbf{a}_{3}\end{matrix}\right|$

 $=-\left|\begin{matrix}\mathbf{a}_{2},\mathbf{a}_{1},\mathbf{a}_{3}\end{matrix}\right|=-\left|\begin{matrix}\mathbf{a}_{3},\mathbf{a}_{2},\mathbf{a}_{1}\end{matrix}\right|=-\left|\begin{matrix}\mathbf{a}_{1},\mathbf{a}_{3},\mathbf{a}_{2}\end{matrix}\right|$

 $=+\left|\begin{matrix}\mathbf{a}_{2},\mathbf{a}_{3},\mathbf{a}_{1}\end{matrix}\right|=+\left|\begin{matrix}\mathbf{a}_{3},\mathbf{a}_{1},\mathbf{a}_{2}\end{matrix}\right|$

The determinants in the 3rd line are equal to $\det(A)$ because, in each case, the matrices in question differ from $A$ by 2 column exchanges.

Skew-symmetry of the determinant operation has an important consequence: a matrix with two identical columns has zero determinant. Consider, for example, a $3\times 3$ matrix $A$ with identical first and second columns. By skew-symmetry, if we exchange the first two columns we negate the value of the determinant:

 $\left|\begin{matrix}\mathbf{a}_{1},\mathbf{a}_{1},\mathbf{a}_{3}\end{matrix}\right|=-\left|\begin{matrix}\mathbf{a}_{1},\mathbf{a}_{1},\mathbf{a}_{3}\end{matrix}\right|.$

Therefore, $\det(A)$ is equal to its negation. This can only mean that $\det(A)=0$.
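Both consequences of skew-symmetry are easy to test numerically. In this Python sketch (made-up entries), `from_cols` is a helper of ours that assembles a matrix from column vectors:

```python
def det3(m):
    # six-term formula for a 3x3 matrix given as a list of rows
    return (m[0][0]*m[1][1]*m[2][2] + m[0][1]*m[1][2]*m[2][0] + m[0][2]*m[1][0]*m[2][1]
            - m[0][0]*m[1][2]*m[2][1] - m[0][1]*m[1][0]*m[2][2] - m[0][2]*m[1][1]*m[2][0])

def from_cols(c1, c2, c3):
    # build the row-list representation from three column vectors
    return [[c1[i], c2[i], c3[i]] for i in range(3)]

a1, a2, a3 = (1, 0, 2), (3, 1, 1), (0, 2, 1)
print(det3(from_cols(a2, a1, a3)) == -det3(from_cols(a1, a2, a3)))  # one swap flips the sign
print(det3(from_cols(a1, a1, a3)) == 0)                             # repeated column gives zero
```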

**3. The identity rule.** The determinant of the identity matrix is equal to $1$. Written out in symbols for the $3\times 3$ case, this means that

 $\det(I_{3})=\left|\begin{matrix}\mathbf{e}_{1},\mathbf{e}_{2},\mathbf{e}_{3}\end{matrix}\right|=1.$

## 4 Evaluation.

The above properties dictate the rules by which we evaluate a determinant. Overall, the process is very reminiscent of binomial expansion — the algebra that goes into expanding an expression like $(x+y)^{3}$. The essential difference in evaluating determinants is that for scalar algebra the order of the variables is unimportant: $x\cdot y=y\cdot x$. However, when we evaluate determinants, the choice of order can introduce a minus sign, or result in a zero answer, if some of the arguments are repeated.

Let’s see how evaluation works for $2\times 2$ determinants.

 $\left|\begin{matrix}a&b\\ c&d\end{matrix}\right|=\left|\begin{matrix}a\,\mathbf{e}_{1}+c\,\mathbf{e}_{2},\ b\,\mathbf{e}_{1}+d\,\mathbf{e}_{2}\end{matrix}\right|$

 $=ab\left|\begin{matrix}\mathbf{e}_{1},\mathbf{e}_{1}\end{matrix}\right|+ad\left|\begin{matrix}\mathbf{e}_{1},\mathbf{e}_{2}\end{matrix}\right|+cb\left|\begin{matrix}\mathbf{e}_{2},\mathbf{e}_{1}\end{matrix}\right|+cd\left|\begin{matrix}\mathbf{e}_{2},\mathbf{e}_{2}\end{matrix}\right|$

 $=0+ad\left|\begin{matrix}\mathbf{e}_{1},\mathbf{e}_{2}\end{matrix}\right|-bc\left|\begin{matrix}\mathbf{e}_{1},\mathbf{e}_{2}\end{matrix}\right|+0=(ad-bc)\left|\begin{matrix}\mathbf{e}_{1},\mathbf{e}_{2}\end{matrix}\right|=ad-bc.$

In the above evaluation we used skew-symmetry 3 times:

 $\left|\begin{matrix}\mathbf{e}_{1},\mathbf{e}_{1}\end{matrix}\right|=0,\quad\left|\begin{matrix}\mathbf{e}_{2},\mathbf{e}_{2}\end{matrix}\right|=0,\quad\left|\begin{matrix}\mathbf{e}_{2},\mathbf{e}_{1}\end{matrix}\right|=-\left|\begin{matrix}\mathbf{e}_{1},\mathbf{e}_{2}\end{matrix}\right|.$

At the very end, we also used the identity rule. It is useful to contrast the above manipulations with the expansion of

 $(ax+cy)(bx+dy)=abx^{2}+(ad+bc)xy+cdy^{2}.$

The distributive step is the same. The different answers arise because ordinary multiplication is a symmetric operation.
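The bookkeeping above (drop terms with a repeated basis vector, flip the sign of the swapped pair) can be mechanized. Here is a Python sketch; the helper `det2_by_expansion` is a name of ours, and the entries are made up:

```python
def det2_by_expansion(a, b, c, d):
    # columns: a*e1 + c*e2 and b*e1 + d*e2
    # sign of |e_i, e_j|: 0 if i == j, +1 for (1, 2), -1 for (2, 1)
    sign = {(1, 1): 0, (1, 2): 1, (2, 1): -1, (2, 2): 0}
    coeff = {1: (a, b), 2: (c, d)}  # coeff[i] = coefficients of e_i in columns 1 and 2
    return sum(coeff[i][0] * coeff[j][1] * sign[(i, j)] for i in (1, 2) for j in (1, 2))

print(det2_by_expansion(3, 1, 4, 2) == 3*2 - 1*4)  # True: matches ad - bc
```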

Next, let’s see how to evaluate a $3\times 3$ determinant.

 $\left|\begin{matrix}a_{1}&b_{1}&c_{1}\\ a_{2}&b_{2}&c_{2}\\ a_{3}&b_{3}&c_{3}\end{matrix}\right|=\left|\begin{matrix}a_{1}\mathbf{e}_{1}+a_{2}\mathbf{e}_{2}+a_{3}\mathbf{e}_{3},\ b_{1}\mathbf{e}_{1}+b_{2}\mathbf{e}_{2}+b_{3}\mathbf{e}_{3},\ c_{1}\mathbf{e}_{1}+c_{2}\mathbf{e}_{2}+c_{3}\mathbf{e}_{3}\end{matrix}\right|$

The symmetric analogue would be an expansion of the form

 $(a_{1}x+a_{2}y+a_{3}z)(b_{1}x+b_{2}y+b_{3}z)(c_{1}x+c_{2}y+c_{3}z)=a_{1}b_{1}c_{1}x^{3}+\cdots$

In both cases, if we were to expand fully, we would get an expression involving $3\cdot 3\cdot 3=27$ terms. However, skew-symmetry tells us that a determinant involving repeated standard vectors equals zero. Thus, of the 27 terms we keep only the 6 terms that involve all three standard vectors:

 $\left|\begin{matrix}a_{1}&b_{1}&c_{1}\\ a_{2}&b_{2}&c_{2}\\ a_{3}&b_{3}&c_{3}\end{matrix}\right|=a_{1}b_{2}c_{3}\left|\begin{matrix}\mathbf{e}_{1},\mathbf{e}_{2},\mathbf{e}_{3}\end{matrix}\right|+a_{2}b_{3}c_{1}\left|\begin{matrix}\mathbf{e}_{2},\mathbf{e}_{3},\mathbf{e}_{1}\end{matrix}\right|+a_{3}b_{1}c_{2}\left|\begin{matrix}\mathbf{e}_{3},\mathbf{e}_{1},\mathbf{e}_{2}\end{matrix}\right|+a_{1}b_{3}c_{2}\left|\begin{matrix}\mathbf{e}_{1},\mathbf{e}_{3},\mathbf{e}_{2}\end{matrix}\right|+a_{2}b_{1}c_{3}\left|\begin{matrix}\mathbf{e}_{2},\mathbf{e}_{1},\mathbf{e}_{3}\end{matrix}\right|+a_{3}b_{2}c_{1}\left|\begin{matrix}\mathbf{e}_{3},\mathbf{e}_{2},\mathbf{e}_{1}\end{matrix}\right|$

Using skew-symmetry once again (one flip gives a minus sign; two flips give a plus sign) and the identity rule, we obtain

 $\left|\begin{matrix}a_{1}&b_{1}&c_{1}\\ a_{2}&b_{2}&c_{2}\\ a_{3}&b_{3}&c_{3}\end{matrix}\right|=a_{1}b_{2}c_{3}+a_{2}b_{3}c_{1}+a_{3}b_{1}c_{2}-a_{1}b_{3}c_{2}-a_{2}b_{1}c_{3}-a_{3}b_{2}c_{1}.$

Notice that each of the 6 terms in the above expression involves an entry from every row and from every column; it’s a sudoku kind of thing. The choice of sign depends on the number of column flips that takes the corresponding list of standard vectors to the identity matrix: an even number of flips gives a $+$, an odd number a $-$.
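This sign rule is exactly the expansion over permutations. The following Python sketch computes a determinant by summing over all column permutations, with the sign determined by the number of flips (inversions), and checks the result against the six-term formula; the entries are made up:

```python
from itertools import permutations

def det_leibniz(m):
    # sum over all permutations of the columns, signed by parity
    n = len(m)
    total = 0
    for perm in permutations(range(n)):
        # parity: count the inversions ("flips" needed to sort the permutation)
        inversions = sum(1 for i in range(n) for j in range(i + 1, n) if perm[i] > perm[j])
        sign = -1 if inversions % 2 else 1
        prod = 1
        for row, col in enumerate(perm):
            prod *= m[row][col]
        total += sign * prod
    return total

a1, a2, a3 = 2, 1, 0   # made-up entries, named as in the 3x3 formula above
b1, b2, b3 = 1, 3, 1
c1, c2, c3 = 0, 2, 4
M = [[a1, b1, c1], [a2, b2, c2], [a3, b3, c3]]
six_terms = (a1*b2*c3 + a2*b3*c1 + a3*b1*c2
             - a1*b3*c2 - a2*b1*c3 - a3*b2*c1)
print(det_leibniz(M) == six_terms)  # True
```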

## 5 Row operations.

Above, we observed that each term in the final expression selects one entry from every row and one from every column; in this sense, the formula treats rows and columns on an equal footing.

###### Theorem 1.

Let $A$ be a square matrix. Then, $\det(A)=\det(A^{T})$.

In other words, transposing a matrix does not alter the value of its determinant. One consequence of the row-column symmetry is that everything one can say about determinants in terms of columns, one can say using rows. In particular, we can state some useful facts about the effect of row operations on the value of a determinant.

Again, for the sake of simplicity we focus on the $3\times 3$ case. Let $A$ be a $3\times 3$ matrix, expressed as a column of row vectors. The hats above the symbols are there to remind us that we are dealing with row rather than column vectors.

 $A=\begin{bmatrix}a_{11}&a_{12}&a_{13}\\ a_{21}&a_{22}&a_{23}\\ a_{31}&a_{32}&a_{33}\end{bmatrix}=\begin{bmatrix}\hat{\mathbf{a}}_{1}\\ \hat{\mathbf{a}}_{2}\\ \hat{\mathbf{a}}_{3}\end{bmatrix};\qquad\hat{\mathbf{a}}_{1}=\begin{bmatrix}a_{11},a_{12},a_{13}\end{bmatrix},\quad\hat{\mathbf{a}}_{2}=\begin{bmatrix}a_{21},a_{22},a_{23}\end{bmatrix},\quad\hat{\mathbf{a}}_{3}=\begin{bmatrix}a_{31},a_{32},a_{33}\end{bmatrix}.$

Below we list the effect of each of the three types of elementary row operations on the determinant of a matrix, and provide an explanation.

**1.** Row replacements do not affect the value of the determinant. Consider, for example, the effect of the row replacement operation $R_{2}\to R_{2}+kR_{1}$. Here $k$ is a scalar and $E$ is the elementary matrix that encodes the row operation in question:

 $E=\begin{bmatrix}1&0&0\\ k&1&0\\ 0&0&1\end{bmatrix},\qquad\det(EA)=\left|\begin{matrix}\hat{\mathbf{a}}_{1}\\ \hat{\mathbf{a}}_{2}+k\hat{\mathbf{a}}_{1}\\ \hat{\mathbf{a}}_{3}\end{matrix}\right|=\left|\begin{matrix}\hat{\mathbf{a}}_{1}\\ \hat{\mathbf{a}}_{2}\\ \hat{\mathbf{a}}_{3}\end{matrix}\right|+k\left|\begin{matrix}\hat{\mathbf{a}}_{1}\\ \hat{\mathbf{a}}_{1}\\ \hat{\mathbf{a}}_{3}\end{matrix}\right|=\det(A).$
**2.** Row scaling operations scale the determinant by the same factor. Consider, for example, the operation $R_{2}\to kR_{2}$:

 $E=\begin{bmatrix}1&0&0\\ 0&k&0\\ 0&0&1\end{bmatrix},\qquad\det(EA)=\left|\begin{matrix}\hat{\mathbf{a}}_{1}\\ k\hat{\mathbf{a}}_{2}\\ \hat{\mathbf{a}}_{3}\end{matrix}\right|=k\left|\begin{matrix}\hat{\mathbf{a}}_{1}\\ \hat{\mathbf{a}}_{2}\\ \hat{\mathbf{a}}_{3}\end{matrix}\right|=k\det(A).$
**3.** A row exchange operation negates the determinant. Consider, for example, the exchange of rows $1$ and $3$:

 $E=\begin{bmatrix}0&0&1\\ 0&1&0\\ 1&0&0\end{bmatrix},\qquad\det(EA)=\left|\begin{matrix}\hat{\mathbf{a}}_{3}\\ \hat{\mathbf{a}}_{2}\\ \hat{\mathbf{a}}_{1}\end{matrix}\right|=-\left|\begin{matrix}\hat{\mathbf{a}}_{1}\\ \hat{\mathbf{a}}_{2}\\ \hat{\mathbf{a}}_{3}\end{matrix}\right|=-\det(A).$

Above, if we take $A$ to be the identity matrix, we obtain the value for the determinant of an elementary matrix. We summarize as follows.

###### Theorem 2.

Let $E$ be an elementary matrix. Then,

 $\det(EA)=\det(E)\det(A)$

where

 $\det(E)=\begin{cases}1&\text{if $E$ encodes a row replacement;}\\ k&\text{if $E$ encodes the scaling of a row by a factor $k\neq 0$;}\\ -1&\text{if $E$ encodes the exchange of two rows.}\end{cases}$
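Theorem 2 is easy to verify numerically. In the Python sketch below (made-up $3\times 3$ matrix; `det3` and `matmul` are helpers of ours), we multiply $A$ by one elementary matrix of each type:

```python
def det3(m):
    # six-term formula for a 3x3 matrix given as a list of rows
    return (m[0][0]*m[1][1]*m[2][2] + m[0][1]*m[1][2]*m[2][0] + m[0][2]*m[1][0]*m[2][1]
            - m[0][0]*m[1][2]*m[2][1] - m[0][1]*m[1][0]*m[2][2] - m[0][2]*m[1][1]*m[2][0])

def matmul(X, Y):
    # 3x3 matrix product
    return [[sum(X[i][t] * Y[t][j] for t in range(3)) for j in range(3)] for i in range(3)]

A = [[2, 1, 0], [1, 3, 2], [0, 1, 4]]
k = 3
E_replace = [[1, 0, 0], [k, 1, 0], [0, 0, 1]]   # R2 -> R2 + k R1
E_scale   = [[1, 0, 0], [0, k, 0], [0, 0, 1]]   # R2 -> k R2
E_swap    = [[0, 0, 1], [0, 1, 0], [1, 0, 0]]   # R1 <-> R3
for E in (E_replace, E_scale, E_swap):
    print(det3(matmul(E, A)) == det3(E) * det3(A))  # True for each type
```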

Of course, a sequence of elementary row operations can be used to transform every matrix into reduced echelon form. This is a useful observation, because it gives us an alternative method for computing determinants: we can compute the determinant of an $n\times n$ matrix $A$ by row reducing $A$ to reduced echelon form. Let’s summarize this process by writing

 $E_{k}\cdots E_{1}A=U,\qquad\det(E_{k})\cdots\det(E_{1})\det(A)=\det(U),$

where $E_{1},\ldots,E_{k}$ are the elementary matrices encoding the row operations, and where $U$ is a matrix in reduced echelon form. If $A$ is singular, then the bottom row of $U$ will consist of zeros, and hence $\det(U)=0$. Since the determinant of an elementary matrix is never zero, this implies that $\det(A)=0$ as well.

If $A$ is invertible, then $U$ is the identity matrix, and hence

 $\det(E_{k})\cdots\det(E_{1})\det(A)=1.$

Since the determinant of an elementary matrix is explicitly known (see Theorem 2), this gives us a way of calculating $\det(A)$. We are also in a position to prove some important theorems.
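This row-reduction method of computing determinants can be turned into a short program. The Python sketch below is a minimal illustration, not production code: it uses only row replacements and row exchanges (tracking the sign), so the determinant is the signed product of the pivots of the resulting triangular matrix; exact fractions avoid round-off.

```python
from fractions import Fraction

def det_by_elimination(rows):
    m = [[Fraction(x) for x in row] for row in rows]
    n = len(m)
    sign = 1
    for col in range(n):
        # find a pivot in this column; if none exists, the matrix is singular
        pivot = next((r for r in range(col, n) if m[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != col:
            m[col], m[pivot] = m[pivot], m[col]
            sign = -sign                          # each row exchange negates det
        for r in range(col + 1, n):
            factor = m[r][col] / m[col][col]
            m[r] = [x - factor * y for x, y in zip(m[r], m[col])]  # row replacement
    prod = Fraction(1)
    for i in range(n):
        prod *= m[i][i]       # det of a triangular matrix: product of the diagonal
    return sign * prod

print(det_by_elimination([[2, 1, 0], [1, 3, 2], [0, 1, 4]]))   # 16
print(det_by_elimination([[1, 2, 3], [2, 4, 6], [0, 1, 1]]))   # 0 (singular)
```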

###### Theorem 3.

Let $A$ be a square matrix. Then, $\det(A)=0$ if and only if $A$ is singular.

###### Theorem 4.

Let $A,B$ be $n\times n$ matrices. Then,

 $\det(AB)=\det(A)\det(B).$

Proof. As above, let $E_{1},\ldots,E_{k}$ be the elementary matrices that row reduce $A$ to reduced echelon form:

 $E_{k}\cdots E_{1}A=U.$

Above, we showed that

 $\det(UB)=\det(E_{k}\cdots E_{1}AB)=\det(E_{k})\cdots\det(E_{1})\det(AB).$

If $A$ is singular, then the bottom row of $U$ will be zero, and so will the bottom row of $UB$. Hence, in this case, $\det(AB)=0=\det(A)\det(B)$, since $\det(A)=0$. Suppose then that $A$ is invertible. This means that $U$ is the identity matrix, and hence

 $\det(B)=\det(E_{k})\cdots\det(E_{1})\det(AB).$

However,

 $\det(E_{k})\cdots\det(E_{1})=1/\det(A),$

and the desired conclusion follows.

###### Theorem 5.

Let $A$ be an invertible matrix. Then

 $\det(A^{-1})=1/\det(A).$

Proof. By the above theorem,

 $1=\det(I)=\det(AA^{-1})=\det(A)\det(A^{-1}).$

## 6 Cofactor expansion.

Cofactor expansion is another method for evaluating determinants. It organizes the computation of larger determinants, and can be useful in calculating the determinants of matrices containing zero entries. At this point, we introduce some useful jargon. Given an $n\times n$ matrix $A$, we call the $(n-1)\times(n-1)$ matrix obtained by deleting row $i$ and column $j$ the $ij$ minor of $A$, and denote it by $A_{ij}$. We also set

 $C_{ij}=(-1)^{i+j}\det(A_{ij})$

and call $C_{ij}$ the $ij$ signed cofactor of $A$. We are going to prove the following.

###### Theorem 6.

Consider a $3\times 3$ matrix

 $A=\begin{bmatrix}a_{11}&a_{12}&a_{13}\\ a_{21}&a_{22}&a_{23}\\ a_{31}&a_{32}&a_{33}\end{bmatrix}.$

The determinant of $A$ can be obtained by means of cofactor expansion along the first column:

 $\det(A)=a_{11}C_{11}+a_{21}C_{21}+a_{31}C_{31}=a_{11}\left|\begin{matrix}a_{22}&a_{23}\\ a_{32}&a_{33}\end{matrix}\right|-a_{21}\left|\begin{matrix}a_{12}&a_{13}\\ a_{32}&a_{33}\end{matrix}\right|+a_{31}\left|\begin{matrix}a_{12}&a_{13}\\ a_{22}&a_{23}\end{matrix}\right|;$

or, along the first row:

 $\det(A)=a_{11}C_{11}+a_{12}C_{12}+a_{13}C_{13}=a_{11}\left|\begin{matrix}a_{22}&a_{23}\\ a_{32}&a_{33}\end{matrix}\right|-a_{12}\left|\begin{matrix}a_{21}&a_{23}\\ a_{31}&a_{33}\end{matrix}\right|+a_{13}\left|\begin{matrix}a_{21}&a_{22}\\ a_{31}&a_{32}\end{matrix}\right|.$

More generally, the determinant of an $n\times n$ matrix can be obtained by cofactor expansion along any column $j$:

 $\det(A)=a_{1j}C_{1j}+a_{2j}C_{2j}+\cdots+a_{nj}C_{nj},\quad j=1,2,\ldots,n;$

or, along any row $i$:

 $\det(A)=a_{i1}C_{i1}+a_{i2}C_{i2}+\cdots+a_{in}C_{in},\quad i=1,2,\ldots,n.$
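Cofactor expansion along the first row translates directly into a recursive program. Here is a Python sketch (the helper names are ours; this is fine for small matrices, but the recursion does on the order of $n!$ work, so row reduction is preferable for large $n$):

```python
def minor(m, i, j):
    # delete row i and column j
    return [row[:j] + row[j+1:] for k, row in enumerate(m) if k != i]

def det_cofactor(m):
    # cofactor expansion along the first row
    n = len(m)
    if n == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det_cofactor(minor(m, 0, j)) for j in range(n))

A = [[2, 1, 0], [1, 3, 2], [0, 1, 4]]
print(det_cofactor(A))   # 16
B = [[1, 2, 0, 1], [0, 1, 3, 0], [2, 0, 1, 1], [1, 1, 0, 2]]
print(det_cofactor(B))   # 16
```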

The proof works by writing

 $A=\begin{bmatrix}\mathbf{a}_{1},\mathbf{a}_{2},\mathbf{a}_{3}\end{bmatrix},\quad\mathbf{a}_{1}=a_{11}\mathbf{e}_{1}+a_{21}\mathbf{e}_{2}+a_{31}\mathbf{e}_{3},$

and then using multi-linearity:

 $\left|\begin{matrix}a_{11}&a_{12}&a_{13}\\ a_{21}&a_{22}&a_{23}\\ a_{31}&a_{32}&a_{33}\end{matrix}\right|=\left|\begin{matrix}\mathbf{a}_{1},\mathbf{a}_{2},\mathbf{a}_{3}\end{matrix}\right|=a_{11}\left|\begin{matrix}\mathbf{e}_{1},\mathbf{a}_{2},\mathbf{a}_{3}\end{matrix}\right|+a_{21}\left|\begin{matrix}\mathbf{e}_{2},\mathbf{a}_{2},\mathbf{a}_{3}\end{matrix}\right|+a_{31}\left|\begin{matrix}\mathbf{e}_{3},\mathbf{a}_{2},\mathbf{a}_{3}\end{matrix}\right|$

 $=a_{11}\left|\begin{matrix}1&a_{12}&a_{13}\\ 0&a_{22}&a_{23}\\ 0&a_{32}&a_{33}\end{matrix}\right|+a_{21}\left|\begin{matrix}0&a_{12}&a_{13}\\ 1&a_{22}&a_{23}\\ 0&a_{32}&a_{33}\end{matrix}\right|+a_{31}\left|\begin{matrix}0&a_{12}&a_{13}\\ 0&a_{22}&a_{23}\\ 1&a_{32}&a_{33}\end{matrix}\right|$

 $=a_{11}\left|\begin{matrix}a_{22}&a_{23}\\ a_{32}&a_{33}\end{matrix}\right|-a_{21}\left|\begin{matrix}a_{12}&a_{13}\\ a_{32}&a_{33}\end{matrix}\right|+a_{31}\left|\begin{matrix}a_{12}&a_{13}\\ a_{22}&a_{23}\end{matrix}\right|=a_{11}C_{11}+a_{21}C_{21}+a_{31}C_{31}.$

The intermediate steps, namely

 $\left|\begin{matrix}\mathbf{e}_{1},\mathbf{a}_{2},\mathbf{a}_{3}\end{matrix}\right|=C_{11},\quad\left|\begin{matrix}\mathbf{e}_{2},\mathbf{a}_{2},\mathbf{a}_{3}\end{matrix}\right|=C_{21},\quad\left|\begin{matrix}\mathbf{e}_{3},\mathbf{a}_{2},\mathbf{a}_{3}\end{matrix}\right|=C_{31},$

need to be explained.

###### Theorem 7.

Let $A=\begin{bmatrix}\mathbf{a}_{1},\mathbf{a}_{2},\mathbf{a}_{3},\ldots,\mathbf{a}_{n}\end{bmatrix}$ be an $n\times n$ matrix. Then,

 $\left|\begin{matrix}\mathbf{e}_{i},\mathbf{a}_{2},\mathbf{a}_{3},\ldots,\mathbf{a}_{n}\end{matrix}\right|=C_{i1},\quad\left|\begin{matrix}\mathbf{a}_{1},\mathbf{e}_{i},\mathbf{a}_{3},\ldots,\mathbf{a}_{n}\end{matrix}\right|=C_{i2},\quad\left|\begin{matrix}\mathbf{a}_{1},\mathbf{a}_{2},\mathbf{e}_{i},\ldots,\mathbf{a}_{n}\end{matrix}\right|=C_{i3},\quad\text{etc.}$

Proof. Expanding $\left|\begin{matrix}\mathbf{e}_{1},\mathbf{a}_{2},\mathbf{a}_{3}\end{matrix}\right|$ we obtain 9 terms:

 $\left|\begin{matrix}\mathbf{e}_{1},\mathbf{a}_{2},\mathbf{a}_{3}\end{matrix}\right|=\left|\begin{matrix}\mathbf{e}_{1},\ a_{12}\mathbf{e}_{1}+a_{22}\mathbf{e}_{2}+a_{32}\mathbf{e}_{3},\ a_{13}\mathbf{e}_{1}+a_{23}\mathbf{e}_{2}+a_{33}\mathbf{e}_{3}\end{matrix}\right|.$

However, any term with a second occurrence of $\mathbf{e}_{1}$ vanishes, and so we end up evaluating a $2\times 2$ determinant:

 $\left|\begin{matrix}\mathbf{e}_{1},\mathbf{a}_{2},\mathbf{a}_{3}\end{matrix}\right|=\left|\begin{matrix}\mathbf{e}_{1},\ a_{22}\mathbf{e}_{2}+a_{32}\mathbf{e}_{3},\ a_{23}\mathbf{e}_{2}+a_{33}\mathbf{e}_{3}\end{matrix}\right|$

 $=a_{22}a_{33}\left|\begin{matrix}\mathbf{e}_{1},\mathbf{e}_{2},\mathbf{e}_{3}\end{matrix}\right|+a_{22}a_{23}\left|\begin{matrix}\mathbf{e}_{1},\mathbf{e}_{2},\mathbf{e}_{2}\end{matrix}\right|+a_{32}a_{23}\left|\begin{matrix}\mathbf{e}_{1},\mathbf{e}_{3},\mathbf{e}_{2}\end{matrix}\right|+a_{32}a_{33}\left|\begin{matrix}\mathbf{e}_{1},\mathbf{e}_{3},\mathbf{e}_{3}\end{matrix}\right|$

 $=\left|\begin{matrix}a_{22}&a_{23}\\ a_{32}&a_{33}\end{matrix}\right|=C_{11}.$

Similarly,

 $\left|\begin{matrix}\mathbf{e}_{2},\mathbf{a}_{2},\mathbf{a}_{3}\end{matrix}\right|=-\left|\begin{matrix}\mathbf{a}_{2},\mathbf{e}_{2},\mathbf{a}_{3}\end{matrix}\right|=-\left|\begin{matrix}a_{12}&0&a_{13}\\ a_{22}&1&a_{23}\\ a_{32}&0&a_{33}\end{matrix}\right|=-\left|\begin{matrix}a_{12}&a_{13}\\ a_{32}&a_{33}\end{matrix}\right|=C_{21},$

 $\left|\begin{matrix}\mathbf{e}_{3},\mathbf{a}_{2},\mathbf{a}_{3}\end{matrix}\right|=-\left|\begin{matrix}\mathbf{a}_{2},\mathbf{e}_{3},\mathbf{a}_{3}\end{matrix}\right|=+\left|\begin{matrix}\mathbf{a}_{2},\mathbf{a}_{3},\mathbf{e}_{3}\end{matrix}\right|=+\left|\begin{matrix}a_{12}&a_{13}&0\\ a_{22}&a_{23}&0\\ a_{32}&a_{33}&1\end{matrix}\right|=\left|\begin{matrix}a_{12}&a_{13}\\ a_{22}&a_{23}\end{matrix}\right|=C_{31}.$

This argument generalizes to matrices of arbitrary size.

Next, by way of example, let’s consider the expansion of a $3\times 3$ matrix along the 2nd column:

 $\left|\begin{matrix}a_{11}&a_{12}&a_{13}\\ a_{21}&a_{22}&a_{23}\\ a_{31}&a_{32}&a_{33}\end{matrix}\right|=\left|\begin{matrix}\mathbf{a}_{1},\mathbf{a}_{2},\mathbf{a}_{3}\end{matrix}\right|=\left|\begin{matrix}\mathbf{a}_{1},\ a_{12}\mathbf{e}_{1}+a_{22}\mathbf{e}_{2}+a_{32}\mathbf{e}_{3},\ \mathbf{a}_{3}\end{matrix}\right|$

 $=a_{12}\left|\begin{matrix}\mathbf{a}_{1},\mathbf{e}_{1},\mathbf{a}_{3}\end{matrix}\right|+a_{22}\left|\begin{matrix}\mathbf{a}_{1},\mathbf{e}_{2},\mathbf{a}_{3}\end{matrix}\right|+a_{32}\left|\begin{matrix}\mathbf{a}_{1},\mathbf{e}_{3},\mathbf{a}_{3}\end{matrix}\right|$

 $=a_{12}\left|\begin{matrix}a_{11}&1&a_{13}\\ a_{21}&0&a_{23}\\ a_{31}&0&a_{33}\end{matrix}\right|+a_{22}\left|\begin{matrix}a_{11}&0&a_{13}\\ a_{21}&1&a_{23}\\ a_{31}&0&a_{33}\end{matrix}\right|+a_{32}\left|\begin{matrix}a_{11}&0&a_{13}\\ a_{21}&0&a_{23}\\ a_{31}&1&a_{33}\end{matrix}\right|$

 $=-a_{12}\left|\begin{matrix}a_{21}&a_{23}\\ a_{31}&a_{33}\end{matrix}\right|+a_{22}\left|\begin{matrix}a_{11}&a_{13}\\ a_{31}&a_{33}\end{matrix}\right|-a_{32}\left|\begin{matrix}a_{11}&a_{13}\\ a_{21}&a_{23}\end{matrix}\right|=a_{12}C_{12}+a_{22}C_{22}+a_{32}C_{32}.$

The above argument generalizes to expansions along any column, and indeed to expansions of a matrix of arbitrary size. For example, for a $4\times 4$ matrix we write

 $\mathbf{a}_{1}=a_{11}\mathbf{e}_{1}+a_{21}\mathbf{e}_{2}+a_{31}\mathbf{e}_{3}+a_{41}\mathbf{e}_{4}$

and use multi-linearity to obtain

 $\left|\begin{matrix}a_{11}&a_{12}&a_{13}&a_{14}\\ a_{21}&a_{22}&a_{23}&a_{24}\\ a_{31}&a_{32}&a_{33}&a_{34}\\ a_{41}&a_{42}&a_{43}&a_{44}\end{matrix}\right|=\left|\begin{matrix}\mathbf{a}_{1},\mathbf{a}_{2},\mathbf{a}_{3},\mathbf{a}_{4}\end{matrix}\right|$

 $=a_{11}\left|\begin{matrix}\mathbf{e}_{1},\mathbf{a}_{2},\mathbf{a}_{3},\mathbf{a}_{4}\end{matrix}\right|+a_{21}\left|\begin{matrix}\mathbf{e}_{2},\mathbf{a}_{2},\mathbf{a}_{3},\mathbf{a}_{4}\end{matrix}\right|+a_{31}\left|\begin{matrix}\mathbf{e}_{3},\mathbf{a}_{2},\mathbf{a}_{3},\mathbf{a}_{4}\end{matrix}\right|+a_{41}\left|\begin{matrix}\mathbf{e}_{4},\mathbf{a}_{2},\mathbf{a}_{3},\mathbf{a}_{4}\end{matrix}\right|$

 $=a_{11}C_{11}+a_{21}C_{21}+a_{31}C_{31}+a_{41}C_{41}.$

Working with row vectors, the same argument also establishes the validity of cofactor expansion along rows. For example, here is the derivation of cofactor expansion along the 2nd row of a $3\times 3$ matrix:

 $\left|\begin{matrix}a_{11}&a_{12}&a_{13}\\ a_{21}&a_{22}&a_{23}\\ a_{31}&a_{32}&a_{33}\end{matrix}\right|=\left|\begin{matrix}\hat{\mathbf{a}}_{1}\\ \hat{\mathbf{a}}_{2}\\ \hat{\mathbf{a}}_{3}\end{matrix}\right|=\left|\begin{matrix}\hat{\mathbf{a}}_{1}\\ a_{21}\hat{\mathbf{e}}_{1}+a_{22}\hat{\mathbf{e}}_{2}+a_{23}\hat{\mathbf{e}}_{3}\\ \hat{\mathbf{a}}_{3}\end{matrix}\right|$

 $=a_{21}\left|\begin{matrix}\hat{\mathbf{a}}_{1}\\ \hat{\mathbf{e}}_{1}\\ \hat{\mathbf{a}}_{3}\end{matrix}\right|+a_{22}\left|\begin{matrix}\hat{\mathbf{a}}_{1}\\ \hat{\mathbf{e}}_{2}\\ \hat{\mathbf{a}}_{3}\end{matrix}\right|+a_{23}\left|\begin{matrix}\hat{\mathbf{a}}_{1}\\ \hat{\mathbf{e}}_{3}\\ \hat{\mathbf{a}}_{3}\end{matrix}\right|=a_{21}C_{21}+a_{22}C_{22}+a_{23}C_{23}.$

Here $\hat{\mathbf{e}}_{i}=\mathbf{e}_{i}^{T}$ denotes the $i$-th elementary row vector.

Here is a useful theorem about determinants that can be proved using cofactor expansions.

###### Theorem 8.

The determinant of an upper triangular matrix is the product of the diagonal entries.

Consider, for example, the determinant of the following $4\times 4$ upper triangular matrix; the stars indicate arbitrary entries. We repeatedly use cofactor expansion along the first column. The multiple zeros mean that, each time, the cofactor expansion has only one term.

 $\left|\begin{matrix}a&*&*&*\\ 0&b&*&*\\ 0&0&c&*\\ 0&0&0&d\end{matrix}\right|=a\left|\begin{matrix}b&*&*\\ 0&c&*\\ 0&0&d\end{matrix}\right|=ab\left|\begin{matrix}c&*\\ 0&d\end{matrix}\right|=abcd.$
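Theorem 8 is easy to check numerically with a recursive first-column cofactor expansion; a Python sketch with made-up entries:

```python
def det_cofactor(m):
    # cofactor expansion along the first column
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** i * m[i][0] * det_cofactor([r[1:] for k, r in enumerate(m) if k != i])
               for i in range(len(m)))

U = [[3, 5, 1, 2], [0, 4, 7, 1], [0, 0, 2, 6], [0, 0, 0, 5]]
print(det_cofactor(U) == 3 * 4 * 2 * 5)  # True: product of the diagonal entries
```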

The cofactors $C_{ij}$ of an $n\times n$ matrix $A$ can be arranged into an $n\times n$ matrix, called $\operatorname{adj}A$, the adjugate of $A$. In the $3\times 3$ case we define

 $\operatorname{adj}A=\begin{bmatrix}C_{11}&C_{21}&C_{31}\\ C_{12}&C_{22}&C_{32}\\ C_{13}&C_{23}&C_{33}\end{bmatrix}.$

Note that the entries of $\operatorname{adj}A$ are indexed differently than the entries of $A$. For $A$, the entry in the $i$-th row and $j$-th column is denoted by $a_{ij}$. However, the cofactor $C_{ij}$ is placed in row $j$ and column $i$. Remarkably, the matrix of cofactors $\operatorname{adj}A$ is closely related to the inverse of $A$.

###### Theorem 9.

Let $A$ be an $n\times n$ matrix. Then, $A\operatorname{adj}A=(\operatorname{adj}A)A=\det(A)I$. Furthermore, if $A$ is invertible, then $A^{-1}=(1/\det(A))\operatorname{adj}A$.

Let’s consider the proof for the $3\times 3$ case. We aim to show that

 $\begin{bmatrix}C_{11}&C_{21}&C_{31}\\ C_{12}&C_{22}&C_{32}\\ C_{13}&C_{23}&C_{33}\end{bmatrix}\begin{bmatrix}a_{11}&a_{12}&a_{13}\\ a_{21}&a_{22}&a_{23}\\ a_{31}&a_{32}&a_{33}\end{bmatrix}=\begin{bmatrix}\det(A)&0&0\\ 0&\det(A)&0\\ 0&0&\det(A)\end{bmatrix}$

Writing $A=\begin{bmatrix}\mathbf{a}_{1},\mathbf{a}_{2},\mathbf{a}_{3}\end{bmatrix}$, and using the properties of the determinant together with Theorem 7, we compute

 $\det(A)=\left|\begin{matrix}\mathbf{a}_{1},\mathbf{a}_{2},\mathbf{a}_{3}\end{matrix}\right|=a_{11}\left|\begin{matrix}\mathbf{e}_{1},\mathbf{a}_{2},\mathbf{a}_{3}\end{matrix}\right|+a_{21}\left|\begin{matrix}\mathbf{e}_{2},\mathbf{a}_{2},\mathbf{a}_{3}\end{matrix}\right|+a_{31}\left|\begin{matrix}\mathbf{e}_{3},\mathbf{a}_{2},\mathbf{a}_{3}\end{matrix}\right|=a_{11}C_{11}+a_{21}C_{21}+a_{31}C_{31},$

 $0=\left|\begin{matrix}\mathbf{a}_{1},\mathbf{a}_{1},\mathbf{a}_{3}\end{matrix}\right|=a_{11}\left|\begin{matrix}\mathbf{a}_{1},\mathbf{e}_{1},\mathbf{a}_{3}\end{matrix}\right|+a_{21}\left|\begin{matrix}\mathbf{a}_{1},\mathbf{e}_{2},\mathbf{a}_{3}\end{matrix}\right|+a_{31}\left|\begin{matrix}\mathbf{a}_{1},\mathbf{e}_{3},\mathbf{a}_{3}\end{matrix}\right|=a_{11}C_{12}+a_{21}C_{22}+a_{31}C_{32},$

 $0=\left|\begin{matrix}\mathbf{a}_{1},\mathbf{a}_{2},\mathbf{a}_{1}\end{matrix}\right|=a_{11}\left|\begin{matrix}\mathbf{a}_{1},\mathbf{a}_{2},\mathbf{e}_{1}\end{matrix}\right|+a_{21}\left|\begin{matrix}\mathbf{a}_{1},\mathbf{a}_{2},\mathbf{e}_{2}\end{matrix}\right|+a_{31}\left|\begin{matrix}\mathbf{a}_{1},\mathbf{a}_{2},\mathbf{e}_{3}\end{matrix}\right|=a_{11}C_{13}+a_{21}C_{23}+a_{31}C_{33}.$

This gives us the first column of the multiplication. To obtain the second column, we observe

 $0=\left|\begin{matrix}\mathbf{a}_{2},\mathbf{a}_{2},\mathbf{a}_{3}\end{matrix}\right|=a_{12}\left|\begin{matrix}\mathbf{e}_{1},\mathbf{a}_{2},\mathbf{a}_{3}\end{matrix}\right|+a_{22}\left|\begin{matrix}\mathbf{e}_{2},\mathbf{a}_{2},\mathbf{a}_{3}\end{matrix}\right|+a_{32}\left|\begin{matrix}\mathbf{e}_{3},\mathbf{a}_{2},\mathbf{a}_{3}\end{matrix}\right|=a_{12}C_{11}+a_{22}C_{21}+a_{32}C_{31},$

 $\det(A)=\left|\begin{matrix}\mathbf{a}_{1},\mathbf{a}_{2},\mathbf{a}_{3}\end{matrix}\right|=a_{12}\left|\begin{matrix}\mathbf{a}_{1},\mathbf{e}_{1},\mathbf{a}_{3}\end{matrix}\right|+a_{22}\left|\begin{matrix}\mathbf{a}_{1},\mathbf{e}_{2},\mathbf{a}_{3}\end{matrix}\right|+a_{32}\left|\begin{matrix}\mathbf{a}_{1},\mathbf{e}_{3},\mathbf{a}_{3}\end{matrix}\right|=a_{12}C_{12}+a_{22}C_{22}+a_{32}C_{32},$

 $0=\left|\begin{matrix}\mathbf{a}_{1},\mathbf{a}_{2},\mathbf{a}_{2}\end{matrix}\right|=a_{12}\left|\begin{matrix}\mathbf{a}_{1},\mathbf{a}_{2},\mathbf{e}_{1}\end{matrix}\right|+a_{22}\left|\begin{matrix}\mathbf{a}_{1},\mathbf{a}_{2},\mathbf{e}_{2}\end{matrix}\right|+a_{32}\left|\begin{matrix}\mathbf{a}_{1},\mathbf{a}_{2},\mathbf{e}_{3}\end{matrix}\right|=a_{12}C_{13}+a_{22}C_{23}+a_{32}C_{33}.$

The values in the 3rd column are established in a similar fashion.
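The transposed indexing of the adjugate is easy to get wrong, so a numeric check is worthwhile. Here is a Python sketch with made-up entries (helper names are ours), verifying $(\operatorname{adj}A)A=\det(A)I$:

```python
def minor(m, i, j):
    # delete row i and column j
    return [row[:j] + row[j+1:] for k, row in enumerate(m) if k != i]

def det(m):
    # cofactor expansion along the first row
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det(minor(m, 0, j)) for j in range(len(m)))

def adjugate(m):
    n = len(m)
    # entry (i, j) of adj A is the cofactor C_{ji}: note the swap of indices
    return [[(-1) ** (i + j) * det(minor(m, j, i)) for j in range(n)] for i in range(n)]

A = [[2, 1, 0], [1, 3, 2], [0, 1, 4]]
d = det(A)
adjA = adjugate(A)
prod = [[sum(adjA[i][t] * A[t][j] for t in range(3)) for j in range(3)] for i in range(3)]
print(prod == [[d, 0, 0], [0, d, 0], [0, 0, d]])  # True
```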

## 7 Cramer’s rule.

We can also use determinants and cofactors to solve a linear system $A\mathbf{x}=\mathbf{b}$, where $A$ is an invertible, square matrix. Of course, if $A$ is invertible, then a solution exists and is unique; indeed, $\mathbf{x}=A^{-1}\mathbf{b}$. However, Cramer’s rule allows us to calculate $\mathbf{x}$ directly, without first calculating $A^{-1}$ and then performing a matrix-vector multiplication.

Let’s see how this works for the case of a $3\times 3$ matrix $A=\begin{bmatrix}\mathbf{a}_{1},\mathbf{a}_{2},\mathbf{a}_{3}\end{bmatrix}$. Given $\mathbf{b}\in\mathbb{R}^{3}$, we are searching for the $3$ numbers $x_{1},x_{2},x_{3}$ such that

 $\mathbf{b}=x_{1}\mathbf{a}_{1}+x_{2}\mathbf{a}_{2}+x_{3}\mathbf{a}_{3}.$

Substituting this into the following determinant and expanding produces a useful equation:

 $\left|\begin{matrix}\mathbf{b},\mathbf{a}_{2},\mathbf{a}_{3}\end{matrix}\right|=\left|\begin{matrix}x_{1}\mathbf{a}_{1}+x_{2}\mathbf{a}_{2}+x_{3}\mathbf{a}_{3},\ \mathbf{a}_{2},\mathbf{a}_{3}\end{matrix}\right|=x_{1}\left|\begin{matrix}\mathbf{a}_{1},\mathbf{a}_{2},\mathbf{a}_{3}\end{matrix}\right|+x_{2}\left|\begin{matrix}\mathbf{a}_{2},\mathbf{a}_{2},\mathbf{a}_{3}\end{matrix}\right|+x_{3}\left|\begin{matrix}\mathbf{a}_{3},\mathbf{a}_{2},\mathbf{a}_{3}\end{matrix}\right|=x_{1}\det(A)+0+0.$

Similarly

 $\left|\begin{matrix}\mathbf{a}_{1},\mathbf{b},\mathbf{a}_{3}\end{matrix}\right|=x_{1}\left|\begin{matrix}\mathbf{a}_{1},\mathbf{a}_{1},\mathbf{a}_{3}\end{matrix}\right|+x_{2}\left|\begin{matrix}\mathbf{a}_{1},\mathbf{a}_{2},\mathbf{a}_{3}\end{matrix}\right|+x_{3}\left|\begin{matrix}\mathbf{a}_{1},\mathbf{a}_{3},\mathbf{a}_{3}\end{matrix}\right|=x_{2}\det(A),$

 $\left|\begin{matrix}\mathbf{a}_{1},\mathbf{a}_{2},\mathbf{b}\end{matrix}\right|=x_{1}\left|\begin{matrix}\mathbf{a}_{1},\mathbf{a}_{2},\mathbf{a}_{1}\end{matrix}\right|+x_{2}\left|\begin{matrix}\mathbf{a}_{1},\mathbf{a}_{2},\mathbf{a}_{2}\end{matrix}\right|+x_{3}\left|\begin{matrix}\mathbf{a}_{1},\mathbf{a}_{2},\mathbf{a}_{3}\end{matrix}\right|=x_{3}\det(A).$

Therefore, the desired solution can be obtained as follows:

 $x_{1}=\frac{|\mathbf{b},\mathbf{a}_{2},\mathbf{a}_{3}|}{|\mathbf{a}_{1},\mathbf{a}_{2},\mathbf{a}_{3}|},\quad x_{2}=\frac{|\mathbf{a}_{1},\mathbf{b},\mathbf{a}_{3}|}{|\mathbf{a}_{1},\mathbf{a}_{2},\mathbf{a}_{3}|},\quad x_{3}=\frac{|\mathbf{a}_{1},\mathbf{a}_{2},\mathbf{b}|}{|\mathbf{a}_{1},\mathbf{a}_{2},\mathbf{a}_{3}|}.$

We generalize and summarize as follows.

###### Theorem 10.

Let $A$ be an invertible $n\times n$ matrix, and $\mathbf{b}\in\mathbb{R}^{n}$ a vector. Then the unique solution to the linear equation $A\mathbf{x}=\mathbf{b}$ is given by

 $x_{i}=\frac{\det A_{i}(\mathbf{b})}{\det A},\quad i=1,2,\ldots,n,$

where $A_{i}(\mathbf{b})$ denotes the matrix obtained by replacing column $i$ of $A$ with $\mathbf{b}$.
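Theorem 10 translates into a few lines of code. Here is a Python sketch for the $3\times 3$ case (made-up data; `cramer` and `det3` are helper names of ours), with the answer verified by substituting back into $A\mathbf{x}=\mathbf{b}$:

```python
from fractions import Fraction

def det3(m):
    # six-term formula for a 3x3 matrix given as a list of rows
    return (m[0][0]*m[1][1]*m[2][2] + m[0][1]*m[1][2]*m[2][0] + m[0][2]*m[1][0]*m[2][1]
            - m[0][0]*m[1][2]*m[2][1] - m[0][1]*m[1][0]*m[2][2] - m[0][2]*m[1][1]*m[2][0])

def cramer(A, b):
    d = det3(A)
    xs = []
    for i in range(3):
        Ai = [row[:i] + [b[r]] + row[i+1:] for r, row in enumerate(A)]  # A_i(b)
        xs.append(Fraction(det3(Ai), d))
    return xs

A = [[2, 1, 0], [1, 3, 2], [0, 1, 4]]
b = [3, 9, 9]
x = cramer(A, b)
# check by substituting back: A x should equal b
print([sum(Fraction(A[r][c]) * x[c] for c in range(3)) for r in range(3)]
      == [Fraction(v) for v in b])  # True
```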

*Author: rmilson. 2013-03-22. MSC classification: 15A15.*