Let's summarize the properties of determinants that we have learned so far: Let [math]A,B[/math] be n x n matrices.[br][list=1][*][math]\det(I)=1[/math][br][/*][*]The determinant of a matrix is "linear" in each of the column vectors of the matrix i.e. [math]\det\left(\left[\ \cdots k\mathbf{a} \ \cdots \ \right]\right)=k\det\left(\left[\ \cdots \mathbf{a} \ \cdots \ \right]\right)[/math] and [math]\det\left(\left[\ \cdots \mathbf{a}+\mathbf{b} \ \cdots \ \right]\right)=\det\left(\left[\ \cdots \mathbf{a} \ \cdots \ \right]\right)+\det\left(\left[\ \cdots \mathbf{b} \ \cdots \ \right]\right)[/math].[/*][*][math]\det\left(\left[\ \cdots \mathbf{a} \ \cdots \ \mathbf{b} \ \cdots \ \right]\right)=-\det\left(\left[\ \cdots \mathbf{b} \ \cdots \ \mathbf{a} \ \cdots \ \right]\right)[/math][br][/*][*][math]\det(AB)=\det(A)\det(B)[/math][br][/*][/list][br]We can derive some more properties from the above:[br][list][*]If an n x n matrix has a column of zeros, then its determinant is zero because by (2), [math]\det\left(\left[\ \cdots \mathbf{0} \ \cdots \ \right]\right)=\det\left(\left[\ \cdots 0\mathbf{0} \ \cdots \ \right]\right)=0\cdot\det\left(\left[\ \cdots \mathbf{0} \ \cdots \ \right]\right)=0[/math].[/*][*]If an n x n matrix has two identical columns, then its determinant is zero because by (3), [math]\det\left(\left[\ \cdots \mathbf{a} \ \cdots \ \mathbf{a} \ \cdots \ \right]\right)=-\det\left(\left[\ \cdots \mathbf{a} \ \cdots \ \mathbf{a} \ \cdots \ \right]\right)[/math], which implies that [math]\det\left(\left[\ \cdots \mathbf{a} \ \cdots \ \mathbf{a} \ \cdots \ \right]\right)=0[/math].[/*][*]Let [math]A[/math] be an invertible matrix. Then [math]A^{-1}A=I[/math]. By (4) and (1), we have [math]\det(A^{-1})\det(A)=\det(A^{-1}A)=\det(I)=1[/math]. Hence [math]\det(A)\ne 0[/math] and [math]\det(A^{-1})=\frac 1{\det(A)}[/math].[br][/*][/list][br]Recall that the above properties can be used to derive the Leibniz formula for determinants:[br][br][math]\det(A)=\sum_{\sigma} \text{sgn}(\sigma)a_{1\sigma(1)}a_{2\sigma(2)}\cdots a_{n\sigma(n)}[/math][br][br]where the sum runs over all permutations [math]\sigma[/math] of [math]\{1,2,\ldots,n\}[/math]. Now consider [math]A^T[/math]. Since the [math](i,j)[/math]-entry of [math]A^T[/math] is [math]a_{ji}[/math], we have[br][br][math]\begin{eqnarray}\det(A^T) & = & \sum_{\sigma} \text{sgn}(\sigma)a_{\sigma(1)1}a_{\sigma(2)2}\cdots a_{\sigma(n)n} \\ & = & \sum_{\sigma} \text{sgn}(\sigma)a_{1\sigma^{-1}(1)}a_{2\sigma^{-1}(2)}\cdots a_{n\sigma^{-1}(n)}\\ & = & \sum_{\sigma^{-1}} \text{sgn}(\sigma^{-1})a_{1\sigma^{-1}(1)}a_{2\sigma^{-1}(2)}\cdots a_{n\sigma^{-1}(n)}\\ & = & \det(A)\end{eqnarray}[/math][br](Note: in the above derivation, we reindex the product by setting [math]j=\sigma(i)[/math], use the fact that [math]\text{sgn}(\sigma^{-1})=\text{sgn}(\sigma)[/math], and observe that summing over all [math]\sigma[/math] is the same as summing over all [math]\sigma^{-1}[/math].)[br][br]That is to say, any square matrix and its transpose have the same determinant.[br][br]
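To make this concrete, here is a minimal Python sketch of the Leibniz formula (the helper names sgn and leibniz_det are our own, and numpy is assumed to be available). Since the sum has [math]n![/math] terms, it is only practical for small [math]n[/math]; it also checks numerically that [math]\det(A^T)=\det(A)[/math].[br][br][code]
import itertools
import numpy as np

def sgn(perm):
    # Sign of a permutation (a tuple of 0-based indices), via inversion count.
    inversions = sum(1 for i in range(len(perm))
                     for j in range(i + 1, len(perm)) if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

def leibniz_det(A):
    # Leibniz formula: sum over all permutations sigma of
    # sgn(sigma) * a[0][sigma(0)] * ... * a[n-1][sigma(n-1)].
    n = len(A)
    total = 0
    for sigma in itertools.permutations(range(n)):
        term = sgn(sigma)
        for i in range(n):
            term *= A[i][sigma[i]]
        total += term
    return total

A = np.array([[2, 7, 1], [1, 4, -1], [1, 3, 0]])
print(leibniz_det(A))    # -2
print(leibniz_det(A.T))  # -2, consistent with det(A^T) = det(A)
[/code]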
We have already learned that row operations can be used for computing the inverse of a matrix. In fact, the same procedure can be used for computing the determinant of the matrix as well. First, let [math]E[/math] be an n x n elementary matrix. It is easy to see that[br][br][math]\det(E)=\begin{cases}1 \quad & \text{if} \ E \ \text{is a row replacement}\\ -1 \quad & \text{if} \ E \ \text{is a row interchange}\\ k \quad & \text{if} \ E \ \text{is a row scaling by} \ k\end{cases}[/math][br][br]Given an invertible matrix [math]A[/math], we use a sequence of row operations to transform it into the identity matrix. Let [math]E_1,E_2, \ldots,E_r[/math] be the corresponding elementary matrices. Then [math]E_r\cdots E_1A=I[/math], and by (4) and (1),[br][br][math]\det(E_r)\cdots\det(E_1)\det(A)=\det(I)=1[/math][br][math]\Rightarrow \det(A)=\frac1{\det(E_r)\cdots\det(E_1)}[/math][br][br]Recall that we used row operations to find the inverse of [math]A=\begin{pmatrix}2 & 7 & 1 \\ 1 & 4 & -1 \\ 1 & 3 & 0\end{pmatrix}[/math]. See [url=https://www.geogebra.org/m/rsamte2c#material/nmxhxpxz]here[/url] for details.[br]In the process, we used 1 row interchange, 6 row replacements and 2 row scalings (with factors [math]-1[/math] and [math]-\frac{1}{2}[/math]). Each row replacement contributes a factor of [math]1[/math], so we have[br][br][math]\det(A)=\frac1{(-1)(-1)(-\frac12)}=-2[/math][br][br][u]Remarks[/u]: [br][list][*]Since [math]\det(A^T)=\det(A)[/math] for any square matrix [math]A[/math], similar results are also valid for "[b]column operations[/b]".[br][/*][*]In fact, to compute the determinant of a matrix [math]A[/math], it is enough to transform it into an upper triangular matrix [math]R[/math]. Then [math]\det(A)=\frac{\text{Product of diagonal entries of} \ R}{\det(E_r)\cdots\det(E_1)}[/math]. A code sketch of this approach is given below.[/*][*][url=http://www.math.odu.edu/~bogacki/cgi-bin/lat.cgi?c=det]Here[/url] is the online tool for calculating determinants in the "Linear Algebra Toolkit" developed by P. Bogacki.[br][/*][/list]
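The procedure above can be turned into a short program. The following Python sketch (det_by_row_reduction is a hypothetical helper of our own, not a library routine) reduces [math]A[/math] to an upper triangular matrix [math]R[/math] using only row interchanges and row replacements, as in the second remark: each interchange multiplies the determinant by [math]-1[/math] and each replacement leaves it unchanged, so [math]\det(A)[/math] is the accumulated sign times the product of the diagonal entries of [math]R[/math].[br][br][code]
import numpy as np

def det_by_row_reduction(A):
    # Reduce A to upper triangular R using row interchanges (det factor -1)
    # and row replacements (det factor 1); then
    # det(A) = sign * (product of the diagonal entries of R).
    R = np.array(A, dtype=float)
    n = R.shape[0]
    sign = 1
    for j in range(n):
        # Find a pivot in column j at or below the diagonal.
        p = next((i for i in range(j, n) if R[i, j] != 0), None)
        if p is None:
            return 0.0  # no pivot in this column: A is non-invertible
        if p != j:
            R[[j, p]] = R[[p, j]]  # row interchange
            sign = -sign
        for i in range(j + 1, n):  # row replacements
            R[i] -= (R[i, j] / R[j, j]) * R[j]
    return sign * np.prod(np.diag(R))

A = [[2, 7, 1], [1, 4, -1], [1, 3, 0]]
print(det_by_row_reduction(A))  # -2.0, matching the computation above
[/code]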
If [math]A[/math] is a non-invertible matrix, what can you say about [math]\det(A)[/math]?
If [math]A[/math] is a non-invertible n x n matrix, it can be row reduced to a matrix [math]S[/math] in echelon form that has fewer than n pivot positions i.e. [math]S[/math] must have at least one row of all zeros. This implies that [math]\det(S)=0[/math], since [math]\det(S)=\det(S^T)[/math] and [math]S^T[/math] has a column of zeros. Suppose [math]E_1,E_2,\ldots,E_r[/math] are the elementary matrices corresponding to the row reduction steps. We have [math]\det(E_r)\cdots\det(E_1)\det(A)=\det(S)=0[/math]. Since [math]\det(E_i)\ne 0[/math] for every [math]i[/math], we conclude that [math]\det(A)=0[/math].[br][br]Combining this with the fact that [math]\det(A)\ne 0[/math] for any invertible matrix [math]A[/math], we have the following:[br][br][math]\det(A)\ne 0[/math] if and only if [math]A[/math] is invertible.
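As a quick numerical illustration (the matrix below is a made-up example, checked with numpy), a matrix whose third column is the sum of its first two columns has linearly dependent columns, hence fewer than n pivot positions, and so its determinant is zero:[br][br][code]
import numpy as np

# Made-up example: the third column is the sum of the first two,
# so the columns are linearly dependent and B is non-invertible.
B = np.array([[1, 2, 3],
              [4, 5, 9],
              [7, 8, 15]])
print(np.linalg.matrix_rank(B))  # 2, i.e. fewer than n = 3 pivot positions
print(np.linalg.det(B))          # 0 (up to floating-point round-off)
[/code]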