Basis and Dimension

Basis
Analogous to the definition of a basis for [math]\mathbb{R}^n[/math], we can define a basis for a general vector space as follows:[br][br][u]Definition[/u]: A set of vectors [math]\left\{v_1,v_2,\ldots,v_p\right\}[/math] in [math]V[/math] is a [b]basis[/b] for [math]V[/math] if[br][list=1][*] [math]\left\{v_1,v_2,\ldots,v_p\right\}[/math] is a linearly independent set, and[/*][br][*][math]\text{Span}\left\{v_1,v_2,\ldots,v_p\right\}=V[/math].[br][/*][/list][br][u]Remark[/u]: The definition above still works if the set of vectors is infinite, but in this course we will focus mainly on vector spaces that have a finite basis.[br][br]There are several important theorems related to bases:[br][br][u]Theorem[/u]: Suppose a vector space [math]V[/math] can be spanned by [math]n[/math] vectors. If a set of [math]m[/math] vectors in [math]V[/math] is linearly independent, then [math]m\leq n[/math].[br][br]Proof: Suppose [math]V=\text{Span}\left\{v_1,v_2,\ldots,v_n\right\}[/math] and suppose that [math]\left\{u_1,u_2,\ldots,u_m\right\}[/math] is a linearly independent set in [math]V[/math]. Then[br][br][math]u_1=c_1v_1+c_2v_2+\cdots+c_nv_n[/math] for some real numbers [math]c_1,c_2,\ldots, c_n[/math].[br][br]Since [math]u_1\ne 0[/math] (it belongs to a linearly independent set), not all [math]c_i[/math] are zero. Relabelling the [math]v_i[/math] if necessary, assume [math]c_1\ne 0[/math]. Then we have[br][br][math]v_1=\frac1{c_1}u_1-\frac{c_2}{c_1}v_2-\cdots-\frac{c_n}{c_1}v_n[/math], so [math]u_1[/math] can replace [math]v_1[/math] in the spanning set, i.e. [math]V=\text{Span}\left\{u_1,v_2,\ldots,v_n\right\}[/math]. Then[br][br][math]u_2=b_1u_1+c_2v_2+\cdots+c_nv_n[/math] for some real numbers [math]b_1,c_2,\ldots, c_n[/math].[br][br]Since [math]\left\{u_1,u_2\right\}[/math] is linearly independent, [math]u_2[/math] is not a scalar multiple of [math]u_1[/math], so not all of [math]c_2,\ldots,c_n[/math] are zero. Relabelling again if necessary, assume [math]c_2\ne 0[/math]. Then, by a similar argument, [math]u_2[/math] can replace [math]v_2[/math] in the spanning set, i.e. [math]V=\text{Span}\left\{u_1,u_2,v_3, \ldots,v_n\right\}[/math].[br][br]Now assume [math]m>n[/math]. Repeating the above process, we can replace all the vectors [math]v_i[/math] by [math]u_1,u_2, \ldots, u_n[/math], so that [math]V=\text{Span}\left\{u_1,u_2, \ldots,u_n\right\}[/math]. In particular, [math]u_{n+1}[/math] is a linear combination of [math]u_1,u_2, \ldots,u_n[/math]. But this is impossible because [math]\left\{u_1,u_2,\ldots,u_m\right\}[/math] is a linearly independent set in [math]V[/math]. Therefore, [math]m\leq n[/math].[br]
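[u]Illustration[/u]: As a quick check of the definition, consider the set [math]\left\{1+t,1-t\right\}[/math] in [math]\mathbb{P}_1[/math]. If [math]c_1(1+t)+c_2(1-t)=0[/math] for all [math]t[/math], then [math]c_1+c_2=0[/math] and [math]c_1-c_2=0[/math], so [math]c_1=c_2=0[/math], i.e. the set is linearly independent. Moreover, any polynomial [math]a+bt[/math] can be written as [math]\frac{a+b}{2}(1+t)+\frac{a-b}{2}(1-t)[/math], so the set also spans [math]\mathbb{P}_1[/math]. Hence [math]\left\{1+t,1-t\right\}[/math] is a basis for [math]\mathbb{P}_1[/math].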
Dimension
If a vector space [math]V[/math] is spanned by a finite set of vectors, then [math]V[/math] is said to be [b]finite-dimensional[/b]. [br][br]Starting from that finite set, if a vector is a linear combination of the others, we can throw it away and the remaining vectors will still span [math]V[/math]. Repeat this process until no more vectors can be thrown away. At that point, no vector in the set is a linear combination of the others, i.e. the set is now linearly independent. Since it still spans [math]V[/math], the set of remaining vectors forms a basis for [math]V[/math].[br][br][u]Basis Theorem[/u]: Suppose [math]\left\{v_1,v_2,\ldots,v_n\right\}[/math] and [math]\left\{u_1,u_2,\ldots,u_m\right\}[/math] are both bases for [math]V[/math]. Then [math]m=n[/math].[br][br]Proof: By the above theorem, [math]m\le n[/math]. Interchanging the roles of the two bases and applying the theorem again, we have [math]n\le m[/math]. Hence, [math]m=n[/math].[br][br]Thanks to the Basis Theorem, the number of vectors in any basis for [math]V[/math] is the same. We define this number to be the [b]dimension[/b] of [math]V[/math], denoted by [math]\dim V[/math]. By convention, the dimension of the zero vector space is zero.[br][br]The dimension of a vector space can be interpreted as[br][list][*]the smallest number of vectors that can span the vector space, or[/*][*]the largest number of linearly independent vectors in the vector space.[/*][/list][br][br]Dimension also measures the "size" of a vector space: the greater the dimension, the larger the vector space. In particular, if [math]U[/math] is a subspace of [math]V[/math], we expect that [math]\dim U \leq \dim V[/math]. The following theorem confirms this:[br][br][u]Theorem[/u]: Let [math]V[/math] be a vector space such that [math]\dim V=n[/math] and let [math]U[/math] and [math]W[/math] be subspaces of [math]V[/math]. Then[br][list=1][*][math]U[/math] is finite-dimensional and [math]\dim U\leq n[/math].[br][/*][*]Any basis of [math]U[/math] is part of a basis for [math]V[/math].[/*][*]If [math]U\subset W[/math] and [math]\dim U=\dim W[/math], then [math]U=W[/math].[/*][/list][br](It will be proved in class.)[br]
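[u]Illustration[/u]: To see the discarding process in action, start with the spanning set [math]\left\{\begin{pmatrix}1\\0\end{pmatrix},\begin{pmatrix}0\\1\end{pmatrix},\begin{pmatrix}1\\1\end{pmatrix}\right\}[/math] of [math]\mathbb{R}^2[/math]. Since [math]\begin{pmatrix}1\\1\end{pmatrix}=\begin{pmatrix}1\\0\end{pmatrix}+\begin{pmatrix}0\\1\end{pmatrix}[/math], this vector can be thrown away without changing the span. The two remaining vectors are linearly independent and still span [math]\mathbb{R}^2[/math], so they form a basis, and [math]\dim \mathbb{R}^2=2[/math].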
Some Examples
[u]Example 1[/u]: [math]\left\{1,t,t^2,\ldots, t^n\right\}[/math] is a basis for [math]\mathbb{P}_n[/math], so [math]\dim \mathbb{P}_n=n+1[/math].[br][br][u]Example 2[/u]: Let [math]H=\left\{x\hat{\mathbf{i}}+y\hat{\mathbf{j}}+z\hat{\mathbf{k}}\in \mathbb{R}^3 \ | \ 2x+3y-4z=0\right\}[/math]. We already proved that [math]H[/math] is a subspace of [math]\mathbb{R}^3[/math]. Find a basis for [math]H[/math].[br][br]Express the equation [math]2x+3y-4z=0[/math] as the augmented matrix [math]\left(2 \ 3\ -4 \ | \ 0\right)[/math], which is already in echelon form, with [math]y[/math] and [math]z[/math] as free variables. Let [math]y=s,z=t[/math]. Then the solutions are [br][br][math]\begin{pmatrix}x\\y\\z\end{pmatrix}=\begin{pmatrix}-\frac32s+2t\\s\\t\end{pmatrix}=s\begin{pmatrix}-\frac32\\1\\0\end{pmatrix}+t\begin{pmatrix}2\\0\\1\end{pmatrix}[/math][br][br]Therefore, [math]H=\text{Span}\left\{\begin{pmatrix}-\frac32\\1\\0\end{pmatrix},\begin{pmatrix}2\\0\\1\end{pmatrix}\right\}[/math] and the two vectors are linearly independent (because neither is a scalar multiple of the other). In other words, [math]\left\{\begin{pmatrix}-\frac32\\1\\0\end{pmatrix},\begin{pmatrix}2\\0\\1\end{pmatrix}\right\}[/math] is a basis for [math]H[/math], and [math]\dim H=2[/math].[br][br][br][u]Example 3[/u]: [math]\left\{\begin{pmatrix}1&0\\0&0\end{pmatrix},\begin{pmatrix}0&1\\0&0\end{pmatrix},\begin{pmatrix}0&0\\1&0\end{pmatrix},\begin{pmatrix}0&0\\0&1\end{pmatrix}\right\}[/math] is a basis for [math]M_{2\times 2}[/math], so [math]\dim M_{2\times 2}=4[/math].[br][br][br][u]Example 4[/u]: For any [math]A[/math] in [math]M_{n\times n}[/math], we define the [b]trace[/b] of [math]A[/math], denoted by [math]\text{Tr}(A)[/math], to be the sum of the diagonal entries of [math]A[/math]. It can be shown that [math]V=\left\{A\in M_{n\times n} \ | \ \text{Tr}(A)=0\right\}[/math] is a subspace of [math]M_{n\times n}[/math]. Find a basis for [math]V[/math] when [math]n=3[/math] and hence determine [math]\dim(V)[/math].[br][br][u]Idea[/u]: There are 6 off-diagonal entries that can be chosen freely. As for the diagonal entries, two of them can be chosen freely and the remaining one is determined by the condition that the diagonal sum is zero. Therefore, it is expected that [math]\dim(V)=8[/math]. As for the basis, we will find it out in class.[br][br][br]
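[u]Illustration for Example 4[/u]: The same counting can be tried in the smaller case [math]n=2[/math]: the two off-diagonal entries are free, and one diagonal entry determines the other, so we expect [math]\dim(V)=3[/math] there. Indeed, every trace-zero matrix [math]\begin{pmatrix}a&b\\c&-a\end{pmatrix}[/math] is a unique linear combination of the three matrices in [math]\left\{\begin{pmatrix}1&0\\0&-1\end{pmatrix},\begin{pmatrix}0&1\\0&0\end{pmatrix},\begin{pmatrix}0&0\\1&0\end{pmatrix}\right\}[/math], so this set is one possible basis. The [math]n=3[/math] case follows the same pattern.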
Exercise
Let [math]W=\left\{p(t)\in \mathbb{P}_3 \ | \ p(1)=0\right\}[/math].[br][list=1][*]Show that [math]W[/math] is a subspace of [math]\mathbb{P}_3[/math].[/*][*]Prove that [math]\left\{(t-1),(t-1)^2,(t-1)^3\right\}[/math] is a basis for [math]W[/math].[/*][/list]
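[u]Hint[/u]: As a quick sanity check before writing the proof, note that [math]p(t)=t^2-1[/math] satisfies [math]p(1)=0[/math], so it lies in [math]W[/math], and indeed [math]t^2-1=(t-1)^2+2(t-1)[/math] is a linear combination of the proposed basis vectors. The proof should show that every [math]p(t)[/math] in [math]W[/math] can be written in this way and that the three polynomials are linearly independent.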