Let [math]T:\mathbb{R}^n\to\mathbb{R}^m[/math] be a linear transformation. We say that [math]T[/math] is [b]injective[/b] (or [b]one-to-one[/b]) if for any vectors [math]u,v[/math] in [math]\mathbb{R}^n[/math], [math]T\left(u\right)=T\left(v\right)\Longrightarrow u=v[/math]. The following theorem gives us a convenient way to check the injectivity of a linear transformation.[br][br][u]Theorem[/u]: [math]T[/math] is injective if and only if [math]T(v)=0[/math] implies [math]v=0[/math].[br][br][u]Proof[/u]: [br]([math]\Rightarrow[/math]) Suppose [math]T[/math] is injective. Since [math]T(0)=0[/math], the equation [math]T(v)=0[/math] can be written as [math]T(v)=T(0)[/math], which implies [math]v=0[/math] because [math]T[/math] is injective.[br]([math]\Leftarrow[/math]) Suppose [math]T(v)=0[/math] implies [math]v=0[/math]. Let [math]u,v[/math] be vectors in [math]\mathbb{R}^n[/math] such that [math]T(u)=T(v)[/math]. By linearity, [math]T(u-v)=T(u)-T(v)=0[/math], which implies [math]u-v=0[/math]. In other words, [math]u=v[/math]. Hence, by definition, [math]T[/math] is injective.[br][br]The solution [math]x=0[/math] of the equation [math]T(x)=0[/math] (where [math]x[/math] is the unknown vector in the equation) is called the [b]trivial solution[/b]. The above theorem can then be rephrased as follows:[br][br][math]T[/math] is injective if and only if [math]T(x)=0[/math] has only the trivial solution.[br][br]This theorem can also be rephrased as follows: Let [math]A=\left[\mathbf{a}_1 \ \mathbf{a}_2 \ \cdots \ \mathbf{a}_n\right][/math] be the matrix for a linear transformation [math]T:\mathbb{R}^n\to\mathbb{R}^m[/math], where [math]\mathbf{a}_j[/math] is the [math]j^{\text{th}}[/math] column vector. Then [math]T[/math] is injective if and only if [math]\left\{\mathbf{a}_1,\mathbf{a}_2, \ldots, \mathbf{a}_n\right\}[/math] is linearly independent. (Why?)[br][br]
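The column-independence criterion can be checked numerically: the columns of [math]A[/math] are linearly independent exactly when the rank of [math]A[/math] equals the number of columns. A minimal sketch using NumPy (the example matrices here are made up for illustration):

```python
import numpy as np

def is_injective(A):
    # T(x) = A x is injective iff the columns of A are linearly
    # independent, i.e. rank(A) equals the number of columns n.
    return np.linalg.matrix_rank(A) == A.shape[1]

# Independent columns: A x = 0 has only the trivial solution.
A = np.array([[1, 0], [0, 1], [1, 1]])
# Dependent columns (second = 2 * first): B x = 0 for x = (2, -1).
B = np.array([[1, 2], [2, 4], [3, 6]])
print(is_injective(A), is_injective(B))  # True False
```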
Let [math]T:\mathbb{R}^n\to\mathbb{R}^m[/math] be a linear transformation. We say that [math]T[/math] is [b]surjective[/b] (or [b]onto[/b]) if for any vector [math]w[/math] in [math]\mathbb{R}^m[/math], there exists a vector [math]v[/math] in [math]\mathbb{R}^n[/math] such that [math]T(v)=w[/math].[br][br]As before, let [math]A=\left[\mathbf{a}_1 \ \mathbf{a}_2 \ \cdots \ \mathbf{a}_n\right][/math] be the matrix for a linear transformation [math]T:\mathbb{R}^n\to\mathbb{R}^m[/math]. Then the above definition is equivalent to saying that any vector [math]w[/math] in [math]\mathbb{R}^m[/math] is a linear combination of the column vectors of [math]A[/math]:[br][br][math]w=T(v)=T\left(\begin{pmatrix}v_1 \\ v_2 \\ \vdots \\ v_n\end{pmatrix}\right)=A\begin{pmatrix}v_1 \\ v_2 \\ \vdots \\ v_n\end{pmatrix}=v_1\mathbf{a}_1+v_2\mathbf{a}_2+\cdots+v_n\mathbf{a}_n[/math][br][br]In other words, [math]\text{Span}\left\{\mathbf{a}_1,\mathbf{a}_2,\ldots,\mathbf{a}_n\right\}=\mathbb{R}^m[/math].[br][br]For a set of vectors in [math]\mathbb{R}^m[/math] to span the whole of [math]\mathbb{R}^m[/math], there must be at least [math]m[/math] vectors in the set. Therefore, if a linear transformation [math]T:\mathbb{R}^n\to\mathbb{R}^m[/math] is surjective, then [math]m\leq n[/math].[br][br]
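The spanning condition also reduces to a rank check: the columns of [math]A[/math] span [math]\mathbb{R}^m[/math] exactly when the rank of [math]A[/math] equals the number of rows. A sketch in the same spirit as before (again with illustrative matrices):

```python
import numpy as np

def is_surjective(A):
    # T(x) = A x is surjective iff the columns of A span R^m,
    # i.e. rank(A) equals the number of rows m.
    return np.linalg.matrix_rank(A) == A.shape[0]

# The first two columns already span R^2: surjective.
A = np.array([[1, 0, 1], [0, 1, 1]])
# A single column cannot span R^2 (fewer than m vectors): not surjective.
B = np.array([[1], [2]])
print(is_surjective(A), is_surjective(B))  # True False
```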
If a linear transformation [math]T:\mathbb{R}^n\to\mathbb{R}^m[/math] is both injective and surjective, we say that it is [b]bijective[/b]. [br][br]Let [math]A=\left[\mathbf{a}_1 \ \mathbf{a}_2 \ \cdots \ \mathbf{a}_n\right][/math] be the matrix for a linear transformation [math]T:\mathbb{R}^n\to\mathbb{R}^m[/math]. Then [math]T[/math] is bijective if and only if [math]\text{Span}\left\{\mathbf{a}_1,\mathbf{a}_2,\ldots,\mathbf{a}_n\right\}=\mathbb{R}^m[/math] and [math]\left\{\mathbf{a}_1,\mathbf{a}_2,\ldots,\mathbf{a}_n\right\}[/math] is linearly independent, i.e. [math]\left\{\mathbf{a}_1,\mathbf{a}_2,\ldots,\mathbf{a}_n\right\}[/math] is a basis for [math]\mathbb{R}^m[/math]. Since any basis for [math]\mathbb{R}^m[/math] must have exactly [math]m[/math] vectors, [math]m=n[/math].
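Combining the two criteria gives a numerical test for bijectivity: [math]A[/math] must be square (so that [math]m=n[/math]) and of full rank, so that its columns form a basis. A sketch under the same assumptions as the earlier checks:

```python
import numpy as np

def is_bijective(A):
    # T is bijective iff the columns of A form a basis of R^m:
    # A must be square (m = n) and have full rank.
    m, n = A.shape
    return m == n and np.linalg.matrix_rank(A) == n

A = np.array([[1, 1], [0, 1]])          # columns form a basis of R^2
B = np.array([[1, 0], [0, 1], [1, 1]])  # 3x2, so m != n: cannot be bijective
print(is_bijective(A), is_bijective(B))  # True False
```

When the test passes, [math]A[/math] is invertible, and the inverse transformation is given by [math]x\mapsto A^{-1}x[/math] (e.g. via `np.linalg.inv`).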