Assignment-daixie™ provides homework, exam, and tutoring services for Duke University MATH 221D Linear Algebra and Applications!
Instructions:
Systems of linear equations and elementary row operations, Euclidean n-space and subspaces, linear transformations and matrix representations, Gram-Schmidt orthogonalization process, determinants, eigenvectors and eigenvalues; applications. Introduction to proofs. A gateway to more advanced math courses. Not open to students who have taken Mathematics 216 or 218. Prerequisite: Mathematics 122, 112L or 122L.
If A is m × n and B is n × m, show that AB = 0 if and only if col B ⊆ null A.
To show that AB = 0 if and only if col B ⊆ null A, we need to prove two things:
- If AB = 0, then col B ⊆ null A.
- If col B ⊆ null A, then AB = 0.
Here’s the proof for each direction:
- If AB = 0, then col B ⊆ null A.
Assume that AB = 0. We want to show that col B ⊆ null A.
Let b be any vector in col B. Then there exists a vector x in R^m such that Bx = b. Since B is n × m and x is m × 1, b is an n × 1 column vector, so the product Ab is defined.
Now compute Ab = A(Bx) = (AB)x = 0x = 0, using the assumption AB = 0.
Thus Ab = 0, which means b is in null A. Since b was an arbitrary vector in col B, we conclude that col B ⊆ null A.
- If col B ⊆ null A, then AB = 0.
Assume that col B ⊆ null A. We want to show that AB = 0.
Let B have columns b_1, b_2, …, b_m. Each b_j is in col B, so by assumption b_j is in null A; that is, Ab_j = 0 for every j = 1, 2, …, m.
Now recall that the j-th column of the product AB is Ab_j, the product of A with the j-th column of B. Hence
AB = [Ab_1 Ab_2 … Ab_m] = [0 0 … 0] = 0.
Equivalently, for any c in R^m, ABc = A(Bc) = 0 because Bc is in col B ⊆ null A.
Thus every column of AB is the zero vector, which means that AB = 0.
Therefore, we have shown both directions of the “if and only if” statement, and we have proven that AB = 0 if and only if col B ⊆ null A.
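The equivalence can be sanity-checked numerically. A minimal sketch with numpy (the matrices `A`, `N`, and `B` below are made-up examples, not from the problem):

```python
import numpy as np

rng = np.random.default_rng(0)

# A is m x n (here m=2, n=3) with rank 1, so null A is 2-dimensional.
A = np.array([[1., 2., 3.],
              [2., 4., 6.]])

# A basis for null A, found by hand: A @ v = 0 for each column v of N.
N = np.array([[-2., -3.],
              [ 1.,  0.],
              [ 0.,  1.]])
assert np.allclose(A @ N, 0)

# Any B whose columns are combinations of null-space vectors has col B ⊆ null A...
B = N @ rng.standard_normal((2, 2))   # B is n x m = 3 x 2

# ...and therefore satisfies AB = 0, as the proof predicts.
assert np.allclose(A @ B, 0)

# Conversely, replacing one column of B with a vector outside null A gives AB ≠ 0.
B_bad = B.copy()
B_bad[:, 0] = [1., 0., 0.]            # A @ [1,0,0] = [1,2] ≠ 0
assert not np.allclose(A @ B_bad, 0)
```

The check only exercises one example, of course; the proof above is what establishes the statement for all A and B of compatible sizes.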
Let $V$ denote the set of all ordered pairs $(x, y)$ and define addition in $V$ as in $\mathbb{R}^2$. However, define a new scalar multiplication in $V$ by
$$
a(x, y)=(a y, a x)
$$
Determine if $V$ is a vector space with these operations.
Solution. Axioms A1 to A5 are valid for $V$ because addition in $V$ is the usual addition of $\mathbb{R}^2$. Also $a(x, y)=(a y, a x)$ is again in $V$, so axiom $\mathrm{S} 1$ holds. To verify axiom $\mathrm{S} 2$, let $\mathbf{v}=(x, y)$ and $\mathbf{w}=\left(x_1, y_1\right)$ be typical elements in $V$ and compute
$$
\begin{aligned}
a(\mathbf{v}+\mathbf{w}) & =a\left(x+x_1, y+y_1\right)=\left(a\left(y+y_1\right), a\left(x+x_1\right)\right) \\
a \mathbf{v}+a \mathbf{w} & =(a y, a x)+\left(a y_1, a x_1\right)=\left(a y+a y_1, a x+a x_1\right)
\end{aligned}
$$
Because these are equal, axiom S2 holds. Similarly, the reader can verify that axiom $\mathrm{S} 3$ holds. However, axiom $\mathrm{S} 4$ fails because
$$
a(b(x, y))=a(b y, b x)=(a b x, a b y)
$$
need not equal $a b(x, y)=(a b y, a b x)$. Hence, $V$ is not a vector space. (In fact, axiom S5 also fails.)
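The failure of axioms S4 and S5 can be seen concretely. A small sketch, where `smul` is a hypothetical helper implementing the twisted scalar multiplication defined above:

```python
def smul(a, v):
    """The twisted scalar multiplication a(x, y) = (a*y, a*x)."""
    x, y = v
    return (a * y, a * x)

a, b, v = 2.0, 3.0, (1.0, 5.0)

# Axiom S4 would require a(bv) = (ab)v, but here:
lhs = smul(a, smul(b, v))   # a(b(x,y)) = a(by, bx) = (abx, aby) = (6.0, 30.0)
rhs = smul(a * b, v)        # (ab)(x,y) = (ab*y, ab*x)  = (30.0, 6.0)
assert lhs != rhs

# Axiom S5 would require 1v = v, but 1(x, y) = (y, x) swaps the coordinates:
assert smul(1.0, v) != v
```

Any pair with $x \neq y$ witnesses both failures, since the twisted product swaps coordinates an odd number of times.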
Let v denote a vector in a vector space V and let a denote a real number.
- 0v = 0.
- a0 = 0.
- If av = 0, then either a = 0 or v = 0.
- (−1)v = −v.
- Observe that $0 \mathbf{v}+0 \mathbf{v}=(0+0) \mathbf{v}=0 \mathbf{v}=0 \mathbf{v}+\mathbf{0}$ where the first equality is by axiom $S 3$. It follows that $0 \mathbf{v}=\mathbf{0}$ by cancellation.
- The proof is similar to that of (1), and is left as Exercise 6.1.12(a).
- Assume that $a \mathbf{v}=\mathbf{0}$. If $a=0$, there is nothing to prove; if $a \neq 0$, we must show that $\mathbf{v}=\mathbf{0}$. But $a \neq 0$ means we can scalar-multiply the equation $a \mathbf{v}=\mathbf{0}$ by the scalar $\frac{1}{a}$. The result (using (2) and Axioms $\mathrm{S} 5$ and $\mathrm{S} 4$ ) is
$$
\mathbf{v}=1 \mathbf{v}=\left(\frac{1}{a} a\right) \mathbf{v}=\frac{1}{a}(a \mathbf{v})=\frac{1}{a} \mathbf{0}=\mathbf{0}
$$

- We have $-\mathbf{v}+\mathbf{v}=\mathbf{0}$ by axiom $\mathrm{A} 5$. On the other hand,
$$
(-1) \mathbf{v}+\mathbf{v}=(-1) \mathbf{v}+1 \mathbf{v}=(-1+1) \mathbf{v}=0 \mathbf{v}=\mathbf{0}
$$
using (1) and axioms $S 5$ and $S 3$. Hence $(-1) \mathbf{v}+\mathbf{v}=-\mathbf{v}+\mathbf{v}$ (because both are equal to $\mathbf{0}$ ), so $(-1) \mathbf{v}=-\mathbf{v}$ by cancellation.
Axiom A3 ensures that the sum $\mathbf{u}+(\mathbf{v}+\mathbf{w})=(\mathbf{u}+\mathbf{v})+\mathbf{w}$ is the same however it is formed, and we write it simply as $\mathbf{u}+\mathbf{v}+\mathbf{w}$. Similarly, there are different ways to form any sum $\mathbf{v}_1+\mathbf{v}_2+\cdots+\mathbf{v}_n$, and Axiom A3 guarantees that they are all equal. Moreover, Axiom A2 shows that the order in which the vectors are written does not matter (for example: $\mathbf{u}+\mathbf{v}+\mathbf{w}+\mathbf{z}=\mathbf{z}+\mathbf{u}+\mathbf{w}+\mathbf{v}$ ).
Similarly, Axioms S2 and S3 extend. For example
$$
a(\mathbf{u}+\mathbf{v}+\mathbf{w})=a[\mathbf{u}+(\mathbf{v}+\mathbf{w})]=a \mathbf{u}+a(\mathbf{v}+\mathbf{w})=a \mathbf{u}+a \mathbf{v}+a \mathbf{w}
$$
for all $a, \mathbf{u}, \mathbf{v}$, and $\mathbf{w}$. Similarly $(a+b+c) \mathbf{v}=a \mathbf{v}+b \mathbf{v}+c \mathbf{v}$ holds for all values of $a, b, c$, and $\mathbf{v}$ (verify). More generally,
$$
\begin{aligned}
& a\left(\mathbf{v}_1+\mathbf{v}_2+\cdots+\mathbf{v}_n\right)=a \mathbf{v}_1+a \mathbf{v}_2+\cdots+a \mathbf{v}_n \\
& \left(a_1+a_2+\cdots+a_n\right) \mathbf{v}=a_1 \mathbf{v}+a_2 \mathbf{v}+\cdots+a_n \mathbf{v}
\end{aligned}
$$
hold for all $n \geq 1$, all numbers $a, a_1, \ldots, a_n$, and all vectors $\mathbf{v}, \mathbf{v}_1, \ldots, \mathbf{v}_n$. The verifications are by induction and are left to the reader. These facts, together with the axioms, Theorem 6.1.3, and the definition of subtraction, enable us to simplify expressions involving sums of scalar multiples of vectors by collecting like terms, expanding, and taking out common factors. This has been discussed for the vector space of matrices; the manipulations in an arbitrary vector space are carried out in the same way. Here is an illustration.
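As a quick numerical sanity check of the two extended distributive laws, here is a sketch in numpy (the particular scalars and vectors are arbitrary random choices, not from the text):

```python
import numpy as np

rng = np.random.default_rng(1)

a = 2.5
scalars = rng.standard_normal(4)                  # a_1, ..., a_n
vs = [rng.standard_normal(3) for _ in range(4)]   # v_1, ..., v_n in R^3
v = rng.standard_normal(3)

# a(v_1 + v_2 + ... + v_n) = a v_1 + a v_2 + ... + a v_n
assert np.allclose(a * sum(vs), sum(a * vi for vi in vs))

# (a_1 + a_2 + ... + a_n) v = a_1 v + a_2 v + ... + a_n v
assert np.allclose(scalars.sum() * v, sum(ai * v for ai in scalars))
```

A numerical check in $\mathbb{R}^3$ is no substitute for the inductive proofs, which hold in every vector space, but it illustrates the term-by-term expansion the text describes.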