Linear Algebra and Applications Assignment Help | MATH 221D Duke University Assignment


Assignment-daixie™ provides assignment writing, exam help, and tutoring services for Duke University's MATH 221D Linear Algebra and Applications!





Instructions:

Systems of linear equations and elementary row operations, Euclidean n-space and subspaces, linear transformations and matrix representations, Gram-Schmidt orthogonalization process, determinants, eigenvectors and eigenvalues; applications. Introduction to proofs. A gateway to more advanced math courses. Not open to students who have taken Mathematics 216 or 218. Prerequisite: Mathematics 122, 112L or 122L.


Problem 1.

If A is m × n and B is n× m, show that AB = 0 if and only if col B ⊆ null A.

Proof.

To show that AB = 0 if and only if col B ⊆ null A, we need to prove two things:

  1. If AB = 0, then col B ⊆ null A.
  2. If col B ⊆ null A, then AB = 0.

Here’s the proof for each direction:

  1. If AB = 0, then col B ⊆ null A.

Assume that AB = 0. We want to show that col B ⊆ null A.

Let b be any vector in col B. Then there exists a vector x in R^m such that Bx = b. (Since B is n × m, x is m × 1 and b is n × 1.)

Now compute Ab = A(Bx) = (AB)x = 0x = 0. Hence b is in null A. Since b was an arbitrary vector of col B, we conclude that col B ⊆ null A.

  2. If col B ⊆ null A, then AB = 0.

Assume that col B ⊆ null A. We want to show that AB = 0.

Let B have columns b_1, b_2, …, b_m. By hypothesis each b_j lies in null A, so Ab_j = 0 for j = 1, 2, …, m.

Since matrix multiplication acts column by column, column j of the product AB is Ab_j = 0. Thus every column of AB is the zero vector, which means AB = 0.

(Equivalently: for any c in R^m, (AB)c = A(Bc) = 0 because Bc lies in col B ⊆ null A, and a matrix that sends every vector to 0 is the zero matrix.)

Therefore, we have shown both directions of the “if and only if” statement, and we have proven that AB = 0 if and only if col B ⊆ null A.
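As a numerical sanity check, the forward direction can be verified on a concrete pair of matrices; this is a minimal NumPy sketch with a hypothetical A whose null space is easy to describe:

```python
import numpy as np

# Hypothetical 2x3 matrix A; the vector (1, 1, -1) lies in null A.
A = np.array([[1, 1, 2],
              [0, 2, 2]])

# B is 3x2 with both columns chosen inside null A,
# i.e. col B ⊆ null A by construction.
B = np.array([[1,  2],
              [1,  2],
              [-1, -2]])

# Each column of B is annihilated by A ...
for j in range(B.shape[1]):
    assert np.allclose(A @ B[:, j], 0)

# ... and therefore the product AB is the zero (2x2) matrix.
assert np.allclose(A @ B, np.zeros((2, 2)))
```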

Problem 2.

Let $V$ denote the set of all ordered pairs $(x, y)$ and define addition in $V$ as in $\mathbb{R}^2$. However, define a new scalar multiplication in $V$ by
$$
a(x, y)=(a y, a x)
$$
Determine if $V$ is a vector space with these operations.

Proof.

Solution. Axioms A1 to A5 are valid for $V$ because addition is defined exactly as in $\mathbb{R}^2$. Also $a(x, y)=(a y, a x)$ is again in $V$, so axiom $\mathrm{S} 1$ holds. To verify axiom $\mathrm{S} 2$, let $\mathbf{v}=(x, y)$ and $\mathbf{w}=\left(x_1, y_1\right)$ be typical elements in $V$ and compute
$$
\begin{aligned}
a(\mathbf{v}+\mathbf{w}) & =a\left(x+x_1, y+y_1\right)=\left(a\left(y+y_1\right), a\left(x+x_1\right)\right) \\
a \mathbf{v}+a \mathbf{w} & =(a y, a x)+\left(a y_1, a x_1\right)=\left(a y+a y_1, a x+a x_1\right)
\end{aligned}
$$
Because these are equal, axiom S2 holds. Similarly, the reader can verify that axiom $\mathrm{S} 3$ holds. However, axiom $\mathrm{S} 4$ fails because
$$
a(b(x, y))=a(b y, b x)=(a b x, a b y)
$$
need not equal $a b(x, y)=(a b y, a b x)$. Hence, $V$ is not a vector space. (In fact, axiom S5 also fails.)
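The failure of axioms S4 and S5 is easy to witness numerically. Below is a small sketch with a hypothetical helper `smul` implementing the twisted scalar multiplication:

```python
# Twisted scalar multiplication a(x, y) = (a*y, a*x) from Problem 2
# (the helper name `smul` is ours, not from the text).
def smul(a, v):
    x, y = v
    return (a * y, a * x)

v = (1.0, 2.0)
a, b = 2.0, 3.0

# a(b v) swaps the coordinates twice, landing on (ab*x, ab*y) ...
lhs = smul(a, smul(b, v))
# ... while (ab) v swaps once, giving (ab*y, ab*x).
rhs = smul(a * b, v)
assert lhs != rhs          # axiom S4 fails for this v

# Axiom S5 fails too: 1*v = (y, x), which is not v when x != y.
assert smul(1.0, v) != v
```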

Problem 3.

Let v denote a vector in a vector space V and let a denote a real number.

  1. 0v = 0.
  2. a0 = 0.
  3. If av = 0, then either a = 0 or v = 0.
  4. (−1)v = −v.

Proof.

  1. Observe that $0 \mathbf{v}+0 \mathbf{v}=(0+0) \mathbf{v}=0 \mathbf{v}=0 \mathbf{v}+\mathbf{0}$ where the first equality is by axiom $S 3$. It follows that $0 \mathbf{v}=\mathbf{0}$ by cancellation.
  2. The proof is similar to that of (1), and is left as Exercise 6.1.12(a).
  3. Assume that $a \mathbf{v}=\mathbf{0}$. If $a=0$, there is nothing to prove; if $a \neq 0$, we must show that $\mathbf{v}=\mathbf{0}$. But $a \neq 0$ means we can scalar-multiply the equation $a \mathbf{v}=\mathbf{0}$ by the scalar $\frac{1}{a}$. The result (using (2) and Axioms $\mathrm{S} 5$ and $\mathrm{S} 4$ ) is
    $$
    \mathbf{v}=1 \mathbf{v}=\left(\frac{1}{a} a\right) \mathbf{v}=\frac{1}{a}(a \mathbf{v})=\frac{1}{a} \mathbf{0}=\mathbf{0}
    $$
  4. We have $-\mathbf{v}+\mathbf{v}=\mathbf{0}$ by axiom $\mathrm{A} 5$. On the other hand,
    $$
    (-1) \mathbf{v}+\mathbf{v}=(-1) \mathbf{v}+1 \mathbf{v}=(-1+1) \mathbf{v}=0 \mathbf{v}=\mathbf{0}
    $$
    using (1) and axioms $S 5$ and $S 3$. Hence $(-1) \mathbf{v}+\mathbf{v}=-\mathbf{v}+\mathbf{v}$ (because both are equal to $\mathbf{0}$ ), so $(-1) \mathbf{v}=-\mathbf{v}$ by cancellation.

Axiom A3 ensures that the sum $\mathbf{u}+(\mathbf{v}+\mathbf{w})=(\mathbf{u}+\mathbf{v})+\mathbf{w}$ is the same however it is formed, and we write it simply as $\mathbf{u}+\mathbf{v}+\mathbf{w}$. Similarly, there are different ways to form any sum $\mathbf{v}_1+\mathbf{v}_2+\cdots+\mathbf{v}_n$, and Axiom A3 guarantees that they are all equal. Moreover, Axiom A2 shows that the order in which the vectors are written does not matter (for example: $\mathbf{u}+\mathbf{v}+\mathbf{w}+\mathbf{z}=\mathbf{z}+\mathbf{u}+\mathbf{w}+\mathbf{v}$ ).
Similarly, Axioms S2 and S3 extend. For example
$$
a(\mathbf{u}+\mathbf{v}+\mathbf{w})=a[\mathbf{u}+(\mathbf{v}+\mathbf{w})]=a \mathbf{u}+a(\mathbf{v}+\mathbf{w})=a \mathbf{u}+a \mathbf{v}+a \mathbf{w}
$$
for all $a, \mathbf{u}, \mathbf{v}$, and $\mathbf{w}$. Similarly $(a+b+c) \mathbf{v}=a \mathbf{v}+b \mathbf{v}+c \mathbf{v}$ holds for all values of $a, b, c$, and $\mathbf{v}$ (verify). More generally,
$$
\begin{aligned}
& a\left(\mathbf{v}_1+\mathbf{v}_2+\cdots+\mathbf{v}_n\right)=a \mathbf{v}_1+a \mathbf{v}_2+\cdots+a \mathbf{v}_n \\
& \left(a_1+a_2+\cdots+a_n\right) \mathbf{v}=a_1 \mathbf{v}+a_2 \mathbf{v}+\cdots+a_n \mathbf{v}
\end{aligned}
$$
hold for all $n \geq 1$, all numbers $a, a_1, \ldots, a_n$, and all vectors $\mathbf{v}, \mathbf{v}_1, \ldots, \mathbf{v}_n$. The verifications are by induction and are left to the reader. These facts, together with the axioms, Theorem 6.1.3, and the definition of subtraction, enable us to simplify expressions involving sums of scalar multiples of vectors by collecting like terms, expanding, and taking out common factors. This has been discussed for the vector space of matrices; the manipulations in an arbitrary vector space are carried out in the same way.
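The two extended laws above can be spot-checked numerically; here is a minimal sketch using hypothetical random vectors in $\mathbb{R}^3$:

```python
import numpy as np

rng = np.random.default_rng(0)
a = 2.5
vs = [rng.standard_normal(3) for _ in range(4)]  # v1..v4 in R^3

# a(v1 + v2 + ... + vn) equals a*v1 + a*v2 + ... + a*vn
lhs = a * sum(vs)
rhs = sum(a * v for v in vs)
assert np.allclose(lhs, rhs)

# (a1 + a2 + ... + an) v equals a1*v + a2*v + ... + an*v
coeffs = [1.0, -2.0, 0.5]
v = vs[0]
assert np.allclose(sum(coeffs) * v, sum(c * v for c in coeffs))
```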

This is a successful 2023 case of assignment help for Duke University's MATH 221D Linear Algebra and Applications.

Linear Algebra and Applications Assignment Help | MATH 221 Duke University Assignment


Assignment-daixie™ provides assignment writing, exam help, and tutoring services for Duke University's MATH 221 Linear Algebra and Applications!





Instructions:

Fundamental notions of vector space theory, linear independence, basis, span, scalar product, orthogonal bases. Includes a survey of matrix algebra, solution of systems of linear equations, rank, kernel, eigenvalues and eigenvectors, the LU- and QR-factorizations, and least squares approximation. Selected applications in mathematics, science, engineering and business. Prereq: MATH 426. (Not offered for credit if credit is received for MATH 545 or MATH 762.)


Problem 1.

Given columns $\mathbf{a}_1, \mathbf{a}_2, \mathbf{a}_3$, and $\mathbf{a}_4$ in $\mathbb{R}^3$, write $2 \mathbf{a}_1-3 \mathbf{a}_2+5 \mathbf{a}_3+\mathbf{a}_4$ in the form $A \mathbf{x}$ where $A$ is a matrix and $\mathbf{x}$ is a vector.

Proof.

Solution. Here the column of coefficients is $\mathbf{x}=\left[\begin{array}{r}2 \\ -3 \\ 5 \\ 1\end{array}\right]$. Hence Definition $2.5$ gives
$$
A \mathbf{x}=2 \mathbf{a}_1-3 \mathbf{a}_2+5 \mathbf{a}_3+\mathbf{a}_4
$$
where $A=\left[\begin{array}{llll}\mathbf{a}_1 & \mathbf{a}_2 & \mathbf{a}_3 & \mathbf{a}_4\end{array}\right]$ is the matrix with $\mathbf{a}_1, \mathbf{a}_2, \mathbf{a}_3$, and $\mathbf{a}_4$ as its columns.
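This identity can be checked directly in NumPy; the columns $\mathbf{a}_1, \ldots, \mathbf{a}_4$ below are hypothetical choices, since the problem leaves them arbitrary:

```python
import numpy as np

# Hypothetical columns a1..a4 in R^3.
a1 = np.array([1.0, 0.0, 0.0])
a2 = np.array([0.0, 1.0, 0.0])
a3 = np.array([0.0, 0.0, 1.0])
a4 = np.array([1.0, 1.0, 1.0])

A = np.column_stack([a1, a2, a3, a4])  # A = [a1 a2 a3 a4], shape 3x4
x = np.array([2.0, -3.0, 5.0, 1.0])    # the column of coefficients

# A x reproduces the linear combination 2a1 - 3a2 + 5a3 + a4.
assert np.allclose(A @ x, 2*a1 - 3*a2 + 5*a3 + a4)
```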

Problem 2.

If $A$ is any matrix, then $I A=A$ and $A I=A$, where $I$ denotes an identity matrix of a size such that the multiplications are defined.

Proof.

Solution. These both follow from the dot product rule, as the reader should verify. For a more formal proof, write $A=\left[\begin{array}{llll}\mathbf{a}_1 & \mathbf{a}_2 & \cdots & \mathbf{a}_n\end{array}\right]$ where $\mathbf{a}_j$ is column $j$ of $A$. Then Definition $2.9$ gives
$$
I A=\left[\begin{array}{llll}
I \mathbf{a}_1 & I \mathbf{a}_2 & \cdots & I \mathbf{a}_n
\end{array}\right]=\left[\begin{array}{llll}
\mathbf{a}_1 & \mathbf{a}_2 & \cdots & \mathbf{a}_n
\end{array}\right]=A
$$
If $\mathbf{e}_j$ denotes column $j$ of $I$, then $A \mathbf{e}_j=\mathbf{a}_j$ for each $j$. Hence:
$$
A I=A\left[\begin{array}{llll}
\mathbf{e}_1 & \mathbf{e}_2 & \cdots & \mathbf{e}_n
\end{array}\right]=\left[\begin{array}{llll}
A \mathbf{e}_1 & A \mathbf{e}_2 & \cdots & A \mathbf{e}_n
\end{array}\right]=\left[\begin{array}{llll}
\mathbf{a}_1 & \mathbf{a}_2 & \cdots & \mathbf{a}_n
\end{array}\right]=A
$$
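Both identities, and the column-by-column view used in the proof, can be verified numerically; a sketch with an arbitrary $2 \times 3$ matrix:

```python
import numpy as np

A = np.arange(6.0).reshape(2, 3)  # an arbitrary 2x3 matrix

I2 = np.eye(2)  # identity sized for the product I A
I3 = np.eye(3)  # identity sized for the product A I

assert np.allclose(I2 @ A, A)  # I A = A
assert np.allclose(A @ I3, A)  # A I = A

# Column-by-column view from the proof: A e_j is column j of A.
for j in range(3):
    assert np.allclose(A @ I3[:, j], A[:, j])
```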

Problem 3.

If $A$ is an invertible matrix, show that the transpose $A^T$ is also invertible. Show further that the inverse of $A^T$ is just the transpose of $A^{-1}$; in symbols, $\left(A^T\right)^{-1}=\left(A^{-1}\right)^T$.

Proof.

Solution. $A^{-1}$ exists (by assumption). Its transpose $\left(A^{-1}\right)^T$ is the candidate proposed for the inverse of $A^T$. Using the inverse criterion, we test it as follows:
$$
\begin{aligned}
A^T\left(A^{-1}\right)^T & =\left(A^{-1} A\right)^T=I^T=I \\
\left(A^{-1}\right)^T A^T & =\left(A A^{-1}\right)^T=I^T=I
\end{aligned}
$$
Hence $\left(A^{-1}\right)^T$ is indeed the inverse of $A^T$; that is, $\left(A^T\right)^{-1}=\left(A^{-1}\right)^T$.
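The same inverse criterion can be tested numerically; a sketch with a hypothetical invertible $2 \times 2$ matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])   # invertible: det A = 1

lhs = np.linalg.inv(A.T)  # (A^T)^{-1}
rhs = np.linalg.inv(A).T  # (A^{-1})^T
assert np.allclose(lhs, rhs)

# The inverse criterion used in the proof: both products give I.
assert np.allclose(A.T @ rhs, np.eye(2))
assert np.allclose(rhs @ A.T, np.eye(2))
```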

This is a successful 2023 case of assignment help for Duke University's MATH 221 Linear Algebra and Applications.