# Linear Algebra Homework Help | LINEAR ALGEBRA MATHS2004 University of Glasgow Assignment

Assignment-daixieTM provides homework, exam, and tutoring services for LINEAR ALGEBRA MATHS2004 at the University of Glasgow!

## Instructions:

Linear algebra is a branch of mathematics that deals with vector spaces and linear equations, and it has many practical applications in fields such as physics, computer science, economics, and more.

Throughout this course, you can expect to learn about the basics of linear algebra, such as vectors, matrices, and systems of linear equations. You will also learn about more advanced topics, such as eigenvalues and eigenvectors, linear transformations, and inner products. By the end of the course, you should be able to apply these concepts to solve a variety of problems in different areas of science and engineering.

It’s great that this course is emphasizing methods and applications, as it will help you develop the skills you need to use linear algebra in real-world situations. This course will be particularly important for students who plan to pursue honors-level studies in these fields, as it will provide them with a strong foundation in linear algebra that they can build upon in future courses.

(b) Find the coefficients $C$ and $D$ of the best curve $y=C+D 2^t$.

$$\begin{gathered} A^T A=\left[\begin{array}{lll} 1 & 1 & 1 \\ 1 & 2 & 4 \end{array}\right]\left[\begin{array}{ll} 1 & 1 \\ 1 & 2 \\ 1 & 4 \end{array}\right]=\left[\begin{array}{lc} 3 & 7 \\ 7 & 21 \end{array}\right] \\ A^T b=\left[\begin{array}{lll} 1 & 1 & 1 \\ 1 & 2 & 4 \end{array}\right]\left[\begin{array}{l} 6 \\ 4 \\ 0 \end{array}\right]=\left[\begin{array}{l} 10 \\ 14 \end{array}\right] \end{gathered}$$
Solve $A^T A \hat{x}=A^T b$ :
$$\left[\begin{array}{cc} 3 & 7 \\ 7 & 21 \end{array}\right]\left[\begin{array}{l} C \\ D \end{array}\right]=\left[\begin{array}{l} 10 \\ 14 \end{array}\right] \text { gives }\left[\begin{array}{l} C \\ D \end{array}\right]=\frac{1}{14}\left[\begin{array}{rr} 21 & -7 \\ -7 & 3 \end{array}\right]\left[\begin{array}{l} 10 \\ 14 \end{array}\right]=\left[\begin{array}{r} 8 \\ -2 \end{array}\right]$$
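The normal-equations computation can be reproduced numerically; a minimal sketch assuming the data points $(t, y) = (0, 6), (1, 4), (2, 0)$ implied by $A$ and $b$:

```python
import numpy as np

# Data implied by the problem: y = 6, 4, 0 at t = 0, 1, 2; model y = C + D*2^t
t = np.array([0, 1, 2])
b = np.array([6.0, 4.0, 0.0])
A = np.column_stack([np.ones_like(t, dtype=float), 2.0**t])  # columns [1, 2^t]

# Solve the normal equations A^T A x = A^T b
x = np.linalg.solve(A.T @ A, A.T @ b)
print(x)  # -> C = 8, D = -2 (up to rounding)
```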

(a) Suppose $x_k$ is the fraction of MIT students who prefer calculus to linear algebra at year $k$. The remaining fraction $y_k=1-x_k$ prefers linear algebra.

At year $k+1,1 / 5$ of those who prefer calculus change their mind (possibly after taking 18.03). Also at year $k+1,1 / 10$ of those who prefer linear algebra change their mind (possibly because of this exam).

Create the matrix $A$ to give $\left[\begin{array}{l}x_{k+1} \\ y_{k+1}\end{array}\right]=A\left[\begin{array}{l}x_k \\ y_k\end{array}\right]$ and find the limit of $A^k\left[\begin{array}{l}1 \\ 0\end{array}\right]$ as $k \rightarrow \infty$.

$$A=\left[\begin{array}{ll} .8 & .1 \\ .2 & .9 \end{array}\right] .$$
The eigenvector with $\lambda=1$ is $\left[\begin{array}{l}1 / 3 \\ 2 / 3\end{array}\right]$.
This is the steady state starting from $\left[\begin{array}{l}1 \\ 0\end{array}\right]$.
$\frac{2}{3}$ of all students prefer linear algebra! I agree.
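The steady state can be seen by simply iterating the Markov matrix; a minimal sketch:

```python
import numpy as np

A = np.array([[0.8, 0.1],
              [0.2, 0.9]])
v = np.array([1.0, 0.0])  # everyone starts preferring calculus

# Iterate v_{k+1} = A v_k; the second eigenvalue 0.7 decays away
for _ in range(200):
    v = A @ v
print(v)  # approaches the steady state [1/3, 2/3]
```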

Solve these differential equations, starting from $x(0)=1, \quad y(0)=0$ :
$$\frac{d x}{d t}=3 x-4 y \quad \frac{d y}{d t}=2 x-3 y$$

$$A=\left[\begin{array}{ll} 3 & -4 \\ 2 & -3 \end{array}\right]$$
has eigenvalues $\lambda_1=1$ and $\lambda_2=-1$ with eigenvectors $x_1=(2,1)$ and $x_2=(1,1)$. The initial vector $(x(0), y(0))=(1,0)$ is $x_1-x_2$.
So the solution is $(x(t), y(t))=e^t(2,1)-e^{-t}(1,1)$; at $t=0$ this gives $(2,1)-(1,1)=(1,0)$, as required.
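Both the eigenvalues and the closed-form solution can be checked numerically; a sketch comparing the formula against a crude forward-Euler integration (an approximation, so only a few decimal places agree):

```python
import numpy as np

A = np.array([[3.0, -4.0],
              [2.0, -3.0]])
w, V = np.linalg.eig(A)
print(np.sort(w))  # eigenvalues -1 and 1

# Compare x(t) = e^t (2,1) - e^{-t} (1,1) at t = 0.5 against
# a small forward-Euler integration starting from (1, 0)
t_end, n = 0.5, 20000
u = np.array([1.0, 0.0])
dt = t_end / n
for _ in range(n):
    u = u + dt * (A @ u)
exact = np.exp(t_end) * np.array([2.0, 1.0]) - np.exp(-t_end) * np.array([1.0, 1.0])
print(u, exact)  # the two agree to a few decimal places
```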

# Linear Algebra Homework Help | Linear Algebra I MA1008 Cardiff University Assignment

Assignment-daixieTM provides homework, exam, and tutoring services for MA1008 Linear Algebra I at Cardiff University!

## Instructions:

Linear algebra is a branch of mathematics that deals with the study of vector spaces, linear transformations, and matrices. It is widely used in various fields of science and engineering, including physics, computer science, and economics.

The concept of a vector space is fundamental to linear algebra. A vector space is a collection of vectors that satisfy certain axioms, such as closure under addition and scalar multiplication. Examples of vector spaces include the set of real numbers, the set of polynomials, and the set of functions.

Subspaces are subsets of vector spaces that are also vector spaces in their own right. They are important because they allow us to focus on smaller, more manageable subsets of a larger vector space.

Linear transformations are functions that preserve the structure of a vector space. They are also known as linear maps or linear operators. Examples of linear transformations include rotations, reflections, and scaling.

Linear combinations are a way of combining vectors using scalar multiplication and addition. Spanning sets are sets of vectors that can be used to generate all the vectors in a vector space through linear combinations. Linearly independent sets are sets of vectors that cannot be generated through linear combinations of other vectors in the set.

Dimensionality is a measure of the size of a vector space. It is defined as the number of vectors in a basis of the vector space. A basis is a set of linearly independent vectors that span the vector space.

The final result that every finite-dimensional vector space can be identified with a set of finite-tuples of scalars is known as the coordinate representation theorem. It states that any vector in a finite-dimensional vector space can be uniquely represented by a finite sequence of scalars, which can be thought of as coordinates.

Overall, linear algebra is a powerful tool that enables us to study the properties of vectors and linear transformations in a systematic and rigorous way. It has many applications in various areas of mathematics and beyond.

Suppose $A v_i=b_i$ for the vectors $v_1, \ldots, v_n$ and $b_1, \ldots, b_n$ in $R^n$. Put the $v$'s into the columns of $V$ and put the $b$'s into the columns of $B$.
(a) Write those equations $A v_i=b_i$ in matrix form. What condition on which vectors allows $A$ to be determined uniquely? Assuming this condition, find $A$ from $V$ and $B$.

$A\left[v_1 \cdots v_n\right]=\left[b_1 \cdots b_n\right]$ or $A V=B$. Then $A=B V^{-1}$ if the $v$'s are independent.

(b) Describe the column space of that matrix $A$ in terms of the given vectors.

The column space of $A$ consists of all linear combinations of $b_1, \cdots, b_n$.

(c) What additional condition on which vectors makes $A$ an invertible matrix? Assuming this, find $A^{-1}$ from $V$ and $B$.

If the $b$'s are independent, then $B$ is invertible and $A^{-1}=V B^{-1}$.
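Both formulas are easy to verify numerically; a sketch using random matrices, whose columns are independent with probability one:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical example: independent v's and b's in R^3
V = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

A = B @ np.linalg.inv(V)       # A = B V^{-1}, since A V = B
A_inv = V @ np.linalg.inv(B)   # valid once the b's are independent too

print(np.allclose(A @ V, B))              # True
print(np.allclose(A @ A_inv, np.eye(3)))  # True
```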

# Linear Algebra Homework Help | Linear Algebra MATH1703 University of Plymouth Assignment

Assignment-daixieTM provides homework, exam, and tutoring services for MATH 1703 Linear Algebra at the University of Plymouth!

## Instructions:

Vectors and matrices are indeed fundamental concepts in mathematics and have wide-ranging applications in various fields, including statistics, physics, data science, and engineering. A vector is a quantity that has both magnitude and direction, while a matrix is a rectangular array of numbers or symbols arranged in rows and columns.

In linear algebra, the study of vectors and matrices is critical. Linear algebra deals with the algebraic properties of linear equations, linear mappings, and their representations in vector spaces and through matrices. It provides a powerful framework for modeling, analyzing, and solving problems that arise in many fields, including physics, engineering, economics, and computer science.

Vector spaces are mathematical structures that abstract the essential properties of vectors. They are defined as sets of vectors that satisfy certain axioms, including closure under addition and scalar multiplication. Linear transformations are mappings between vector spaces that preserve the structure of the vector space, and they are represented by matrices.

Analytic geometry is the branch of mathematics that deals with the study of geometry using algebraic techniques. The connection between vectors, matrices, and analytic geometry is fundamental, as vectors can be used to represent points and directions in space, and matrices can be used to transform and project them onto different coordinate systems.

In summary, the practical skills of handling vectors and matrices are essential in many applications, and their deep connections with linear spaces and analytic geometry make them a powerful tool in the study of mathematics and its applications.

A group of matrices includes $A B$ and $A^{-1}$ if it includes $A$ and $B$. “Products and inverses stay in the group.” Which of these sets are groups?
Lower triangular matrices $L$ with 1's on the diagonal, symmetric matrices $S$, positive matrices $M$, diagonal invertible matrices $D$, permutation matrices $P$, matrices with $Q^{\mathrm{T}}=Q^{-1}$. Invent two more matrix groups.

Yes, the lower triangular matrices $L$ with 1’s on the diagonal form a group. Clearly, the product of two is a third. Further, the Gauss-Jordan method shows that the inverse of one is another.

No, the symmetric matrices do not form a group. For example, here are two symmetric matrices $A$ and $B$ whose product $A B$ is not symmetric.
$$A=\left[\begin{array}{lll} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{array}\right], \quad B=\left[\begin{array}{lll} 1 & 2 & 3 \\ 2 & 4 & 5 \\ 3 & 5 & 6 \end{array}\right], \quad A B=\left[\begin{array}{lll} 2 & 4 & 5 \\ 1 & 2 & 3 \\ 3 & 5 & 6 \end{array}\right]$$
No, the positive matrices do not form a group. For example, $\left(\begin{array}{ll}1 & 1 \\ 0 & 1\end{array}\right)$ is positive, but its inverse $\left(\begin{array}{rr}1 & -1 \\ 0 & 1\end{array}\right)$ is not.
Yes, clearly, the diagonal invertible matrices form a group.
Yes, clearly, the permutation matrices form a group.
Yes, the matrices with $Q^{\mathrm{T}}=Q^{-1}$ form a group. Indeed, if $A$ and $B$ are two such matrices, then so are $A B$ and $A^{-1}$, as
$$(A B)^{\mathrm{T}}=B^{\mathrm{T}} A^{\mathrm{T}}=B^{-1} A^{-1}=(A B)^{-1} \quad \text { and }\left(A^{-1}\right)^{\mathrm{T}}=\left(A^{\mathrm{T}}\right)^{-1}=A^{-1} .$$
There are many more matrix groups. For example, given two, the block matrices $\left(\begin{array}{cc}A & 0 \\ 0 & B\end{array}\right)$ form a third as $A$ ranges over the first group and $B$ ranges over the second. Another example is the set of all products $c P$ where $c$ is a nonzero scalar and $P$ is a permutation matrix of given size.
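The closure claims for the orthogonal case can be spot-checked numerically; a minimal sketch using 2-by-2 rotation matrices, which satisfy $Q^{\mathrm{T}}=Q^{-1}$:

```python
import numpy as np

# Rotation matrices are orthogonal: Q^T = Q^{-1}
def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

A, B = rot(0.3), rot(1.1)
# Products and inverses stay in the group: M^T M = I for both
for M in (A @ B, np.linalg.inv(A)):
    print(np.allclose(M.T @ M, np.eye(2)))  # True
```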

Suppose $\mathbf{S}$ and $\mathbf{T}$ are two subspaces of a vector space $\mathbf{V}$.
(a) Definition: The sum $\mathbf{S}+\mathbf{T}$ contains all sums $\mathbf{s}+\mathbf{t}$ of a vector $\mathbf{s}$ in $\mathbf{S}$ and a vector $\mathbf{t}$ in $\mathbf{T}$. Show that $\mathbf{S}+\mathbf{T}$ satisfies the requirements (addition and scalar multiplication) for a vector space.
(b) If $\mathbf{S}$ and $\mathbf{T}$ are lines in $\mathbf{R}^m$, what is the difference between $\mathbf{S}+\mathbf{T}$ and $\mathbf{S} \cup \mathbf{T}$? That union contains all vectors from $\mathbf{S}$ or from $\mathbf{T}$ (or both). Explain this statement: The span of $\mathbf{S} \cup \mathbf{T}$ is $\mathbf{S}+\mathbf{T}$. (Section 3.5 returns to this word "span.")

(a) Let $\mathbf{s}, \mathbf{s}^{\prime}$ be vectors in $\mathbf{S}$, let $\mathbf{t}, \mathbf{t}^{\prime}$ be vectors in $\mathbf{T}$, and let $c$ be a scalar. Then
$$(\mathbf{s}+\mathbf{t})+\left(\mathbf{s}^{\prime}+\mathbf{t}^{\prime}\right)=\left(\mathbf{s}+\mathbf{s}^{\prime}\right)+\left(\mathbf{t}+\mathbf{t}^{\prime}\right) \quad \text { and } \quad c(\mathbf{s}+\mathbf{t})=c \mathbf{s}+c \mathbf{t}$$
Since $\mathbf{S}$ and $\mathbf{T}$ are subspaces, $\mathbf{s}+\mathbf{s}^{\prime}$ lies in $\mathbf{S}$, $\mathbf{t}+\mathbf{t}^{\prime}$ lies in $\mathbf{T}$, and likewise $c \mathbf{s}$ and $c \mathbf{t}$ lie in $\mathbf{S}$ and $\mathbf{T}$. Thus $\mathbf{S}+\mathbf{T}$ is closed under addition and scalar multiplication; in other words, it satisfies the two requirements for a vector space.
(b) If $\mathbf{S}$ and $\mathbf{T}$ are distinct lines, then $\mathbf{S}+\mathbf{T}$ is a plane, whereas $\mathbf{S} \cup \mathbf{T}$ is not even closed under addition. The span of $\mathbf{S} \cup \mathbf{T}$ is the set of all combinations of vectors in this union. In particular, it contains all sums $\mathbf{s}+\mathbf{t}$ of a vector $\mathbf{s}$ in $\mathbf{S}$ and a vector $\mathbf{t}$ in $\mathbf{T}$, and these sums form $\mathbf{S}+\mathbf{T}$. On the other hand, $\mathbf{S}+\mathbf{T}$ contains both $\mathbf{S}$ and $\mathbf{T}$; so it contains $\mathbf{S} \cup \mathbf{T}$. Further, $\mathbf{S}+\mathbf{T}$ is a vector space. So it contains all combinations of vectors in itself; in particular, it contains the span of $\mathbf{S} \cup \mathbf{T}$. Thus the span of $\mathbf{S} \cup \mathbf{T}$ is $\mathbf{S}+\mathbf{T}$.

Section 3.1. Problem 32: Show that the matrices $A$ and $[\,A \;\, AB\,]$ (with extra columns) have the same column space. But find a square matrix with $\mathbf{C}\left(A^2\right)$ smaller than $\mathbf{C}(A)$. Important point:
An $n$ by $n$ matrix has $\mathbf{C}(A)=\mathbf{R}^n$ exactly when $A$ is an ____ matrix.

Each column of $A B$ is a combination of the columns of $A$ (the combining coefficients are the entries in the corresponding column of $B$). So any combination of the columns of $[\,A \;\, AB\,]$ is a combination of the columns of $A$ alone. Thus $A$ and $[\,A \;\, AB\,]$ have the same column space.
Let $A=\left(\begin{array}{ll}0 & 1 \\ 0 & 0\end{array}\right)$. Then $A^2=0$, so $\mathbf{C}\left(A^2\right)=\mathbf{Z}$. But $\mathbf{C}(A)$ is the line through $\left(\begin{array}{l}1 \\ 0\end{array}\right)$.
An $n$ by $n$ matrix has $\mathbf{C}(A)=\mathbf{R}^n$ exactly when $A$ is an invertible matrix, because $A x=b$ is solvable for any given $b$ exactly when $A$ is invertible.
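The column-space claims can be illustrated with rank computations; a sketch where $B$ is an arbitrary matrix chosen for the example:

```python
import numpy as np

A = np.array([[0, 1],
              [0, 0]])
print(np.linalg.matrix_rank(A))      # 1: C(A) is the line through (1, 0)
print(np.linalg.matrix_rank(A @ A))  # 0: A^2 = 0, so C(A^2) = Z

# A and [A  AB] have the same column space: the ranks agree for any B
B = np.array([[2, 3],
              [5, 7]])
aug = np.hstack([A, A @ B])
print(np.linalg.matrix_rank(aug))    # 1, same as the rank of A
```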

# Linear Algebra Homework Help | Linear Algebra, Multivariable Calculus, and Modern Applications MATH 51ACE Stanford University Assignment

Assignment-daixieTM provides homework, exam, and tutoring services for MATH 51ACE Linear Algebra, Multivariable Calculus, and Modern Applications at Stanford University!

## Instructions:

Multivariable calculus is a branch of mathematics that deals with functions of several variables, also known as vector calculus. It extends the concepts of differential calculus and integral calculus to higher dimensions. In multivariable calculus, we study functions of two or more independent variables and their derivatives, integrals, and applications.

The key concepts in multivariable calculus include partial derivatives, gradients, directional derivatives, the chain rule, double and triple integrals, line integrals, surface integrals, and the divergence theorem and Stokes’ theorem.

Multivariable calculus has a wide range of applications in science, engineering, economics, and other fields. It is used to model physical phenomena, such as fluid flow and electromagnetism, to optimize systems, such as in finance and management, and to understand complex data, such as in machine learning and data science.

Prove or disprove: A system of linear equations is homogeneous if and only if the
system has the zero vector as a solution.

This is a true statement. A proof is:
($\Rightarrow$) Suppose we have a homogeneous system $\mathcal{LS}(A, \mathbf{0})$. Then by substituting the scalar zero for each variable, we arrive at true statements for each equation. So the zero vector is a solution. This is the content of Theorem HSC.
($\Leftarrow$) Suppose now that we have a generic (i.e. not necessarily homogeneous) system of equations, $\mathcal{LS}(A, \mathbf{b})$, that has the zero vector as a solution. Upon substituting this solution into the system, we discover that each component of $\mathbf{b}$ must also be zero. So $\mathbf{b}=\mathbf{0}$.

Find $\alpha$ and $\beta$ that solve the vector equation.
$$\alpha\left[\begin{array}{l} 2 \\ 1 \end{array}\right]+\beta\left[\begin{array}{l} 1 \\ 3 \end{array}\right]=\left[\begin{array}{l} 5 \\ 0 \end{array}\right]$$

Performing the indicated operations (Definition CVA, Definition CVSM), we obtain the vector equations
$$\left[\begin{array}{l} 5 \\ 0 \end{array}\right]=\alpha\left[\begin{array}{l} 2 \\ 1 \end{array}\right]+\beta\left[\begin{array}{l} 1 \\ 3 \end{array}\right]=\left[\begin{array}{l} 2 \alpha+\beta \\ \alpha+3 \beta \end{array}\right]$$
Since the entries of the vectors must be equal by Definition CVE, we obtain the system of equations
$$\begin{aligned} 2 \alpha+\beta &= 5 \\ \alpha+3 \beta &= 0 \end{aligned}$$
which we can solve by row-reducing the augmented matrix of the system,
$$\left[\begin{array}{ccc} 2 & 1 & 5 \\ 1 & 3 & 0 \end{array}\right] \stackrel{\text { RREF }}{\longrightarrow}\left[\begin{array}{ccc} 1 & 0 & 3 \\ 0 & 1 & -1 \end{array}\right]$$
Thus, the only solution is $\alpha=3, \beta=-1$.
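The same values come from solving the 2-by-2 system directly; a minimal sketch:

```python
import numpy as np

# Columns are the two given vectors (2,1) and (1,3)
M = np.array([[2.0, 1.0],
              [1.0, 3.0]])
rhs = np.array([5.0, 0.0])
alpha, beta = np.linalg.solve(M, rhs)
print(alpha, beta)  # 3.0 -1.0
```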

Suppose that $S=\left\{\left[\begin{array}{c}-1 \\ 2 \\ 1\end{array}\right],\left[\begin{array}{l}3 \\ 1 \\ 2\end{array}\right],\left[\begin{array}{l}1 \\ 5 \\ 4\end{array}\right],\left[\begin{array}{c}-6 \\ 5 \\ 1\end{array}\right]\right\}$. Let $W=\langle S\rangle$ and let $\mathbf{y}=\left[\begin{array}{c}-5 \\ 3 \\ 0\end{array}\right]$. Is $\mathbf{y} \in W$? If so, provide an explicit linear combination that demonstrates this.

Form a linear combination, with unknown scalars, of $S$ that equals $\mathbf{y}$,
$$a_1\left[\begin{array}{c} -1 \\ 2 \\ 1 \end{array}\right]+a_2\left[\begin{array}{l} 3 \\ 1 \\ 2 \end{array}\right]+a_3\left[\begin{array}{l} 1 \\ 5 \\ 4 \end{array}\right]+a_4\left[\begin{array}{c} -6 \\ 5 \\ 1 \end{array}\right]=\left[\begin{array}{c} -5 \\ 3 \\ 0 \end{array}\right]$$
We want to know if there are values for the scalars that make the vector equation true since that is the definition of membership in $\langle S\rangle$. By Theorem SLSLC any such values will also be solutions to the linear system represented by the augmented matrix,
$$\left[\begin{array}{ccccc} -1 & 3 & 1 & -6 & -5 \\ 2 & 1 & 5 & 5 & 3 \\ 1 & 2 & 4 & 1 & 0 \end{array}\right]$$
Row-reducing the matrix yields,
$$\left[\begin{array}{ccccc} 1 & 0 & 2 & 3 & 2 \\ 0 & 1 & 1 & -1 & -1 \\ 0 & 0 & 0 & 0 & 0 \end{array}\right]$$
From this we see that the system of equations is consistent (Theorem RCLS) and has infinitely many solutions. Any solution provides a linear combination of the vectors in $S$ that equals $\mathbf{y}$. So $\mathbf{y} \in W$; for example,
$$(-10)\left[\begin{array}{c} -1 \\ 2 \\ 1 \end{array}\right]+(-2)\left[\begin{array}{l} 3 \\ 1 \\ 2 \end{array}\right]+(3)\left[\begin{array}{l} 1 \\ 5 \\ 4 \end{array}\right]+(2)\left[\begin{array}{c} -6 \\ 5 \\ 1 \end{array}\right]=\left[\begin{array}{c} -5 \\ 3 \\ 0 \end{array}\right]$$
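Membership in the span can also be checked numerically; a sketch using a least-squares solve (which is exact here because the system is consistent):

```python
import numpy as np

# Columns are the four vectors of S; solve S a = y in the least-squares sense
S = np.array([[-1, 3, 1, -6],
              [ 2, 1, 5,  5],
              [ 1, 2, 4,  1]], dtype=float)
y = np.array([-5.0, 3.0, 0.0])

a, residual, rank, _ = np.linalg.lstsq(S, y, rcond=None)
print(np.allclose(S @ a, y))  # True: y lies in the span W

# The particular combination quoted in the solution also works
print(np.allclose(S @ np.array([-10.0, -2.0, 3.0, 2.0]), y))  # True
```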

# Linear Algebra Homework Help | Linear Algebra MATH 19620 University of Chicago Assignment

Assignment-daixieTM provides homework, exam, and tutoring services for MATH 19620 Linear Algebra at the University of Chicago!

## Instructions:

The Department of Mathematics offers a range of academic programs and courses to students interested in mathematics and applied mathematics at both undergraduate and graduate levels. At the undergraduate level, students can choose from a Bachelor of Arts (BA) and a Bachelor of Science (BS) degree in mathematics. The BS degree also offers specializations in applied mathematics and mathematics with a specialization in economics. Students in other fields of study may also choose to complete a minor in mathematics.

The BA in mathematics program is designed to provide students with a broad understanding of mathematics and its applications. It includes courses in calculus, algebra, geometry, and other mathematical areas. The program emphasizes critical thinking and problem-solving skills, which are essential for success in various fields such as finance, computer science, and engineering.

The BS degree in mathematics offers students an opportunity to delve deeper into the field. The program covers a broad range of topics, including calculus, differential equations, abstract algebra, and mathematical analysis. The applied mathematics specialization within the BS degree is designed for students who wish to apply mathematical concepts to real-world problems, such as in physics, engineering, and biology. The mathematics specialization in economics is designed for students who want to apply mathematical techniques to economic analysis.

Let $m$ and $n$ be positive integers with no common factor. Prove that if $\sqrt{m / n}$ is rational, then $m$ and $n$ are both perfect squares, that is to say there exist integers $p$ and $q$ such that $m=p^2$ and $n=q^2$. (This is proved in Proposition 9 of Book X of Euclid’s Elements).

Assume $\sqrt{m / n}$ is rational. Then there exist positive integers $M$ and $N$ with no common factor such that $\sqrt{m / n}=M / N$ and so $m N^2=n M^2$.
Claim: $M^2$ divides $m$ and $N^2$ divides $n$.
Assume the claim for now. Then
$$m=M^2 m^{\prime} \text { and } n=N^2 n^{\prime} \text { for some } m^{\prime} \text { and } n^{\prime} \text {. }$$
Substituting, we obtain $M^2 m^{\prime} N^2=N^2 n^{\prime} M^2$, which gives $m^{\prime}=n^{\prime}$. Since $m^{\prime}=n^{\prime}$ divides both $m$ and $n$, which have no common factor, $m^{\prime}=n^{\prime}=1$, and we have shown that $m$ and $n$ are perfect squares.

Proof of claim: We show that $M^2$ divides $m$; the argument that $N^2$ divides $n$ is identical. Write $M$ as a product of primes $p_1 \cdots p_r$ and note that no $p_i$ divides $N$. Assume inductively that $p_1^2 \cdots p_t^2$ divides $m$. Then
$$p_{t+1}^2 \;\Big|\; \frac{M^2}{p_1^2 \cdots p_t^2} \;\Big|\; \frac{m}{p_1^2 \cdots p_t^2} N^2$$
Since $p_{t+1}$ does not divide $N^2$ we see
$$p_{t+1}^2 \mid \frac{m}{p_1^2 \cdots p_t^2}, \text { which gives } p_1^2 \cdots p_{t+1}^2 \mid m \text {. }$$
The inductive hypothesis holds when $t=0$; the empty product is 1. Thus, by induction, $p_1^2 \cdots p_r^2=M^2$ divides $m$.
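The proposition can be tested computationally by reducing $m/n$ to lowest terms and checking for perfect squares; a minimal sketch (the helper name `sqrt_is_rational` is our own):

```python
from fractions import Fraction
from math import isqrt

def sqrt_is_rational(m: int, n: int) -> bool:
    """Check whether sqrt(m/n) is rational for positive integers m, n."""
    f = Fraction(m, n)  # Fraction reduces to lowest terms automatically
    p, q = f.numerator, f.denominator
    # sqrt(p/q) is rational exactly when both p and q are perfect squares
    return isqrt(p) ** 2 == p and isqrt(q) ** 2 == q

print(sqrt_is_rational(9, 4))   # True: sqrt(9/4) = 3/2
print(sqrt_is_rational(2, 1))   # False: sqrt(2) is irrational
print(sqrt_is_rational(18, 8))  # True: 18/8 reduces to 9/4 first
```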

Consider the subset of $\mathbb{R}$ defined by
$$\mathbb{Q}(\sqrt{2})=\{a+\sqrt{2}\, b: a, b \in \mathbb{Q}\}$$
with the usual addition and multiplication. Show that this is a field (you may use all properties of the real numbers).

We define
$$F=\mathbb{Q}(\sqrt{2})=\{a+\sqrt{2}\, b \mid a, b \in \mathbb{Q}\} \subset \mathbb{R}$$
We wish to show that $F$ is a subfield of $\mathbb{R}$. In order to show this, we need to show that a) $0,1 \in F$; b) $F$ is closed under addition and multiplication; and c) if $x \in F$ and $x \neq 0$, then $-x \in F$ and $1 / x \in F$. The commutative, associative, and distributive properties all follow from the corresponding properties of $\mathbb{R}$.
a), b), and the first half of c) are straightforward; we have $0=0+0 \sqrt{2} \in F$ and $1=1+0 \sqrt{2} \in F$. For b), we have
$$(a+b \sqrt{2})+(c+d \sqrt{2})=(a+c)+(b+d) \sqrt{2} \in F$$
and
$$(a+b \sqrt{2})(c+d \sqrt{2})=(a c+2 b d)+(a d+b c) \sqrt{2} \in F$$
If $x=a+b \sqrt{2}$, then $-x=(-a)+(-b) \sqrt{2} \in F$. So the only fact remaining to show is that $F$ is closed under multiplicative inverses.
To prove this, we need the following
Fact: if $0=a+b \sqrt{2} \in F$, then $a=b=0$

Proof: Suppose $b \neq 0$. Then $\sqrt{2}=-a / b \in \mathbb{Q}$, a contradiction. So we must have $b=0$, and then $0=a+0=a$.

Now take $x=a+b \sqrt{2} \in F, x \neq 0$. By the above fact, $a-b \sqrt{2}$ is also nonzero, and hence
$$a^2-2 b^2=(a+b \sqrt{2})(a-b \sqrt{2}) \neq 0$$
since the product of non-zero real numbers is non-zero.
So we can define $c=a /\left(a^2-2 b^2\right) \in \mathbb{Q}, d=-b /\left(a^2-2 b^2\right) \in \mathbb{Q}$, and $y=c+d \sqrt{2} \in F$. I claim that $x y=1$, so $y=1 / x$ and $F$ contains multiplicative inverses. Indeed,
$$(a+b \sqrt{2})(c+d \sqrt{2})=\frac{1}{a^2-2 b^2}(a+b \sqrt{2})(a-b \sqrt{2})=\frac{a^2-2 b^2}{a^2-2 b^2}=1$$
and we are done.
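The inverse construction can be verified with exact rational arithmetic; a minimal sketch (the helper name `inverse` is our own):

```python
from fractions import Fraction

def inverse(a: Fraction, b: Fraction):
    """Multiplicative inverse of a + b*sqrt(2) in Q(sqrt(2)), for (a, b) != (0, 0)."""
    norm = a * a - 2 * b * b  # nonzero since sqrt(2) is irrational
    return a / norm, -b / norm

a, b = Fraction(1), Fraction(1)  # x = 1 + sqrt(2)
c, d = inverse(a, b)
# x*y = (ac + 2bd) + (ad + bc) sqrt(2) should equal 1 + 0*sqrt(2)
print(a * c + 2 * b * d)  # 1
print(a * d + b * c)      # 0
```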

Let $(X, d)$ be a metric space. Show that $d^{\prime}(x, y)=\sqrt{d(x, y)}$ is also a metric on $X$, and that the open sets for $d^{\prime}$ are the same as the open sets for $d$.

We have a metric space $(X, d)$, and define the function $d^{\prime}(x, y)=\sqrt{d(x, y)}$. We wish to show that $\left(X, d^{\prime}\right)$ is also a metric space with the same open sets as $(X, d)$. We first check that $d^{\prime}$ is a metric.
(a) If $x \neq y$, then $d^{\prime}(x, y)=\sqrt{d(x, y)}>0$ since $d(x, y)>0$; similarly, $d^{\prime}(x, x)=0$.
(b) $d^{\prime}(x, y)=\sqrt{d(x, y)}=\sqrt{d(y, x)}=d^{\prime}(y, x)$
(c) For the triangle inequality, we first need the following elementary
Fact: If $a, b \geq 0$, then $\sqrt{a+b} \leq \sqrt{a}+\sqrt{b}$
Indeed, squaring the right hand side gives $a+b+2 \sqrt{a b} \geq a+b$, and the square root function is order preserving. Using this fact, for $x, y, z \in X$ we have
$$d^{\prime}(x, z)=\sqrt{d(x, z)} \leq \sqrt{d(x, y)+d(y, z)} \leq \sqrt{d(x, y)}+\sqrt{d(y, z)}=d^{\prime}(x, y)+d^{\prime}(y, z)$$
Now, let $E$ be an open set for $d$. We need to show that it is open for $d^{\prime}$. Let $x \in E$. Then there is some $r>0$ such that the ball of radius $r$ around $x$ is contained in $E$, where the ball is taken with respect to $d$, i.e. $N_r(x) \subset E$. But the ball of radius $r$ with respect to $d$ is the ball of radius $\sqrt{r}$ with respect to $d^{\prime}$, so there is a neighbourhood of $x$ with respect to $d^{\prime}$ contained in $E$. In other words, $E$ is open with respect to $d^{\prime}$. Similarly, a set that is open with respect to $d^{\prime}$ is also open with respect to $d$.
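The triangle inequality for $d^{\prime}$ can be spot-checked numerically; a sketch that assumes the Euclidean metric on $\mathbb{R}^3$ as the base metric $d$:

```python
import numpy as np

rng = np.random.default_rng(2)
d = lambda x, y: np.linalg.norm(x - y)  # base metric: Euclidean on R^3
dp = lambda x, y: np.sqrt(d(x, y))      # d' = sqrt(d)

# Check d'(x,z) <= d'(x,y) + d'(y,z) on many random triples
ok = True
for _ in range(1000):
    x, y, z = rng.standard_normal((3, 3))
    ok &= dp(x, z) <= dp(x, y) + dp(y, z) + 1e-12  # tolerance for rounding
print(ok)  # True on all sampled triples
```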

# Linear Algebra Homework Help | Linear Algebra, Multivariable Calculus, and Modern Applications MATH 51 Stanford University Assignment

Assignment-daixieTM provides homework, exam, and tutoring services for MATH 51 Linear Algebra, Multivariable Calculus, and Modern Applications at Stanford University!

## Instructions:

Multivariable calculus, also known as calculus of several variables, is a branch of calculus that deals with the study of functions of several variables. In contrast to single-variable calculus, where functions typically have only one independent variable, multivariable calculus involves functions that have more than one independent variable.

The fundamental concepts in multivariable calculus include partial derivatives, gradients, vector fields, line integrals, surface integrals, and volume integrals. These concepts are used to study a wide range of phenomena in mathematics, physics, engineering, and other scientific fields.

In multivariable calculus, one of the key ideas is that the behavior of a function of several variables can be analyzed by considering its behavior along various lines, curves, or surfaces. This leads to the concept of partial derivatives, which describe how a function changes when one of its variables is varied while holding all the other variables constant.

The gradient of a function is a vector that points in the direction of the steepest increase of the function and its magnitude gives the rate of increase in that direction. Vector fields, which assign a vector to each point in space, are used to model phenomena such as fluid flow, electric and magnetic fields, and gravitational fields.

Line integrals involve integrating a function along a curve, while surface integrals involve integrating a function over a surface. These concepts are used to calculate quantities such as work, flux, and circulation in a variety of physical systems.

Volume integrals involve integrating a function over a three-dimensional region and are used to calculate quantities such as mass, center of mass, and moments of inertia in physical systems.

Multivariable calculus is an essential tool in many areas of mathematics, physics, and engineering, and has applications in fields such as computer graphics, economics, and biology.

A parking lot has 66 vehicles (cars, trucks, motorcycles and bicycles) in it. There are four times as many cars as trucks. The total number of tires (4 per car or truck, 2 per motorcycle or bicycle) is 252. How many cars are there? How many bicycles?

Let $c, t, m, b$ denote the number of cars, trucks, motorcycles, and bicycles. Then the statements from the problem yield the equations:
$$\begin{aligned} c+t+m+b & =66 \\ c-4 t & =0 \\ 4 c+4 t+2 m+2 b & =252 \end{aligned}$$
We form the augmented matrix for this system and row-reduce
$$\left[\begin{array}{ccccc} 1 & 1 & 1 & 1 & 66 \\ 1 & -4 & 0 & 0 & 0 \\ 4 & 4 & 2 & 2 & 252 \end{array}\right] \stackrel{\text { RREF }}{\longrightarrow}\left[\begin{array}{ccccc} 1 & 0 & 0 & 0 & 48 \\ 0 & 1 & 0 & 0 & 12 \\ 0 & 0 & 1 & 1 & 6 \end{array}\right]$$
The first row of the matrix represents the equation $c=48$, so there are 48 cars. The second row of the matrix represents the equation $t=12$, so there are 12 trucks. The third row of the matrix represents the equation $m+b=6$, so there are anywhere from 0 to 6 bicycles. We can also say that $b$ is a free variable, but the context of the problem limits it to the 7 integer values $0, 1, \ldots, 6$, since neither the number of motorcycles nor the number of bicycles can be negative.
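The row reduction can be reproduced numerically; a sketch in which the least-squares solve simply picks one particular solution of the consistent, underdetermined system:

```python
import numpy as np

# c + t + m + b = 66,  c - 4t = 0,  4c + 4t + 2m + 2b = 252
A = np.array([[1, 1, 1, 1],
              [1, -4, 0, 0],
              [4, 4, 2, 2]], dtype=float)
rhs = np.array([66.0, 0.0, 252.0])

# 4 unknowns, 3 equations: lstsq returns the minimum-norm solution
sol, *_ = np.linalg.lstsq(A, rhs, rcond=None)
c, t, m, b = sol
print(round(c), round(t))  # 48 12: cars and trucks are pinned down
print(round(m + b))        # 6: only the sum m + b is determined
```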

A wholesaler supplies products to 10 retail stores, each of which will independently make an order on a given day with chance $0.35$. What is the probability of getting exactly 2 orders? Find the most probable number of orders per day and the probability of this number of orders. Find the expected number of orders per day.

Using the independence of orders the chance that only the first two stores place orders is $0.35^2 \cdot 0.65^8$. As there are $10 \times 9 / 2=45$ distinct pairs of stores that could order we have
$$P(X=2)=45 \cdot 0.35^2 \cdot 0.65^8 \approx 0.1757$$
A similar argument works for any number of orders. We say that the number of orders placed has the $\operatorname{Bin}(10,0.35)$ distribution. The formula for $x$ orders is
$$P(X=x)=\left(\begin{array}{c} 10 \\ x \end{array}\right) 0.35^x\, 0.65^{10-x}$$
The most probable number of orders is 3 (either calculate $P(X=x)$ for a few different $x$ values or look at binomial tables in a textbook) and $P(X=3)=120(0.35)^3(0.65)^7 \approx 0.2522$. The expected number of orders is
$$\sum_{x=0}^{10} x \cdot P(X=x)=1 \cdot 0.0725+2 \cdot 0.1757+3 \cdot 0.2522+4 \cdot 0.2377+\cdots$$
which (barring numerical errors) will give the same answer as the formula $E(X)=n p=10 \times 0.35=3.5$.
(The problem of which number is most likely for general $n$ and $p$ was not set, but it is not all that hard: show that $P(X=x+1) \geq P(X=x)$ exactly when $x+1 \leq(n+1) p$, and think about what that means.)
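All of the binomial computations above are easy to reproduce; a minimal sketch:

```python
from math import comb

n, p = 10, 0.35
# Full Bin(10, 0.35) probability mass function
pmf = [comb(n, x) * p**x * (1 - p)**(n - x) for x in range(n + 1)]

print(round(pmf[2], 4))           # 0.1757: probability of exactly 2 orders
mode = max(range(n + 1), key=lambda x: pmf[x])
print(mode, round(pmf[mode], 4))  # 3 0.2522: most probable number of orders
print(round(sum(x * pmf[x] for x in range(n + 1)), 4))  # 3.5 = n*p
```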

Consider the system of linear equations $\mathcal{LS}(A, \mathbf{b})$, and suppose that every element of the vector of constants $\mathbf{b}$ is a common multiple of the corresponding element of a certain column of $A$. More precisely, there is a complex number $\alpha$, and a column index $j$, such that $[\mathbf{b}]_i=\alpha[A]_{i j}$ for all $i$. Prove that the system is consistent.

The condition about the multiple of the column of constants will allow you to show that the following values form a solution of the system $\mathcal{LS}(A, \mathbf{b})$,
$$x_1=0 \quad x_2=0 \quad \ldots \quad x_{j-1}=0 \quad x_j=\alpha \quad x_{j+1}=0 \quad \ldots \quad x_{n-1}=0 \quad x_n=0$$
With one solution of the system known, we can say the system is consistent (Definition CS).
A more involved proof can be built using Theorem RCLS. Begin by proving that each of the three row operations (Definition RO) will convert the augmented matrix of the system into another matrix where column $j$ is $\alpha$ times the entry of the same row in the last column. In other words, the “column multiple property” is preserved under row operations. These proofs will get successively more involved as you work through the three operations.
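The solution exhibited in the first proof is easy to verify numerically; a sketch with a hypothetical random $A$ (the names `alpha` and `j` mirror the problem statement):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))
alpha, j = 2.5, 1          # b is alpha times column j of A
b = alpha * A[:, j]

# The solution picked out in the proof: x_j = alpha, all other x_i = 0
x = np.zeros(3)
x[j] = alpha
print(np.allclose(A @ x, b))  # True: the system is consistent
```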