Linear Algebra Assignment & Exam Help


Inner Product Space Assignment Help

• Convex analysis
• Control theory
• Mathematical methods
• Optimization theory

The History of Linear Algebra

The procedure for solving simultaneous linear equations (using counting rods), now known as Gaussian elimination, appears in Chapter 8, "Rectangular Arrays," of the ancient Chinese mathematical text The Nine Chapters on the Mathematical Art. Its use is illustrated in eighteen problems involving two to five equations.

In Europe, systems of linear equations arose with the introduction of coordinates into geometry by René Descartes in 1637. Indeed, in this new geometry, now known as Cartesian geometry, lines and planes are represented by linear equations, and calculating their intersection is equivalent to solving a system of linear equations.

The first systematic approach to solving linear systems was the use of determinants, first considered by Leibniz in 1693; in 1750 Gabriel Cramer used them to give explicit solutions to linear systems, now known as Cramer's rule. Later, Gauss further developed the elimination method, which was initially presented as an advance in geodesy.

Linear Algebra Homework Example

If $X$ and $Y$ are vectors in $\mathbb{R}^{3}$, then
$$|X \cdot Y| \leq|X| \cdot|Y| .$$
Moreover if $X \neq 0$ and $Y \neq 0$, then
$$\begin{gathered} X \cdot Y=|X| \cdot|Y| \Leftrightarrow Y=t X,\; t>0, \\ X \cdot Y=-|X| \cdot|Y| \Leftrightarrow Y=t X,\; t<0 . \end{gathered}$$

Proof. If $X=0$, then inequality $8.3$ is trivially true. So assume $X \neq 0$. Now if $t$ is any real number, by equation $8.2$,
$$\begin{aligned} 0 \leq|t X-Y|^{2} &=|t X|^{2}-2(t X) \cdot Y+|Y|^{2} \\ &=t^{2}|X|^{2}-2(X \cdot Y) t+|Y|^{2} \\ &=a t^{2}-2 b t+c, \end{aligned}$$
where $a=|X|^{2}>0$, $b=X \cdot Y$, $c=|Y|^{2}$.
Hence
$$\begin{aligned} a\left(t^{2}-\frac{2 b}{a} t+\frac{c}{a}\right) & \geq 0, \\ \left(t-\frac{b}{a}\right)^{2}+\frac{c a-b^{2}}{a^{2}} & \geq 0 . \end{aligned}$$
Substituting $t=b / a$ in the last inequality then gives
$$\frac{a c-b^{2}}{a^{2}} \geq 0$$
so
$$|b| \leq \sqrt{a c}=\sqrt{a} \sqrt{c}$$
and hence inequality $8.3$ follows.
To discuss equality in the Cauchy-Schwarz inequality, assume $X \neq 0$ and $Y \neq 0$.
Then if $X \cdot Y=|X| \cdot|Y|$, we have for all $t$
$$\begin{aligned} |t X-Y|^{2} &=t^{2}|X|^{2}-2 t X \cdot Y+|Y|^{2} \\ &=t^{2}|X|^{2}-2 t|X| \cdot|Y|+|Y|^{2} \\ &=(t|X|-|Y|)^{2} . \end{aligned}$$
Taking $t=|Y| /|X|$ then gives $|t X-Y|^{2}=0$ and hence $t X-Y=0$. Hence $Y=t X$, where $t>0$. The case $X \cdot Y=-|X| \cdot|Y|$ is proved similarly.
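As a quick numerical sanity check of the inequality and its equality case, the sketch below tests random vectors in $\mathbb{R}^{3}$ in plain Python (the helper names `dot` and `norm` are ours, not from the text):

```python
import math
import random

def dot(x, y):
    """Standard dot product X . Y."""
    return sum(a * b for a, b in zip(x, y))

def norm(x):
    """Euclidean length |X|."""
    return math.sqrt(dot(x, x))

# Check |X . Y| <= |X| |Y| on many random vectors in R^3.
random.seed(0)
for _ in range(1000):
    X = [random.uniform(-10, 10) for _ in range(3)]
    Y = [random.uniform(-10, 10) for _ in range(3)]
    assert abs(dot(X, Y)) <= norm(X) * norm(Y) + 1e-9  # tolerance for rounding

# Equality case: Y = tX with t > 0 gives X . Y = |X| |Y|.
X = [1.0, 2.0, 3.0]
Y = [2.0, 4.0, 6.0]  # Y = 2X
assert math.isclose(dot(X, Y), norm(X) * norm(Y))
```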

Calculus Assignment & Exam Help


Calculus Assignment Help

Differential and Integral Calculus

• Convex analysis
• Control theory
• Mathematical methods
• Optimization theory

The History of Calculus

The infinitesimal calculus was developed independently by Isaac Newton and Gottfried Wilhelm Leibniz in the late 17th century. Later work, including the codification of limit ideas, put these developments on a firmer conceptual footing. Today, calculus has a wide range of applications in science, engineering and economics.

In mathematics education, calculus refers to a course in elementary mathematical analysis devoted primarily to the study of functions and limits. The word calculus is Latin for 'small pebble' (a diminutive of calx, meaning 'stone'). Because such pebbles were used to calculate distances, count votes and perform abacus arithmetic, the word came to imply a method of calculation. In this sense it was in use in English at least as early as 1672, a few years before Leibniz and Newton published their articles. In addition to differential and integral calculus, the term has also been used to name specific computational methods and related theories, such as propositional calculus, Ricci calculus, the calculus of variations, lambda calculus and process calculus.

Calculus Homework Example

1.1.1 Convergence and Divergence
In certain sequences the $n^{\text {th }}$ term comes closer and closer to a particular number as $n$ becomes larger and larger. For example, in the sequence $\left(\frac{1}{n}\right)$ the $n^{\text {th }}$ term comes closer and closer to $0$, whereas in $\left(\frac{n}{n+1}\right)$ the $n^{\text {th }}$ term comes closer and closer to $1$ as $n$ becomes larger and larger. If you look at the sequence $\left((-1)^{n}\right)$, the terms oscillate between $-1$ and $1$ as $n$ varies, whereas in $\left(n^{2}\right)$ the terms become larger and larger.
Now we make precise the statement "$a_{n}$ comes closer and closer to a number $a$ as $n$ becomes larger and larger", that is, "$a_{n}$ can be made arbitrarily close to $a$ by taking $n$ large enough", by defining the notion of convergence of a sequence.

Definition 1.1.2 A sequence $\left(a_{n}\right)$ of real numbers is said to converge to a real number $a$ if for every $\varepsilon>0$, there exists a positive integer $N$, that may depend on $\varepsilon$, such that
$$\left|a_{n}-a\right|<\varepsilon \quad \forall n \geq N .$$
A sequence that converges is called a convergent sequence, and a sequence that does not converge is called a divergent sequence.
Notation 1.1.2 (i) If $\left(a_{n}\right)$ converges to $a$, then we write
$$a_{n} \rightarrow a \text { as } n \rightarrow \infty$$
which we may read as "$a_{n}$ tends to $a$ as $n$ tends to infinity", and which we also write in short as $a_{n} \rightarrow a$.
(ii) If $\left(a_{n}\right)$ does not converge to $a$, then we write $a_{n} \nrightarrow a$.
Remark 1.1.3 We must keep in mind that the symbol $\infty$ is not a number; it is only a notation used in the context of describing some properties of real numbers, such as in Definition 1.1.2.

Remark 1.1.4 In Definition 1.1.2, the expression $\left|a_{n}-a\right|<\varepsilon$ can be replaced by $\left|a_{n}-a\right| \leq \varepsilon$ or by $\left|a_{n}-a\right| \leq c_{0} \varepsilon$ for some constant $c_{0}>0$. In other words, the following statements are equivalent.
(i) For every $\varepsilon>0$, there exists $N \in \mathbb{N}$ such that $\left|a_{n}-a\right|<\varepsilon$ for all $n \geq N$.
(ii) For every $\varepsilon>0$, there exists $N \in \mathbb{N}$ such that $\left|a_{n}-a\right| \leq \varepsilon$ for all $n \geq N$.
(iii) For some constant $c_{0}>0$: for every $\varepsilon>0$, there exists $N \in \mathbb{N}$ such that $\left|a_{n}-a\right| \leq c_{0} \varepsilon$ for all $n \geq N$.

Clearly, (i) implies (ii). To see (ii) implies (i), assume (ii) and let $\varepsilon>0$ be given. Then, by (ii), with $\varepsilon / 2$ in place of $\varepsilon$, there exists $N \in \mathbb{N}$ such that $\left|a_{n}-a\right| \leq \varepsilon / 2$ for all $n \geq N$. In particular, (i) holds. Now, (iii) follows from (i) by taking $c_{0} \varepsilon$ in place of $\varepsilon$, and (i) follows from (iii) by taking $\varepsilon / c_{0}$ in place of $\varepsilon$.
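The $\varepsilon$-$N$ definition can be illustrated numerically. The sketch below (our own illustration, not from the text) uses the sequence $a_{n}=\frac{n}{n+1} \rightarrow 1$, for which $\left|a_{n}-1\right|=\frac{1}{n+1}$ is decreasing, so $N=\lceil 1/\varepsilon\rceil$ is a valid choice of $N$; it also shows why $\left((-1)^{n}\right)$ has no limit:

```python
import math

def a(n):
    return n / (n + 1)

# For a_n = n/(n+1), |a_n - 1| = 1/(n+1) is decreasing in n,
# so N = ceil(1/eps) works: n >= N implies 1/(n+1) < eps.
for eps in (0.1, 0.01, 0.001):
    N = math.ceil(1 / eps)
    assert all(abs(a(n) - 1) < eps for n in range(N, N + 1000))

# The oscillating sequence (-1)^n stays at distance 2 from -1 or from 1
# infinitely often, so no candidate limit works with eps = 0.5.
b = lambda n: (-1) ** n
for candidate in (-1, 1):
    assert any(abs(b(n) - candidate) >= 0.5 for n in range(1, 100))
```

Of course, a finite check like `range(N, N + 1000)` cannot replace the proof; here it works because the error term is known to be monotone.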

Before further discussion on convergence of sequences, let us observe an important property of convergent sequences.

Discrete Mathematics Assignment & Exam Help


Discrete Mathematics Assignment Help

Abstract Algebra Assignment Help

• Convex analysis
• Control theory
• Mathematical methods
• Optimization theory

The History of Discrete Mathematics

Proposition (The Pigeonhole Principle). If $n m+1$ objects are placed into $n$ boxes, then some box contains more than $m$ objects.

Proof. Assume not. Then each box has at most $m$ objects, so the total number of objects is at most $n m$, a contradiction.
A few examples of its use may be helpful.
Example. In a sequence of at least $k l+1$ distinct numbers there is either an increasing subsequence of length at least $k+1$ or a decreasing subsequence of length at least $l+1$.
Solution. Let the sequence be $c_{1}, c_{2}, \ldots, c_{k l+1}$. For each position $i$, let $a_{i}$ be the length of the longest increasing subsequence starting with $c_{i}$, and let $d_{i}$ be the length of the longest decreasing subsequence starting with $c_{i}$. If $a_{i} \leq k$ and $d_{i} \leq l$ for every $i$, then there are only at most $k l$ distinct pairs $\left(a_{i}, d_{i}\right)$. Thus we have $a_{r}=a_{s}$ and $d_{r}=d_{s}$ for some $1 \leq r<s \leq k l+1$. This is impossible, for if $c_{r}<c_{s}$ then $a_{r}>a_{s}$, and if $c_{r}>c_{s}$ then $d_{r}>d_{s}$. Hence either some $a_{i}>k$ or some $d_{i}>l$.
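For small $k$ and $l$ the claim can be verified exhaustively. The sketch below (our own check, with `longest_run` a hypothetical helper) computes the longest increasing and decreasing subsequences by dynamic programming, mirroring the $a_{i}$ and $d_{i}$ of the solution, and tests every sequence of $k l+1$ distinct numbers for $k=l=2$:

```python
from itertools import permutations

def longest_run(seq, increasing=True):
    """Length of the longest strictly increasing (or decreasing)
    subsequence of seq; best[i] plays the role of a_i (or d_i)."""
    n = len(seq)
    best = [1] * n
    for i in range(n - 1, -1, -1):
        for j in range(i + 1, n):
            # seq[j] extends the run starting at i if it lies on the
            # correct side of seq[i] (elements are distinct).
            if (seq[j] > seq[i]) == increasing and seq[j] != seq[i]:
                best[i] = max(best[i], 1 + best[j])
    return max(best)

k, l = 2, 2
# Every order type of kl+1 distinct numbers is a permutation of 0..kl.
for perm in permutations(range(k * l + 1)):
    assert longest_run(perm, True) >= k + 1 or longest_run(perm, False) >= l + 1
```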

Example. In a group of 6 people any two are either friends or enemies. Then there are either 3 mutual friends or 3 mutual enemies.

Solution. Fix a person $X$. Among the other five people, $X$ has either at least 3 friends or at least 3 enemies, by the pigeonhole principle. Assume the former. If two friends of $X$ are friends of each other, then together with $X$ they form 3 mutual friends. Otherwise, these 3 friends of $X$ are mutual enemies.
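Since there are only $2^{15}$ ways to label the 15 pairs among 6 people as friends or enemies, the claim can also be checked by brute force. A sketch (our own, not from the text): treat each pair as an edge of $K_{6}$ colored 0 or 1 and confirm every coloring contains a monochromatic triangle.

```python
from itertools import combinations, product

people = range(6)
edges = list(combinations(people, 2))  # the 15 unordered pairs

def has_mono_triangle(coloring):
    """True if some triangle {a,b,c} has all three edges the same color."""
    color = dict(zip(edges, coloring))
    return any(
        color[(a, b)] == color[(a, c)] == color[(b, c)]
        for a, b, c in combinations(people, 3)
    )

# Exhaustive check over all 2^15 friend/enemy assignments.
assert all(has_mono_triangle(c) for c in product((0, 1), repeat=len(edges)))
```

This is the statement $R(3,3) \leq 6$ in Ramsey-theory language; 5 people do not suffice, as a 5-cycle of friendships shows.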

Dirichlet used the pigeonhole principle to prove that for any irrational $\alpha$ there are infinitely many rationals $\frac{p}{q}$ satisfying $\left|\alpha-\frac{p}{q}\right|<\frac{1}{q^{2}}$.
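Dirichlet's bound can be observed concretely for $\alpha=\sqrt{2}$. The sketch below (our own illustration) generates rationals $\frac{p}{q}$ via the continued-fraction recurrence for $\sqrt{2}=[1 ; 2,2,2, \ldots]$ and checks $\left|\alpha-\frac{p}{q}\right|<\frac{1}{q^{2}}$ for each:

```python
from math import sqrt

alpha = sqrt(2)

# Convergents of sqrt(2): 1/1, 3/2, 7/5, 17/12, ... with the
# recurrence p_{n+1} = 2 p_n + p_{n-1}, q_{n+1} = 2 q_n + q_{n-1}.
p0, q0, p1, q1 = 1, 1, 3, 2
convergents = [(p0, q0), (p1, q1)]
for _ in range(10):
    p0, q0, p1, q1 = p1, q1, 2 * p1 + p0, 2 * q1 + q0
    convergents.append((p1, q1))

# Each convergent satisfies Dirichlet's inequality |alpha - p/q| < 1/q^2.
for p, q in convergents:
    assert abs(alpha - p / q) < 1 / q**2
```

This only exhibits finitely many such rationals, of course; Dirichlet's pigeonhole argument is what guarantees infinitely many exist for every irrational $\alpha$.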

Discrete Mathematics Homework Example

2.3 Strong Principle of Mathematical Induction
Proposition (Strong Principle of Induction). If $P(n)$ is a statement about $n$ for each $n \in \mathbb{N}_{0}$, $P(k_{0})$ is true for some $k_{0} \in \mathbb{N}_{0}$, and the truth of $P(k)$ is implied by the truth of $P(k_{0}), P(k_{0}+1), \ldots, P(k-1)$, then $P(n)$ is true for all $n \in \mathbb{N}_{0}$ such that $n \geq k_{0}$.
The proof is more or less as before.
Example (Evolutionary Trees). Every organism can mutate and produce 2 new versions. Then $n$ mutations are required to produce $n+1$ end products.

Proof. Let $P(n)$ be the statement "$n$ mutations are required to produce $n+1$ end products". $P(0)$ is clear. Consider a tree with $k+1$ end products. The first mutation (the root) produces 2 subtrees, say with $k_{1}+1$ and $k_{2}+1$ end products, where $k_{1}, k_{2}<k$. Then $k+1=\left(k_{1}+1\right)+\left(k_{2}+1\right)$, so $k=k_{1}+k_{2}+1$. If both $P\left(k_{1}\right)$ and $P\left(k_{2}\right)$ are true, then there are $k_{1}$ mutations on the left and $k_{2}$ on the right, so in total we have $k_{1}+k_{2}+1=k$ mutations in our tree. Thus $P(k)$ is true if $P\left(k_{1}\right)$ and $P\left(k_{2}\right)$ are true, and hence $P(n)$ is true for all $n \in \mathbb{N}_{0}$.
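The invariant in this proof (mutations = end products − 1) is easy to test on random binary trees. A sketch under our own modeling assumption: a leaf string represents an end product, and a pair `(left, right)` represents one mutation.

```python
import random

def mutations(tree):
    """Number of internal nodes (mutations) in the tree."""
    if not isinstance(tree, tuple):
        return 0
    return 1 + mutations(tree[0]) + mutations(tree[1])

def end_products(tree):
    """Number of leaves (end products) in the tree."""
    if not isinstance(tree, tuple):
        return 1
    return end_products(tree[0]) + end_products(tree[1])

def random_tree(depth, rng):
    """Random binary mutation tree of bounded depth."""
    if depth == 0 or rng.random() < 0.3:
        return "leaf"
    return (random_tree(depth - 1, rng), random_tree(depth - 1, rng))

rng = random.Random(1)
for _ in range(100):
    t = random_tree(6, rng)
    # The proposition: n mutations yield n + 1 end products.
    assert end_products(t) == mutations(t) + 1
```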

Linear Programming Assignment & Exam Help


Linear Programming Assignment Help

(1) The feasible region of a linear program is always a convex set.

(2) If the linear program has an optimal solution, then an optimal solution is attained at a vertex of the feasible region.

(3) The objective function is linear, with level sets that are lines (in two dimensions) or hyperplanes (in higher dimensions), so a local optimum is in fact a global optimum.

• Convex analysis
• Control theory
• Mathematical methods
• Optimization theory

The History of Linear Programming

The first example of a linear programming problem in $n$ variables and $n$ constraints taking $2^{n}-1$ iterations to solve was published by Klee \& Minty (1972). Several researchers, including Smale (1983), Borgwardt (1982), Borgwardt (1987a), Adler \& Megiddo (1985), and Todd (1986), have studied the average number of iterations. For a survey of probabilistic methods, the reader should consult Borgwardt (1987b).

Roughly speaking, a class of problems is said to have polynomial complexity if there is a polynomial $p$ for which every problem of “size” $n$ in the class can be solved by some algorithm in at most $p(n)$ operations. For many years it was unknown whether linear programming had polynomial complexity. The Klee-Minty examples

show that, if linear programming is polynomial, then the simplex method is not the algorithm that gives the polynomial bound, since $2^{n}$ is not dominated by any polynomial. In 1979, Khachian (1979) gave a new algorithm for linear programming, called the ellipsoid method, which is polynomial and therefore established once and for all that linear programming has polynomial complexity. The collection of all problem classes having polynomial complexity is usually denoted by $\mathcal{P}$. A class of problems is said to belong to the class $\mathcal{N} \mathcal{P}$ if, given a (proposed) solution, one can verify its optimality in a number of operations that is bounded by some polynomial in the “size” of the problem. Clearly, $\mathcal{P} \subset \mathcal{N} \mathcal{P}$ (since, if we can solve from scratch in a polynomial amount of time, surely we can verify optimality at least that fast). An important problem in theoretical computer science is to determine whether or not $\mathcal{P}$ is a strict subset of $\mathcal{N} \mathcal{P}$.

The study of how difficult it is to solve a class of problems is called complexity theory. Readers interested in pursuing this subject further should consult Garey \& Johnson (1977).


Linear Programming Homework Example

$$\begin{aligned} \text { maximize } \quad & 4 x_{1}+x_{2}+3 x_{3} \\ \text { subject to } \quad & x_{1}+4 x_{2} \leq 1 \\ & 3 x_{1}-x_{2}+x_{3} \leq 3 \\ & x_{1}, x_{2}, x_{3} \geq 0 \end{aligned}$$

Our first observation is that every feasible solution provides a lower bound on the optimal objective function value, $\zeta^{*}$. For example, the solution $\left(x_{1}, x_{2}, x_{3}\right)=(1,0,0)$ tells us that $\zeta^{*} \geq 4$. Using the feasible solution $\left(x_{1}, x_{2}, x_{3}\right)=(0,0,3)$, we see that $\zeta^{*} \geq 9$. But how good is this bound? Is it close to the optimal value? To answer, we need to find upper bounds, which we can do as follows. Let's multiply the first constraint by 2 and add that to 3 times the second constraint:
$$\begin{aligned} 2\left(x_{1}+4 x_{2}\right) & \leq 2(1) \\ +\,3\left(3 x_{1}-x_{2}+x_{3}\right) & \leq 3(3) \\ 11 x_{1}+5 x_{2}+3 x_{3} & \leq 11 \end{aligned}$$
Now, since each variable is nonnegative, we can compare the sum against the objective function and notice that
$$4 x_{1}+x_{2}+3 x_{3} \leq 11 x_{1}+5 x_{2}+3 x_{3} \leq 11$$
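Both bounds can be verified mechanically. The sketch below (our own check in plain Python; the names `c`, `A`, `b`, `y` are illustrative) confirms that the multipliers $(2,3)$ produce a combined constraint dominating the objective, giving $\zeta^{*} \leq 11$, and that the feasible point $(0,0,3)$ gives $\zeta^{*} \geq 9$:

```python
# Objective coefficients and constraint data for the LP above.
c = [4, 1, 3]
A = [[1, 4, 0], [3, -1, 1]]
b = [1, 3]
y = [2, 3]  # the multipliers used in the text

# The combination 2*(constraint 1) + 3*(constraint 2).
combo = [y[0] * A[0][j] + y[1] * A[1][j] for j in range(3)]
assert combo == [11, 5, 3]                      # 11 x1 + 5 x2 + 3 x3
assert all(c[j] <= combo[j] for j in range(3))  # dominates objective when x >= 0
assert y[0] * b[0] + y[1] * b[1] == 11          # hence zeta* <= 11

# Lower bound from the feasible point (0, 0, 3).
x = [0, 0, 3]
assert all(sum(A[i][j] * x[j] for j in range(3)) <= b[i] for i in range(2))
assert sum(c[j] * x[j] for j in range(3)) == 9  # hence zeta* >= 9
```

Finding the multipliers $y$ that give the tightest such upper bound is exactly the dual linear program, which is the idea this bounding argument is leading toward.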