# Fundamentals of Pure Mathematics MATH08064


Since $\sigma$ is integrable over $J_{n}$ and continuous at the endpoints of $J_{n}, \int_{J_{n}} \sigma=\int_{K} 1_{J_{n}} \sigma$ by (i). So we can define $G_{n}$ on $K=[a, b]$ by
$$G_{n}(t)=\int_{a}^{t} 1_{J_{n}} \sigma .$$
The function $G_{n}$ is continuous since $\sigma$ is a continuous differential. By Theorem 1 (§2.5), (17) implies
$$d G_{n}=1_{J_{n}} \sigma \quad \text{for all } n.$$

We have $\sum_{n=1}^{\infty}\left|\Delta G_{n}\left(I_{n}\right)\right|<c$ for all choices of intervals $I_{n} \subseteq J_{n}$. Hence,
$$\sum_{n=1}^{\infty} \operatorname{diam} G_{n}(K) \leq c<\infty.$$
Now, for the sup norm in (5) of Theorem 2,
$$\left\|G_{n}\right\| \leq \operatorname{diam} G_{n}(K).$$

## MATH08064 COURSE NOTES:

Let $A_{n}^{*}=A_{n} \cup A_{n+1} \cup \cdots$. Then $A_{n}^{*} \searrow A$ where $A=\varlimsup A_{n}$. Clearly $A_{n}, A_{n}^{*}, A$ are all measurable. By (3),
$$\int_{K} \sigma^{+}<\int_{K} 1_{A_{n}^{*}} \sigma^{+}+\frac{1}{2^{n}}$$
since $1_{A_{n}} \sigma \leq 1_{A_{n}} \sigma^{+} \leq 1_{A_{n}^{*}} \sigma^{+}$. (3) also implies
$$\int_{K} 1_{A_{n}} \sigma^{-}<\frac{1}{2^{n}}$$
since $1_{A_{n}} \sigma^{-} \leq 1_{A_{n}} \sigma^{-}+\left(1-1_{A_{n}}\right) \sigma^{+}=\sigma^{+}-1_{A_{n}} \sigma$.

# Proofs and Problem Solving MATH08059


Proof 1. By Proposition 5.21, the number of lattice paths reaching $(k, n-k)$ is $\binom{n}{k}$. Each path arrives at $(k, n-k)$ from exactly one of the points $(k, n-k-1)$ and $(k-1, n-k)$. By Proposition 5.21 again, there are $\binom{n-1}{k}$ paths of the first type and $\binom{n-1}{k-1}$ paths of the second type.

Proof 2. Using the subset model, we count the $k$-sets in $[n]$. There are $\binom{n-1}{k}$ such sets not containing $n$ and $\binom{n-1}{k-1}$ such sets containing $n$.

Proof 3. Note that $(1+x)^{n}=(1+x)(1+x)^{n-1}$. Using the Binomial Theorem, we expand both $(1+x)^{n}$ and $(1+x)^{n-1}$ to obtain

$$\sum_{k=0}^{n}\binom{n}{k} x^{k}=(1+x) \sum_{k=0}^{n-1}\binom{n-1}{k} x^{k}=\sum_{k=0}^{n-1}\binom{n-1}{k} x^{k}+\sum_{k=0}^{n-1}\binom{n-1}{k} x^{k+1}.$$
Shifting the index in the last summation yields $\sum_{k=1}^{n}\binom{n-1}{k-1} x^{k}$. Since $\binom{n-1}{n}=\binom{n-1}{-1}=0$, we can add $\binom{n-1}{n}$ to the first sum and $\binom{n-1}{-1}$ to the second to obtain
$$\sum_{k=0}^{n}\binom{n}{k} x^{k}=\sum_{k=0}^{n}\left[\binom{n-1}{k}+\binom{n-1}{k-1}\right] x^{k}.$$
Comparing coefficients of $x^{k}$ on both sides yields the identity.
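The recurrence established by the three proofs can be spot-checked numerically; a minimal sketch using Python's `math.comb` (the helper `pascal_holds` is ours, not from the notes):

```python
from math import comb

# Pascal's rule: C(n, k) = C(n-1, k) + C(n-1, k-1), with the convention
# C(m, j) = 0 for j > m (math.comb already returns 0 there); negative j
# is handled explicitly since math.comb rejects negative arguments.
def pascal_holds(n: int, k: int) -> bool:
    left = comb(n, k)
    right = comb(n - 1, k) + (comb(n - 1, k - 1) if k >= 1 else 0)
    return left == right

# Check the rule over a small triangle of values.
assert all(pascal_holds(n, k) for n in range(1, 20) for k in range(0, n + 1))
```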

## MATH08059 COURSE NOTES:

There is a polynomial $g$ such that
$$k !\binom{i}{k}=i(i-1) \cdots(i-k+1)=i^{k}-\binom{k}{2} i^{k-1}+g(i),$$
with $g$ of degree at most $k-2$. Solving for $i^{k}$ yields $i^{k}=k !\binom{i}{k}+\binom{k}{2} i^{k-1}-g(i)$.
We use induction on $k$. For $k=1$, the formula $\sum_{i=1}^{n} i=\frac{1}{2} n^{2}+\frac{1}{2} n$ agrees with the claim. For $k>1$, we have
$$\sum_{i=1}^{n} i^{k}=k ! \sum_{i=1}^{n}\binom{i}{k}+\binom{k}{2} \sum_{i=1}^{n} i^{k-1}-\sum_{i=1}^{n} g(i).$$
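The key fact feeding the induction is that the coefficient of $i^{k-1}$ in the falling factorial is $-\binom{k}{2}$. A small sketch that expands $i(i-1)\cdots(i-k+1)$ by repeated polynomial multiplication and checks the two leading coefficients (the helpers `poly_mul` and `falling_factorial_coeffs` are ours, not from the notes):

```python
from math import comb

def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists (lowest degree first)."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def falling_factorial_coeffs(k):
    """Coefficients of i(i-1)...(i-k+1), lowest degree first."""
    coeffs = [1]
    for m in range(k):
        coeffs = poly_mul(coeffs, [-m, 1])  # multiply by (i - m)
    return coeffs

for k in range(2, 8):
    c = falling_factorial_coeffs(k)
    assert c[k] == 1                  # leading coefficient of i^k is 1
    assert c[k - 1] == -comb(k, 2)    # coefficient of i^(k-1) is -C(k, 2)
```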

# Calculus and its Applications MATH08058


$$\text{Evaluate } \int \frac{x}{\sqrt{x^{4}+4}}\, dx.$$

Put $x^{2}=t$; $\therefore 2 x\, dx=dt$.
$$\begin{aligned} \therefore \text{ the given integral } &=\frac{1}{2} \int \frac{dt}{\sqrt{t^{2}+4}} \\ &=\frac{1}{2} \sinh ^{-1}\left(\frac{t}{2}\right)+C=\frac{1}{2} \sinh ^{-1}\left(\frac{x^{2}}{2}\right)+C. \end{aligned}$$
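The antiderivative can be sanity-checked by numerical differentiation: the derivative of $\frac{1}{2}\sinh^{-1}(x^{2}/2)$ should reproduce the integrand. A quick sketch:

```python
import math

def F(x):
    # Candidate antiderivative: (1/2) * arcsinh(x^2 / 2)
    return 0.5 * math.asinh(x * x / 2)

def integrand(x):
    return x / math.sqrt(x ** 4 + 4)

# Central-difference derivative of F should match the integrand.
h = 1e-6
for x in [0.5, 1.0, 2.0, 3.0]:
    deriv = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(deriv - integrand(x)) < 1e-6
```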

## MATH08058 COURSE NOTES:

Evaluate $\int \frac{x^{2}}{\sqrt{x^{6}-9}}\, dx$. Put $x^{3}=t$; $\therefore 3 x^{2}\, dx=dt$.
$$\begin{aligned} \therefore \text{ the given integral } &=\frac{1}{3} \int \frac{dt}{\sqrt{t^{2}-9}} \\ &=\frac{1}{3} \cosh ^{-1}\left(\frac{t}{3}\right)+C=\frac{1}{3} \cosh ^{-1}\left(\frac{x^{3}}{3}\right)+C. \end{aligned}$$
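As with the previous example, the result can be checked by differentiating numerically. The integrand $x^{2}/\sqrt{x^{6}-9}$ is the one implied by the substitution $x^{3}=t$ (the notes omit the problem statement); the check is valid on $x^{3}>3$:

```python
import math

def F(x):
    # Candidate antiderivative: (1/3) * arccosh(x^3 / 3), valid for x^3 > 3
    return (1.0 / 3.0) * math.acosh(x ** 3 / 3)

def integrand(x):
    return x ** 2 / math.sqrt(x ** 6 - 9)

# Central-difference derivative of F should match the integrand.
h = 1e-6
for x in [1.6, 2.0, 2.5, 3.0]:
    deriv = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(deriv - integrand(x)) < 1e-5
```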

# Calculus and its Applications MATH08058


At time $t$ the coordinates of a car are $\phi(t)=\left(t^{2}, t^{3}\right)$. We wish to determine how fast, and in what direction, the car is heading at time $t=2$. First, we differentiate the parameterization:
$$\phi^{\prime}(t)=\left\langle 2 t, 3 t^{2}\right\rangle$$
Thus,
$$\phi^{\prime}(2)=\langle 4,12\rangle.$$

The direction of the car is thus the direction that this vector is pointing. Its speed is given by the magnitude of this vector:
$$|\langle 4,12\rangle|=\sqrt{4^{2}+12^{2}}=\sqrt{160}=4 \sqrt{10}.$$
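The velocity and speed computation is easy to reproduce; a minimal sketch:

```python
import math

def velocity(t):
    # phi'(t) = <2t, 3t^2>, the derivative of the parameterization (t^2, t^3)
    return (2 * t, 3 * t ** 2)

vx, vy = velocity(2)
speed = math.hypot(vx, vy)  # magnitude of the velocity vector

assert (vx, vy) == (4, 12)
assert math.isclose(speed, 4 * math.sqrt(10))
```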

## MATH08051 COURSE NOTES:

We find two vectors tangent to the graph of $z=x^{2}+y^{3}$. This surface is parameterized by
$$\Psi(x, y)=\left(x, y, x^{2}+y^{3}\right)$$
The desired point is at $\Psi(2,1)$. To find two tangent vectors we simply take the partial derivatives of $\Psi$ and evaluate at $(2,1)$.
$$\frac{\partial \Psi}{\partial x}=\langle 1,0,2 x\rangle$$
and so,
$$\frac{\partial \Psi}{\partial x}(2,1)=\langle 1,0,4\rangle$$
Similarly,
$$\frac{\partial \Psi}{\partial y}=\left\langle 0,1,3 y^{2}\right\rangle$$
and so,
$$\frac{\partial \Psi}{\partial y}(2,1)=\langle 0,1,3\rangle$$
We conclude that $\langle 1,0,4\rangle$ and $\langle 0,1,3\rangle$ are two vectors tangent to the graph of $z=x^{2}+y^{3}$.
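Both tangent vectors can be confirmed by finite differences on the parameterization; a small sketch (the `partial` helper is ours, not from the notes):

```python
def psi(x, y):
    # Parameterization of the surface z = x^2 + y^3
    return (x, y, x ** 2 + y ** 3)

def partial(f, point, index, h=1e-6):
    """Central-difference partial derivative of a vector-valued map."""
    p_plus = list(point); p_plus[index] += h
    p_minus = list(point); p_minus[index] -= h
    fp, fm = f(*p_plus), f(*p_minus)
    return tuple((a - b) / (2 * h) for a, b in zip(fp, fm))

tx = partial(psi, (2, 1), 0)  # d(psi)/dx at (2, 1), expect <1, 0, 4>
ty = partial(psi, (2, 1), 1)  # d(psi)/dy at (2, 1), expect <0, 1, 3>

assert all(abs(a - b) < 1e-5 for a, b in zip(tx, (1, 0, 4)))
assert all(abs(a - b) < 1e-5 for a, b in zip(ty, (0, 1, 3)))
```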

# Statistics MATH08051


We test the hypotheses
$$\begin{aligned} &H_{0}: \mu=0.86 \\ &H_{a}: \mu \neq 0.86 \end{aligned}$$
at the $1 \%$ level of significance. What is the power of this test against the specific alternative $\mu=0.845$ ?
The test rejects $H_{0}$ when $|z| \geq 2.576$. The test statistic is
$$z=\frac{\bar{x}-0.86}{0.0068 / \sqrt{3}}$$

Some arithmetic shows that the test rejects when either of the following is true:
$$\begin{array}{ll} z \geq 2.576 & (\text{in other words, } \bar{x} \geq 0.870) \\ z \leq-2.576 & (\text{in other words, } \bar{x} \leq 0.850) \end{array}$$
These are disjoint events, so the power is the sum of their probabilities, computed assuming that the alternative $\mu=0.845$ is true. We find that
$$\begin{aligned} P(\bar{x} \geq 0.87) &=P\left(\frac{\bar{x}-\mu}{\sigma / \sqrt{n}} \geq \frac{0.87-0.845}{0.0068 / \sqrt{3}}\right) \\ &=P(Z \geq 6.37) \doteq 0. \end{aligned}$$
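The power calculation can be reproduced with the standard normal CDF via `math.erf`; the notes show only the upper-tail term, and the lower-tail term is computed the same way, so the sketch below sums both:

```python
import math

def phi_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

mu_alt = 0.845              # specific alternative
se = 0.0068 / math.sqrt(3)  # sigma / sqrt(n)

# Rejection region in terms of x-bar: x-bar >= 0.870 or x-bar <= 0.850.
p_upper = 1 - phi_cdf((0.870 - mu_alt) / se)   # P(Z >= 6.37), essentially 0
p_lower = phi_cdf((0.850 - mu_alt) / se)       # P(Z <= 1.27)

power = p_upper + p_lower
assert p_upper < 1e-9       # the tail shown in the notes is negligible
assert 0.89 < power < 0.91  # power is roughly 0.90
```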

## MATH08051 COURSE NOTES:

The sample mean is $\bar{x}=5$ and the standard deviation is $s=3.63$ with degrees of freedom $n-1=7$. The standard error is
$$\mathrm{SE}_{\bar{x}}=s / \sqrt{n}=3.63 / \sqrt{8}=1.28$$
From Table D we find $t^{*}=2.365$. The $95 \%$ confidence interval is
$$\begin{aligned} \bar{x} \pm t^{*} \frac{s}{\sqrt{n}} &=5.0 \pm 2.365 \frac{3.63}{\sqrt{8}} \\ &=5.0 \pm(2.365)(1.28) \\ &=5.0 \pm 3.0 \\ &=(2.0,8.0). \end{aligned}$$
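The arithmetic of the confidence interval is easy to verify; a minimal sketch (the critical value is taken from the notes, not computed):

```python
import math

xbar, s, n = 5.0, 3.63, 8
t_star = 2.365  # t critical value for df = 7, from Table D

se = s / math.sqrt(n)
margin = t_star * se
ci = (xbar - margin, xbar + margin)

assert abs(se - 1.28) < 0.01                        # standard error ~ 1.28
assert abs(ci[0] - 2.0) < 0.05 and abs(ci[1] - 8.0) < 0.05  # CI ~ (2.0, 8.0)
```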

# Mathematical Microeconomics 1 ECNM11073


Given a metric space $\mathbf{X}$ and a distance $d(\mathbf{x}, \mathbf{y})$ defined on $\mathbf{X} \times \mathbf{X}$, an $\epsilon$-neighbourhood (or $\epsilon$-ball) of the point $\mathbf{x} \in \mathbf{X}$ is given by:
$$N_{\epsilon}(\mathbf{x})=\{\mathbf{y} \in \mathbf{X} \mid d(\mathbf{x}, \mathbf{y})<\epsilon\}$$
where $\epsilon$ is a finite positive real number (usually small).

Fig. 3.5(a) shows an $\epsilon$-neighbourhood in $\mathbf{E}^{1}$, and Fig. 3.5(b) shows one in $\mathbf{E}^{2}$. We also define a deleted neighbourhood of $x$ as $N_{\epsilon}^{\prime}(x)=N_{\epsilon}(x)-\{x\}$, i.e. the $\epsilon$-neighbourhood minus the point $x$ itself.
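The definition translates directly into a membership test; a minimal sketch with the Euclidean metric on $\mathbf{R}^{n}$ (the function names are ours, for illustration):

```python
import math

def euclidean(x, y):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def in_eps_neighbourhood(y, x, eps, d=euclidean):
    """True iff y lies in the epsilon-ball N_eps(x) = {y : d(x, y) < eps}."""
    return d(x, y) < eps

def in_deleted_neighbourhood(y, x, eps, d=euclidean):
    """The deleted neighbourhood excludes the centre point itself."""
    return y != x and d(x, y) < eps

assert in_eps_neighbourhood((0.5, 0.0), (0.0, 0.0), 1.0)
assert not in_eps_neighbourhood((1.0, 0.0), (0.0, 0.0), 1.0)   # boundary excluded
assert not in_deleted_neighbourhood((0.0, 0.0), (0.0, 0.0), 1.0)
```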

## ECNM11073 COURSE NOTES:

The solutions of a system of linear inequalities
$$a_{i 1} x_{1}+a_{i 2} x_{2}+\ldots+a_{i n} x_{n} \leqslant b_{i}, \quad i=1, \ldots, m, \quad x_{j} \geqslant 0,$$
form a convex set in $\mathbf{R}^{n}$. This can be shown as follows. Let $C=\{\mathbf{x} \mid \mathbf{A} \mathbf{x} \leqslant \mathbf{b},\ \mathbf{x} \geqslant \mathbf{0}\}$ and let $\mathbf{x}, \mathbf{y} \in C$. For $a \in[0,1]$ set $\mathbf{z}=a \mathbf{x}+(1-a) \mathbf{y}$; then $\mathbf{z} \geqslant \mathbf{0}$ and
$$\mathbf{A} \mathbf{z}=\mathbf{A}[a \mathbf{x}+(1-a) \mathbf{y}]=a \mathbf{A} \mathbf{x}+(1-a) \mathbf{A} \mathbf{y} \leqslant a \mathbf{b}+(1-a) \mathbf{b}=\mathbf{b}.$$
Therefore $\mathbf{z} \in C$, hence the solution set is convex.
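The argument can be checked numerically for a specific system: pick feasible $\mathbf{x}, \mathbf{y}$ and confirm every convex combination stays feasible. A small sketch (the matrix `A` and bound `b` are made-up illustration data):

```python
def feasible(A, b, x):
    """x >= 0 and A x <= b: membership in the solution set C."""
    if any(xi < 0 for xi in x):
        return False
    return all(sum(aij * xj for aij, xj in zip(row, x)) <= bi
               for row, bi in zip(A, b))

A = [[1, 2], [3, 1]]   # illustrative coefficients
b = [4, 6]
x, y = (1.0, 1.0), (0.0, 2.0)
assert feasible(A, b, x) and feasible(A, b, y)

# Every convex combination z = a x + (1 - a) y must remain in C.
for k in range(11):
    a = k / 10
    z = tuple(a * xi + (1 - a) * yi for xi, yi in zip(x, y))
    assert feasible(A, b, z)
```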

# Advanced Mathematical Economics ECNM10085/ECNM11072


The dual to the program that defines $y^{*}$ is
$$\max \{c x-f(0) t: A x-t b \leq d,\ (x, t) \geq 0\}.$$
Let $\left(x^{*}, t^{*}\right)$ be the optimal solution to this program. By the duality theorem,
$$c x^{*}-f(0) t^{*}=d y^{*}.$$

Let $x^{0}$ be an optimal solution to $(\mathrm{P})$. Choose $\epsilon \leq 1 / t^{*}$; when $t^{*}=0$, take $\epsilon$ to be any positive number. Consider
$$x=(1-t^{*} \epsilon) x^{0}+\epsilon x^{*}.$$
Since $x \geq 0$ and $A x \leq b+\epsilon d$, it follows that $x$ is a feasible solution to the program that defines $f(\epsilon)$. Hence,
$$\begin{aligned} f(\epsilon) \geq c x &=(1-t^{*} \epsilon) c x^{0}+\epsilon c x^{*} \\ &=(1-t^{*} \epsilon) f(0)+\epsilon\left(d y^{*}+f(0) t^{*}\right)=f(0)+\epsilon d y^{*}. \end{aligned}$$

## ECNM10085/ECNM11072 COURSE NOTES:

$$\min_{y}\ \max_{i} \sum_{j=1}^{n} a_{i j} y_{j}$$
$$\sum_{j=1}^{n} y_{j}=1, \quad y_{j} \geq 0.$$
This is not a linear program but can be transformed into one (which we call LPC):
$$\begin{aligned} &\min R \quad \text{(the mini-max value)} \\ &\text{s.t. } \sum_{j=1}^{n} a_{i j} y_{j} \leq R, \quad i=1, \ldots, m, \\ &\sum_{j=1}^{n} y_{j}=1, \quad y_{j} \geq 0. \end{aligned}$$
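For a tiny $2 \times 2$ game the equivalence between the mini-max problem and LPC can be checked by brute force: a grid search over mixed strategies $y$ should recover the value that the LP's optimal $R$ attains. A sketch with a made-up matching-pennies payoff matrix:

```python
# Payoff matrix a_ij: matching-pennies style game, value 0 at y = (1/2, 1/2).
A = [[1, -1], [-1, 1]]

def worst_case(y):
    """max over rows i of sum_j a_ij * y_j, i.e. the inner maximum."""
    return max(sum(aij * yj for aij, yj in zip(row, y)) for row in A)

# Grid search over the simplex y1 + y2 = 1, y >= 0.
best_y, best_val = None, float("inf")
for k in range(1001):
    y = (k / 1000, 1 - k / 1000)
    v = worst_case(y)
    if v < best_val:
        best_y, best_val = y, v

assert abs(best_val - 0.0) < 1e-9   # the mini-max value R
assert abs(best_y[0] - 0.5) < 1e-9  # the optimal mixed strategy
```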