# Computing in Statistics | MT4113


Given a joint probability density $f(\mathbf{y} \mid \theta)$ for an $n \times 1$ observed data vector $\mathbf{y}$ and unknown $p \times 1$ parameter vector $\theta$, denote the $p \times 1$ score vector of first derivatives with respect to $\theta$ as
$$g(\theta \mid \mathbf{y})=\partial \ln [f(\mathbf{y} \mid \theta)] / \partial \theta$$
and the $p \times p$ matrix of second derivatives as

$$\mathbf{H}=\mathbf{H}(\theta \mid \mathbf{y})=\partial^{2} \ln [f(\mathbf{y} \mid \boldsymbol{\theta})] / \partial \boldsymbol{\theta} \partial \boldsymbol{\theta}^{\prime} .$$
The matrix $\mathbf{H}$ is the Hessian, and $-\mathbf{H}$ is the observed information. Under the usual regularity conditions the information identity holds:
$$E\left[g(\theta \mid \mathbf{y}) g(\theta \mid \mathbf{y})^{\prime}\right]=-E[\mathbf{H}(\theta \mid \mathbf{y})] .$$

## MT4113 COURSE NOTES:

$$\frac{\partial \ell(\beta)}{\partial \beta}=\sum_{i} \mathbf{x}_{i} y_{i}-\sum_{i} \mathbf{x}_{i} \hat{y}_{i}$$
where $\hat{y}_{i}$ is the predicted value of $y_{i}$:
$$\hat{y}_{i}=\frac{1}{1+\exp \left(-\beta^{T} \mathbf{x}_{i}\right)} .$$
The next step is to set the derivative equal to 0 and solve for $\beta$:
$$\sum_{i} \mathbf{x}_{i} y_{i}-\sum_{i} \mathbf{x}_{i} \hat{y}_{i}=0 .$$
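This equation has no closed-form solution, so $\beta$ is found iteratively. A minimal Python sketch (an added illustration, not from the notes: the simulated data, the single scalar predictor, and plain gradient ascent are all assumptions — the course may instead use Newton's method or IRLS):

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.1, n_iter=5000):
    # gradient ascent on the log-likelihood:
    # beta <- beta + lr * (1/n) * sum_i x_i * (y_i - yhat_i)
    beta = 0.0
    for _ in range(n_iter):
        grad = sum(x * (y - sigmoid(beta * x)) for x, y in zip(xs, ys))
        beta += lr * grad / len(xs)
    return beta

random.seed(1)
true_beta = 1.5                     # hypothetical "true" slope for the simulation
xs = [random.uniform(-3.0, 3.0) for _ in range(500)]
ys = [1 if random.random() < sigmoid(true_beta * x) else 0 for x in xs]
beta_hat = fit_logistic(xs, ys)
print(round(beta_hat, 2))           # should land near the true slope 1.5
```

The update direction is exactly the score $\sum_{i} \mathbf{x}_{i}(y_{i}-\hat{y}_{i})$ from the derivation above, so the fixed point of the iteration is a root of the score equation.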

# Computing in Mathematics | MT4112


If $S$ is an integral domain, its normalization $\bar{S}$ is the integral closure of $S$ in the quotient field of $S$. An important finiteness result of Emmy Noether tells us that if $S$ is an affine domain, then $\bar{S}$ is a finitely generated $S$-module; in particular, $\bar{S}$ is again an affine domain (see Eisenbud (1995), Corollary 13.13). In other words, if $S$ is of type $S=K\left[x_{1}, \ldots, x_{n}\right] / P$, where $P$ is a prime ideal, then $\bar{S}$ is of type $K\left[y_{1}, \ldots, y_{m}\right] / P^{\prime}$, where $P^{\prime}$ is a prime ideal. To compute the normalization means to find such a representation for $\bar{S}$ together with the normalization map
$$S=K\left[x_{1}, \ldots, x_{n}\right] / P \hookrightarrow \bar{S}=K\left[y_{1}, \ldots, y_{m}\right] / P^{\prime}$$

More generally, if $S$ is any reduced ring, its normalization $\bar{S}$ is the integral closure of $S$ in the total quotient ring of $S$. In conjunction with Noether’s finiteness result, the theorem on the splitting of normalization tells us that if $S$ is a reduced affine ring, then $\bar{S}$ may be written as a product of affine domains. More precisely, if $S$ is of type $S=K\left[x_{1}, \ldots, x_{n}\right] / I$, where $I$ is a radical ideal, and if $P_{1}, \ldots, P_{s}$ are the (minimal) associated primes of $I$, then
$$\bar{S} \cong \overline{\left(K\left[x_{1}, \ldots, x_{n}\right] / P_{1}\right)} \times \cdots \times \overline{\left(K\left[x_{1}, \ldots, x_{n}\right] / P_{s}\right)}$$
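A standard worked example (added for illustration, not from the notes): for the cuspidal cubic, take $S=K[x, y] /\left(y^{2}-x^{3}\right)$. In the quotient field the element $t=y / x$ satisfies $t^{2}=x$, so it is integral over $S$, and one finds $\bar{S}=K[t]$ with normalization map
$$S=K[x, y] /\left(y^{2}-x^{3}\right) \hookrightarrow \bar{S}=K[t], \quad x \mapsto t^{2}, \quad y \mapsto t^{3} .$$
Here $m=1$ and $P^{\prime}=(0)$, so $\bar{S}$ is again an affine domain, as Noether's finiteness result guarantees.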

## MT4112 COURSE NOTES:

The degrees $d_{j}$ of the secondary invariants and their number $\mu_{j}$ in each degree are uniquely determined as the exponents and the coefficients of the polynomial
$$\sum_{j=0}^{b} \mu_{j} t^{d_{j}}:=H_{K[x]^{G}}(t) \cdot \prod_{i=1}^{n}\left(1-t^{\operatorname{deg}\left(p_{i}\right)}\right) .$$
The total number of secondary invariants is
$$\frac{1}{|G|} \prod_{i=1}^{n} \operatorname{deg}\left(p_{i}\right)$$
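As a concrete check (an added example, not from the notes): for $G=\{\pm I\}$ acting on $K[x, y]$, with primary invariants $p_{1}=x^{2}$ and $p_{2}=y^{2}$, the polynomial above can be computed by truncated power-series arithmetic. The Hilbert series coefficients used below follow from counting invariant monomials, which is an elementary computation for this group:

```python
N = 12
# Hilbert series of the invariant ring K[x, y]^G for G = {I, -I}:
# invariants of degree k are spanned by monomials of even total degree,
# so the coefficient is k + 1 for even k and 0 for odd k
H = [(k + 1) if k % 2 == 0 else 0 for k in range(N)]

# both primary invariants p1 = x^2 and p2 = y^2 have degree 2
one_minus_t2 = [1, 0, -1] + [0] * (N - 3)

def mul(a, b):
    # truncated power-series (Cauchy) product
    c = [0] * N
    for i in range(N):
        for j in range(N - i):
            c[i + j] += a[i] * b[j]
    return c

num = mul(mul(H, one_minus_t2), one_minus_t2)
print(num[:6])  # [1, 0, 1, 0, 0, 0]: secondary invariants 1 and x*y, in degrees 0 and 2
print(sum(num))  # 2 = (deg p1 * deg p2) / |G| = (2 * 2) / 2
```

The total count $2$ agrees with the formula $\frac{1}{|G|} \prod_{i} \operatorname{deg}(p_{i})$ above.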

# Linear and Nonlinear Waves | MT4005


is equivalent to the pair of equations
$$\begin{gathered} v_{n}(t+\Delta)=G\left(h_{n}\right) \\ \frac{d h_{n}}{d t}=v_{n-1}(t)-v_{n}(t) \end{gathered}$$

In this form we introduce continuous functions $v(x, t)$ and $h(x, t)$ such that
$$\begin{aligned} v\left(s_{n}, t\right) &=v_{n}(t), \\ h\left(\frac{s_{n-1}+s_{n}}{2}, t\right) &=h_{n}(t) . \end{aligned}$$

## MT4005 COURSE NOTES:

There is a possible singularity if
$$(V-1)^{2}-A^{2}=0,$$
and this would be a singularity on a curve $\xi=$ constant. The original equations are hyperbolic so we know that singularities can occur only on the characteristics
$$\frac{d r}{d t}=u \pm a=\frac{n r}{t}(V \pm A) .$$
On a curve $\xi=$ constant, we have
$$\frac{d r}{d t}=\frac{n r}{t} .$$
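The step linking the two statements (added here for clarity): a curve $\xi=$ constant has slope $d r / d t=n r / t$, while the characteristics have slope $(n r / t)(V \pm A)$. The two slopes coincide exactly when
$$V \pm A=1, \quad \text {i.e.} \quad(V-1)^{2}-A^{2}=(V-1-A)(V-1+A)=0,$$
which is precisely the singularity condition stated above. Thus a singularity on $\xi=$ constant can occur only where that curve is itself a characteristic.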

# Real and Abstract Analysis | MT4004


$$-\frac{f^{(N+1)}(\xi)}{N !}(x-\xi)^{N}=g^{\prime}(\xi)=\frac{g(x)-g(a)}{x-a} .$$
Then, as $g(x)=0$,
$$g(a)=\frac{f^{(N+1)}(\xi)}{N !}(x-\xi)^{N}(x-a)$$

The expression
$$\sum_{n=0}^{N} \frac{f^{(n)}(a)}{n !}(x-a)^{n}$$
is called the Taylor polynomial of degree $N$ at $a$, and
$$f(x)-\sum_{n=0}^{N} \frac{f^{(n)}(a)}{n !}(x-a)^{n}$$
is the corresponding remainder term.
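The Lagrange form of the remainder can be checked numerically. A small Python sketch (an added illustration; the choice $f=\exp$, $a=0$, $x=0.5$, $N=3$ is arbitrary — for the exponential every derivative equals $e^{a}$, which keeps the bounds easy):

```python
import math

def taylor_poly(a, x, N):
    # Taylor polynomial of degree N at a, for f = exp (every derivative is exp(a))
    return sum(math.exp(a) / math.factorial(n) * (x - a) ** n for n in range(N + 1))

a, x, N = 0.0, 0.5, 3
remainder = math.exp(x) - taylor_poly(a, x, N)
# Lagrange form: remainder = f^{(N+1)}(xi) / (N+1)! * (x - a)^{N+1}, some xi in (a, x)
lo = math.exp(a) * (x - a) ** (N + 1) / math.factorial(N + 1)
hi = math.exp(x) * (x - a) ** (N + 1) / math.factorial(N + 1)
print(lo <= remainder <= hi)  # True
```

Since $e^{\xi}$ is increasing, the remainder must lie between the bounds obtained by taking $\xi=a$ and $\xi=x$, and the check confirms it does.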

## MT4004 COURSE NOTES:

Let $f$ be Riemann integrable over $I=[a, b]$, and define
$$F(x)=\int_{a}^{x} f \quad(a \leq x \leq b) .$$
Prove that $F$ is continuous on $I$. Prove also that if $x_{0} \in I$ and
$$\lim _{x \rightarrow x_{0},\, x \in I} f(x)=f\left(x_{0}\right),$$
then
$$\lim _{x \rightarrow x_{0},\, x \in I} \frac{F(x)-F\left(x_{0}\right)}{x-x_{0}}=f\left(x_{0}\right) .$$
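Both claims can be illustrated numerically. The sketch below (an added example; the step function $f$ and the evaluation points are assumptions) approximates $F$ by midpoint sums: $F$ stays continuous across the jump of $f$, and the difference quotient of $F$ recovers $f$ at a continuity point:

```python
def f(t):
    # Riemann integrable on [0, 1] with a jump at t = 0.5; continuous at t = 0.25
    return 1.0 if t < 0.5 else 3.0

def F(x, steps=100000):
    # midpoint-rule approximation of F(x) = integral of f from 0 to x
    h = x / steps
    return sum(f((i + 0.5) * h) for i in range(steps)) * h

x0, h = 0.25, 1e-3
dq = (F(x0 + h) - F(x0)) / h          # difference quotient of F at a continuity point
jump_gap = F(0.5 + h) - F(0.5 - h)    # change of F across the jump of f
print(round(dq, 3))        # 1.0 = f(0.25)
print(round(jump_gap, 3))  # 0.004 = 4h -> 0 as h -> 0, so F is continuous at 0.5
```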

# Numerical Analysis | MT3802


Suppose that for a matrix $A \in \mathbb{R}^{n \times n}$,
$$\sum_{i=1}^{n}\left|a_{i j}\right| \leq C, \quad j=1,2, \ldots, n$$
Show that, for any vector $\boldsymbol{x} \in \mathbb{R}^{n}$,
$$\sum_{i=1}^{n}\left|(A \boldsymbol{x})_{i}\right| \leq C\|\boldsymbol{x}\|_{1} .$$

Find a nonzero vector $\boldsymbol{x}$ for which equality can be achieved, and deduce that
$$\|A\|_{1}=\max _{1 \leq j \leq n} \sum_{i=1}^{n}\left|a_{i j}\right| .$$
(i) Show that, for any vector $v=\left(v_{1}, \ldots, v_{n}\right)^{\mathrm{T}} \in \mathbb{R}^{n}$, $\|v\|_{\infty} \leq\|v\|_{2}$ and $\|v\|_{2}^{2} \leq\|v\|_{1}\|v\|_{\infty}$.
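The column-sum characterization of the matrix $1$-norm is easy to verify numerically. A Python sketch (an added example with an arbitrary test matrix): equality is attained at the standard basis vector $e_{j}$ for the maximizing column $j$:

```python
def one_norm(A):
    # maximum absolute column sum: ||A||_1 = max_j sum_i |a_ij|
    n = len(A)
    return max(sum(abs(A[i][j]) for i in range(n)) for j in range(n))

def matvec(A, x):
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

A = [[1.0, -2.0], [3.0, 0.5]]    # arbitrary test matrix; column sums 4.0 and 2.5
C = one_norm(A)
x = [1.0, 0.0]                   # e_j for the maximizing column (j = 0) attains equality
lhs = sum(abs(v) for v in matvec(A, x))
print(C, lhs)  # 4.0 4.0
```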

## MT3802 COURSE NOTES:

Now suppose that $A \in \mathbb{R}^{n \times n}$ with $\|A\|<1$. Show that
$$(I-A)^{-1}=I+A(I-A)^{-1},$$
and hence that
$$\left\|(I-A)^{-1}\right\| \leq 1+\|A\|\left\|(I-A)^{-1}\right\| .$$
Deduce that
$$\left\|(I-A)^{-1}\right\| \leq \frac{1}{1-\|A\|} .$$
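The bound can be checked on a small example. A Python sketch (added illustration; the $2 \times 2$ matrix is an arbitrary choice with $\|A\|_{1}<1$) builds $(I-A)^{-1}$ from the Neumann series $I+A+A^{2}+\cdots$, which converges precisely because $\|A\|<1$:

```python
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def one_norm(X):
    n = len(X)
    return max(sum(abs(X[i][j]) for i in range(n)) for j in range(n))

A = [[0.2, 0.1], [0.05, 0.3]]                # ||A||_1 = 0.4 < 1, arbitrary test matrix
I = [[1.0, 0.0], [0.0, 1.0]]

# Neumann series: (I - A)^{-1} = I + A + A^2 + ...
S = [[0.0, 0.0], [0.0, 0.0]]
P = I
for _ in range(200):
    S = [[S[i][j] + P[i][j] for j in range(2)] for i in range(2)]
    P = matmul(P, A)

bound = 1.0 / (1.0 - one_norm(A))
print(one_norm(S) <= bound)  # True: ||(I - A)^{-1}|| <= 1 / (1 - ||A||)
```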

# Applied Statistics | MT3508


It is possible to sketch a proof of the central limit theorem when the moment generating function of $X$ exists in an open interval containing $t=0$. Let $Z_{m}$ be the standardized form of the sum $S_{m}$,
$$Z_{m}=\frac{S_{m}-m \mu}{\sigma \sqrt{m}}=\sum_{i=1}^{m} \frac{X_{i}-\mu}{\sigma \sqrt{m}}$$
and $W_{i}=\left(X_{i}-\mu\right) /(\sigma \sqrt{m})$ for $i=1,2, \ldots, m$. Then $E\left(Z_{m}\right)=0$, $\operatorname{Var}\left(Z_{m}\right)=1$, and, for each $i$, $E\left(W_{i}\right)=0$ and $\operatorname{Var}\left(W_{i}\right)=E\left(W_{i}^{2}\right)=1 / m$.

If $\operatorname{MGF}_{m}(t)$ is the moment generating function of $Z_{m}$ and $\operatorname{MGF}(t)$ is the moment generating function of each $W_{i}$, then by Corollary $5.6$
$$\operatorname{MGF}_{m}(t)=(\operatorname{MGF}(t))^{m}=\left(1+\frac{1}{2 m} t^{2}+\cdots\right)^{m},$$
where the expression in parentheses on the right is the Maclaurin series expansion of $\operatorname{MGF}(t)$. For values of $t$ near zero, it can be shown that
$$\lim _{m \rightarrow \infty} \operatorname{MGF}_{m}(t)=\lim _{m \rightarrow \infty}\left(1+\frac{t^{2} / 2}{m}\right)^{m}=e^{t^{2} / 2} .$$
The formula on the right is the moment generating function of the standard normal random variable.
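The limiting behaviour is easy to see by simulation. A Python sketch (an added example; the Uniform$(0,1)$ summands, $m=400$, and $2000$ replicates are arbitrary choices): the standardized sums should have mean near $0$, standard deviation near $1$, and about $95\%$ of them should fall within $\pm 1.96$:

```python
import random
import statistics

random.seed(0)
m, reps = 400, 2000
mu, sigma = 0.5, (1.0 / 12.0) ** 0.5   # mean and sd of a Uniform(0, 1) summand

zs = []
for _ in range(reps):
    s = sum(random.random() for _ in range(m))          # S_m
    zs.append((s - m * mu) / (sigma * m ** 0.5))        # standardized Z_m

inside = sum(1 for z in zs if abs(z) <= 1.96) / reps    # should be near 0.95
print(round(statistics.mean(zs), 2), round(statistics.stdev(zs), 2), round(inside, 2))
```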

## MT3508 COURSE NOTES:

Let $Z_{1}, Z_{2}, \ldots, Z_{m}$ be independent standard normal random variables. Then
$$V=Z_{1}^{2}+Z_{2}^{2}+\cdots+Z_{m}^{2}$$
is said to be a chi-square random variable, or to have a chi-square distribution, with parameter $m$. The PDF of $V$ is as follows:
$$f(x)=\frac{1}{2^{m / 2} \Gamma(m / 2)} x^{(m / 2)-1} e^{-x / 2} \text { when } x>0 \text { and } 0 \text { otherwise. }$$
The number of independent summands, $m$, is called the degrees of freedom (df) of the chi-square distribution. The notation $\chi_{p}^{2}$ is used to denote the $p^{\text {th }}$ quantile of the distribution.
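The first two moments of this construction, $E(V)=m$ and $\operatorname{Var}(V)=2 m$, follow directly from $E(Z_{i}^{2})=1$ and $E(Z_{i}^{4})=3$, and can be checked by simulation (an added Python sketch; $m=5$ and the replicate count are arbitrary):

```python
import random

random.seed(42)
m, reps = 5, 20000
# V = Z_1^2 + ... + Z_m^2 with independent standard normal Z_i
vs = [sum(random.gauss(0.0, 1.0) ** 2 for _ in range(m)) for _ in range(reps)]
mean_v = sum(vs) / reps                                    # E(V) = m
var_v = sum((v - mean_v) ** 2 for v in vs) / (reps - 1)    # Var(V) = 2m
print(round(mean_v, 1), round(var_v, 1))
```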

# Mathematical Statistics | MT3507


If $\mathbf{Z}=\mathbf{c}+\mathbf{A Y}$, where $\mathbf{Y}$ is a random vector and $\mathbf{A}$ is a fixed matrix and $\mathbf{c}$ is a fixed vector, then
$$E(\mathbf{Z})=\mathbf{c}+\mathbf{A} E(\mathbf{Y})$$
The $i$ th component of $\mathbf{Z}$ is
$$Z_{i}=c_{i}+\sum_{j=1}^{n} a_{i j} Y_{j}$$

By the linearity of the expectation,
$$E\left(Z_{i}\right)=c_{i}+\sum_{j=1}^{n} a_{i j} E\left(Y_{j}\right)$$
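The identity $E(\mathbf{Z})=\mathbf{c}+\mathbf{A} E(\mathbf{Y})$ can be verified exactly on a small discrete example (an added sketch; the distribution of $\mathbf{Y}$, the matrix $\mathbf{A}$, and the vector $\mathbf{c}$ are all assumed toy choices), comparing brute-force enumeration with the formula:

```python
from fractions import Fraction
from itertools import product

# toy random vector Y: Y1, Y2 i.i.d. uniform on {1, 2, 3}
vals = [Fraction(v) for v in (1, 2, 3)]
p = Fraction(1, 3)

A = [[Fraction(2), Fraction(-1)], [Fraction(0), Fraction(3)]]
c = [Fraction(1), Fraction(4)]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

# E(Z) by brute-force enumeration of the joint distribution of Y
EZ_enum = [Fraction(0), Fraction(0)]
for y1, y2 in product(vals, vals):
    z = matvec(A, [y1, y2])
    for i in range(2):
        EZ_enum[i] += p * p * (c[i] + z[i])

# E(Z) by the identity E(Z) = c + A E(Y)
EY = [sum(v * p for v in vals)] * 2          # E(Y1) = E(Y2) = 2
EZ_formula = [c[i] + v for i, v in enumerate(matvec(A, EY))]
print(EZ_enum == EZ_formula)  # True
```

Using exact rational arithmetic makes the two computations agree identically, not merely to rounding error.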

## MT3507 COURSE NOTES:

$$\bar{X}=\frac{1}{n} \mathbf{1}^{T} \mathbf{X}$$
where $\mathbf{1}$ is a vector consisting of all ones. The vector whose entries all equal $\bar{X}$ can thus be written as $(1 / n) \mathbf{1} \mathbf{1}^{T} \mathbf{X}$, and $\mathbf{A}$ can be written as
$$\mathbf{A}=\mathbf{I}-\frac{1}{n} \mathbf{1 1}^{T}$$
Thus,
$$\sum_{i=1}^{n}\left(X_{i}-\bar{X}\right)^{2}=\|\mathbf{A} \mathbf{X}\|^{2}=\mathbf{X}^{T} \mathbf{A}^{T} \mathbf{A} \mathbf{X} .$$
The matrix $\mathbf{A}$ has some special properties. In particular, $\mathbf{A}$ is symmetric, and $\mathbf{A}^{2}=$ $\mathbf{A}$, as can be verified by simply multiplying $\mathbf{A}$ by $\mathbf{A}$, noting that $\mathbf{1}^{T} \mathbf{1}=n$. Thus,
$$\mathbf{X}^{T} \mathbf{A}^{T} \mathbf{A X}=\mathbf{X}^{T} \mathbf{A X}$$
and by Theorem $\mathrm{C}$,
$$E\left(\mathbf{X}^{T} \mathbf{A X}\right)=\sigma^{2} \operatorname{trace}(\mathbf{A})+\mu^{T} \mathbf{A} \mu$$
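The special properties of the centering matrix are quick to confirm numerically. A Python sketch (an added example; the sample values are arbitrary) checks both $\mathbf{A}^{2}=\mathbf{A}$ and $\mathbf{X}^{T} \mathbf{A} \mathbf{X}=\sum_{i}(X_{i}-\bar{X})^{2}$:

```python
n = 4
xs = [2.0, 4.0, 6.0, 8.0]                      # arbitrary sample
xbar = sum(xs) / n

# centering matrix A = I - (1/n) * 1 1^T
A = [[(1.0 if i == j else 0.0) - 1.0 / n for j in range(n)] for i in range(n)]

Ax = [sum(A[i][j] * xs[j] for j in range(n)) for i in range(n)]   # deviations X_i - xbar
quad = sum(x * ax for x, ax in zip(xs, Ax))                       # X^T A X
dev_ss = sum((x - xbar) ** 2 for x in xs)

A2 = [[sum(A[i][k] * A[k][j] for k in range(n)) for j in range(n)] for i in range(n)]
idempotent = all(abs(A2[i][j] - A[i][j]) < 1e-12 for i in range(n) for j in range(n))
print(quad, dev_ss, idempotent)  # 20.0 20.0 True
```

Since $\operatorname{trace}(\mathbf{A})=n-1$ here, Theorem C then gives the familiar $E\left(\sum_{i}(X_{i}-\bar{X})^{2}\right)=(n-1) \sigma^{2}$ when $\mu = \mu\mathbf{1}$ with $\mathbf{A}\boldsymbol{\mu}=\mathbf{0}$.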

# Techniques of Applied Mathematics | MT3506


Proof. If $p q$ is null, the result is obvious (and vacuous) from . Let $p q$ be timelike and choose Minkowski normal coordinates $(t, x, y, z)$ for $N$, with origin at some point $r$ in $N$, lying to the past of $p$ along the extension of $p q$. Choose new coordinates for the region $\hat{N}$ given by $t>\left(x^{2}+y^{2}+z^{2}\right)^{1 / 2}$ as follows:
$$T=\left(t^{2}-x^{2}-y^{2}-z^{2}\right)^{1 / 2},$$

$$X^{1}=\frac{x}{t}, \quad X^{2}=\frac{y}{t}, \quad X^{3}=\frac{z}{t}$$
Since the curves $X^{1}, X^{2}, X^{3}=$ const. are timelike geodesics through $r$, and the surfaces $T=$ const. are spacelike hypersurfaces orthogonal to these, where $T\left(={\Phi(r,)}^{1 / 2}\right)$ measures the length (i.e., proper time) along the geodesic from $r$, we have what is known as a synchronous coordinate system for $N$ (i.e., a Gaussian normal coordinate system in which the geodesics are timelike, being orthogonal to a system of spacelike coordinate hypersurfaces). The metric therefore has the form

## MT3506 COURSE NOTES:

$$l\left(\gamma_{\xi^{\prime}}\right) \leq l\left(\gamma_{\xi}\right) \text { if } \xi \subset \xi^{\prime}$$
by repeated application of $7.2$. Also, given $\xi, \xi^{\prime} \in \Xi$ we have
$$l\left(\gamma_{\xi^{\prime \prime}}\right) \leq \min \left(l\left(\gamma_{\xi}\right), l\left(\gamma_{\xi^{\prime}}\right)\right),$$
where $\xi^{\prime \prime}=\xi \cup \xi^{\prime}$. Finally, define $l: \mathscr{G}(p, q) \rightarrow \mathbb{R}$ by

# Algebra: Rings and Fields | MT3505


Proof. Multiplication in $R / A$ is well-defined, since if $r_{1}+A=r_{1}^{\prime}+A$ and $r_{2}+A=r_{2}^{\prime}+A$, then writing $r_{i}^{\prime}=r_{i}+a_{i}$ for $a_{i}$ in $A$ we see
$$\begin{aligned} r_{1}^{\prime} r_{2}^{\prime}+A &=\left(r_{1}+a_{1}\right)\left(r_{2}+a_{2}\right)+A \\ &=r_{1} r_{2}+\left(r_{1} a_{2}+a_{1} r_{2}+a_{1} a_{2}\right)+A=r_{1} r_{2}+A . \end{aligned}$$

Associativity and distributivity are easy to verify in $R / A$, as a consequence of the respective axioms in $R$. Moreover,
$$(1+A)(r+A)=r+A=(r+A)(1+A),$$
so $1+A$ is the unit element of $R / A$. We already know $\varphi$ is a group homomorphism with respect to $+$, and $\varphi(1)=1+A$, and
$$\varphi\left(r_{1}\right) \varphi\left(r_{2}\right)=\left(r_{1}+A\right)\left(r_{2}+A\right)=r_{1} r_{2}+A=\varphi\left(r_{1} r_{2}\right)$$
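The well-definedness argument is concrete in the simplest quotient ring, $R=\mathbb{Z}$ and $A=n \mathbb{Z}$ (an added toy check, not from the notes): shifting either representative by an element of $A$ leaves the product coset unchanged:

```python
# R = Z, A = nZ: coset multiplication is independent of the chosen representatives
n = 6
r1, r2 = 5, 4
a1, a2 = 2 * n, -3 * n                       # arbitrary elements of A = nZ
lhs = ((r1 + a1) * (r2 + a2)) % n            # product of shifted representatives
rhs = (r1 * r2) % n                          # product of the original representatives
print(lhs == rhs)  # True
```

Expanding $(r_{1}+a_{1})(r_{2}+a_{2})$ shows why: the cross terms $r_{1} a_{2}+a_{1} r_{2}+a_{1} a_{2}$ all lie in $A$, exactly as in the proof above.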

## MT3505 COURSE NOTES:

For perhaps a thousand years or more, one of the major research questions in mathematics was to solve the general cubic equation
$$x^{3}+b x^{2}+c x+d=0 .$$
In analogy to the quadratic case, one can obtain the simpler equation
$$y^{3}+p y+q=0,$$
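The reduction can be made explicit (a standard computation, sketched here for completeness): substituting $x=y-b / 3$ eliminates the quadratic term, since
$$x^{3}+b x^{2}+c x+d=y^{3}+\left(c-\frac{b^{2}}{3}\right) y+\left(\frac{2 b^{3}}{27}-\frac{b c}{3}+d\right),$$
so the depressed cubic has $p=c-b^{2} / 3$ and $q=2 b^{3} / 27-b c / 3+d$, in direct analogy with completing the square in the quadratic case.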

# Differential Equations | MT3504


$$x^{2} \frac{\partial}{\partial x}+(y-x) \frac{\partial}{\partial y}$$
has vertical hyperbolic direction $\mathbb{C} \cdot(0,1)$ and the central direction $\mathbb{C} \cdot(1,1)$. The central manifold, if it exists, must be represented as the graph of the function $y=\varphi(x), \varphi(x)=x+\sum_{k \geqslant 2} c_{k} x^{k}$. However, this series diverges, as was noticed already by L. Euler. Indeed, the function $\varphi$ must be solution to the differential equation

$$\frac{d \varphi}{d x}=\frac{\varphi(x)-x}{x^{2}}$$
which implies the recurrence formula for the coefficients,
$$k c_{k}=c_{k+1}, \quad k=1,2, \ldots, \quad c_{1}=1 .$$
The factorial series with $c_{k}=(k-1)!$ has zero radius of convergence, hence no analytic central manifold exists.
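Euler's divergence argument is a one-line computation (an added Python sketch of the recurrence from the notes): iterating $c_{k+1}=k c_{k}$ from $c_{1}=1$ produces factorials, and the ratio test then gives radius of convergence zero:

```python
import math

# recurrence from the notes: c_{k+1} = k * c_k with c_1 = 1, hence c_k = (k-1)!
c = {1: 1}
for k in range(1, 12):
    c[k + 1] = k * c[k]

print(c[5] == math.factorial(4))  # True: c_5 = 4! = 24

# ratio test: c_{k+1} / c_k = k grows without bound, so the radius of convergence is 0
ratios = [c[k + 1] / c[k] for k in range(1, 11)]
print(ratios[:3])  # [1.0, 2.0, 3.0]
```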

## MT3504 COURSE NOTES:

$$i\left(a^{\prime}, S^{\prime}, \mathcal{F}^{\prime}\right)=i(a, S, \mathcal{F})-1 \text {. }$$
Proof. The Pfaffian equation $\omega=0$ in suitable local coordinates takes the form
$$\frac{d y}{d x}=r(x) y+\cdots, \quad x \in(\mathbb{C}, 0),$$
where the dots denote meromorphic terms divisible by $y^{2}$ and $r(x)$ is a meromorphic function whose residue is $i(0, D, \mathcal{F})$.

Blowing up means introducing the new variable $z=y / x$ linearly depending on $y$. Changing the variable in the above equation (i.e., applying a meromorphic gauge transform in the terminology of Chapter III) yields after linearization on $\{y=0\}$ the differential equation
$$\frac{d z}{d x}=r^{\prime}(x) z, \quad r^{\prime}(x)=r(x)-\frac{1}{x} .$$
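To connect this with the index formula at the start of the proof (a bridging remark added for clarity): since $r^{\prime}(x)=r(x)-1 / x$, the residue at the origin drops by exactly one,
$$\operatorname{res}_{0} r^{\prime}=\operatorname{res}_{0} r-1,$$
and because the residue of $r$ computes the index of the foliation along the divisor, this is precisely the asserted drop $i\left(a^{\prime}, S^{\prime}, \mathcal{F}^{\prime}\right)=i(a, S, \mathcal{F})-1$.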