# Classical Field Theory MATHS5054_1


We consider an applied field $E_{0}$ that is small compared to the internal fields of the atoms; that is,
$$E_{0} \ll \frac{e}{a_{B}^{2}} \sim \frac{26 \mathrm{~Volts}}{a_{B}} \sim 5 \times 10^{9} \mathrm{~Volts} / \mathrm{cm}$$
where $e$ is the electron’s charge and $a_{B}$ the Bohr radius:
$$a_{B}=\frac{\hbar^{2}}{m e^{2}} \sim \frac{1}{2} \times 10^{-8} \mathrm{~cm}$$
Here, $\hbar$ is Planck’s constant divided by $2 \pi$ and $m$ is the electron mass.
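As a quick numerical check (a Python sketch, not part of the notes), the quoted field strength follows directly from these values:

```python
# Internal atomic field from the notes: e/a_B^2 ~ 26 Volts / a_B.
a_B_cm = 0.529e-8            # Bohr radius in cm (~ (1/2) * 10^-8 cm)
E_int = 26.0 / a_B_cm        # Volts per cm
print(f"E_int ~ {E_int:.1e} V/cm")   # of order 5e9 V/cm
```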

## MATHS5054_1 COURSE NOTES:

$$Q_{i j}=\alpha_{Q}\left(\frac{\partial E_{i}}{\partial x_{j}}+\frac{\partial E_{j}}{\partial x_{i}}\right)$$
The quadrupole polarizability $\alpha_{Q}$ has the dimensionality $L^{5}$, so we expect $\alpha_{Q}$ for an atom to be $\sim a_{B}^{5}$. The field generated by the induced moment, in analogy with the dipole case, will be
$$\delta E_{Q} \sim \alpha_{Q} \frac{\partial E_{0}}{\partial x} \sum_{i} \frac{1}{\left|\mathbf{r}-\mathbf{r}_{i}\right|^{4}}$$ or $$\delta E_{Q} \sim \frac{E_{0}}{R} \times \frac{N \alpha_{Q}}{R^{4}}$$

# MATHS 3R MATHS3021_1/MATHS2035_1


which is bounded above by $4 H K$ where
$$H=\limsup_{k, \delta} \frac{\delta}{c(\delta)} \int_{0}^{\infty} \psi_{\delta}(v) \sigma_{k \delta-v}^{2} \mathrm{~d} v$$
and
$$K=\frac{\delta}{c(\delta)} \sum_{k=1}^{n}\left(\int_{0}^{\infty} \chi_{\delta}(v) a_{k \delta-v} \mathrm{~d} v\right)^{2}$$
Here
$$\int_{0}^{\infty} \psi_{\delta}(v) \sigma_{k \delta-v}^{2} \mathrm{~d} v \leq C c(\delta)$$
where the constant $C$ depends on $t$ and $\sigma$. Hence $H \rightarrow 0$. Furthermore,
\begin{aligned} \sum_{k=1}^{n}\left(\int_{0}^{\infty} \chi_{\delta}(v) a_{k \delta-v} \mathrm{~d} v\right)^{2} & \leq C \sum_{k=1}^{n}\left(\int_{0}^{\infty}\left|\chi_{\delta}(v)\right| \mathrm{d} v\right)^{2} \\ &=C \delta^{-1}\left(\int_{0}^{\infty}\left|\chi_{\delta}(v)\right| \mathrm{d} v\right)^{2} \end{aligned}
where $C$, again, depends on $t$ and $a$. Hence Condition A implies $K \rightarrow 0$.

## MATHS3021_1/MATHS2035_1 COURSE NOTES:

Here we find
$$c_{1}(\delta)=\frac{1}{2 \lambda}\left(1-e^{-2 \lambda \delta}\right) \sim \delta$$
while for $k=2, \ldots, n$
$$c_{k}(\delta)=\frac{1}{2 \lambda}\left(e^{\lambda \delta}-1\right)^{3} e^{-2 k \lambda} \sim \frac{\lambda^{2}}{2} e^{-2 k \lambda} \delta^{3} .$$
Moreover we have
$$c_{n+1}(\delta)=\frac{1}{2 \lambda}\left(1-e^{-2 \lambda \delta}\right) e^{-2 \lambda l} \sim e^{-2 \lambda l} \delta,$$
whereas $c_{k}(\delta)=0$ for $k>n+1$. Finally, $c(\delta) \sim \delta\left(1+e^{-2 \lambda l}\right)$ and
$$\bar{c}_{n+1}(\delta) c(\delta)^{-1} \int_{l}^{l+\delta} g^{2}(v-\delta) \mathrm{~d} v \rightarrow\left(1+e^{2 \lambda l}\right)^{-1} .$$
So Conditions A, C and D are met, but Condition B is not, and we have $\pi_{\delta} \rightarrow \pi$, where
$$\pi=\frac{1}{1+e^{-2 \lambda l}} \delta_{0}+\frac{1}{1+e^{2 \lambda l}} \delta_{1},$$
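As a numerical sanity check (a Python sketch, not part of the notes; $\lambda=1.3$ is an arbitrary choice), the asymptotic $c_{1}(\delta) \sim \delta$ can be verified directly:

```python
import math

lam = 1.3   # hypothetical rate, for illustration only
for delta in (1e-2, 1e-4, 1e-6):
    c1 = (1 - math.exp(-2 * lam * delta)) / (2 * lam)   # c_1(delta)
    print(delta, c1 / delta)   # ratio tends to 1, confirming c_1(delta) ~ delta
```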

# Statistical Genetics STATS4074_1/STATS5011_1


\begin{aligned} &d_{i, i}=0 ; \\ &d_{i, j}=d_{j, i}>0 \text { for } i \neq j ; \text { and } \\ &d_{i, j} \leq d_{i, k}+d_{j, k} \text { (this being the triangle inequality). } \end{aligned}
The first two are straightforward: each thing is identical to itself ($d_{i, i}=0$); the difference between $\mathrm{A}$ and $\mathrm{B}$ must be the same as that between $\mathrm{B}$ and $\mathrm{A}$ (thus $d_{i, j}=d_{j, i}$); and the value must be positive when they differ. The third part of the definition, the triangle inequality, is also a simple concept: the direct distance between London and Sydney cannot exceed the distance travelled via any third city.

## STATS4074_1/STATS5011_1 COURSE NOTES:

From regression theory, the vector of partial regression coefficients for predicting the value of $y$ given a vector of observations $\mathbf{z}$ is $\mathbf{P}^{-1} \sigma(\mathbf{z}, y)$, where $\mathbf{P}$ is the covariance matrix of $\mathbf{z}$, and $\sigma(\mathbf{z}, y)$ is the vector of covariances between the elements of $\mathbf{z}$ and the variable $y$. Since $\boldsymbol{S}=\sigma(\mathbf{z}, \omega)$, it immediately follows that
$$\mathbf{P}^{-1} \sigma(\mathbf{z}, \omega)=\mathbf{P}^{-1} \mathbf{S}=\boldsymbol{\beta}$$
is the vector of partial regression coefficients for the best linear regression of relative fitness $\omega$ on phenotypic value $\mathbf{z}$, viz.,
$$\omega(\mathbf{z})=1+\sum_{j=1}^{n} \beta_{j}\left(z_{j}-\mu_{j}\right)=1+\beta^{\mathrm{T}}(\mathbf{z}-\boldsymbol{\mu}) .$$
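The computation $\boldsymbol{\beta}=\mathbf{P}^{-1} \mathbf{S}$ can be sketched numerically; the matrices below are made-up illustrative values, not data from the notes:

```python
import numpy as np

# Hypothetical numbers: p = 2 traits.
P = np.array([[2.0, 0.5],
              [0.5, 1.0]])          # phenotypic covariance matrix of z
S = np.array([0.3, -0.1])           # selection differentials sigma(z, w)
beta = np.linalg.solve(P, S)        # beta = P^{-1} S
mu = np.array([10.0, 5.0])          # phenotypic mean vector

def w_hat(z):
    """Best linear prediction of relative fitness at phenotype z."""
    return 1.0 + beta @ (z - mu)

print(beta, w_hat(mu))              # w_hat(mu) = 1 by construction
```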

# Algebra MATHS4072_1


$$\rho\left(J_{p}\right)<\frac{p-1-s}{p-2}$$
then the regions of convergence of the SOR method ($\rho\left(\mathscr{L}_{\omega}\right)<1$) are: for $s=1$, $\omega \in\left(0, \frac{p}{p-1}\right)$; and for $s=-1$, $\omega \in\left(\frac{p-2}{p-1}, \frac{2}{1+\rho\left(J_{p}\right)}\right)$.

## MATHS4072_1 COURSE NOTES:

\begin{aligned} &\dot{x}^{k}(t)=G\left(t, x^{k}(t), x^{k-1}(t)\right), \quad t \in[0, T], \\ &x^{k}(0)=x_{0} \end{aligned}
for $k=1,2, \ldots .$ Here, the function $x^{k-1}$ is known and $x^{k}$ is to be determined.
Note that the familiar Picard iteration
\begin{aligned} &\dot{x}^{k}(t)=F\left(t, x^{k-1}(t)\right), \quad t \in[0, T], \\ &x^{k}(0)=x_{0} \end{aligned}
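As a concrete illustration (a hypothetical Python sketch, not part of the notes), here is Picard iteration for $\dot{x}=x$, $x(0)=1$, whose iterates are the Taylor partial sums of $e^{t}$:

```python
import numpy as np

# Picard iteration for x' = F(t, x) with F(t, x) = x, x(0) = 1.
# Each sweep integrates the previous iterate: x^k(t) = x0 + int_0^t x^{k-1}(s) ds.
t = np.linspace(0.0, 1.0, 1001)
x0 = 1.0
x_prev = np.full_like(t, x0)        # x^0(t) = x0 (constant initial guess)
for k in range(10):
    integrand = x_prev              # F(t, x^{k-1}) = x^{k-1}
    # cumulative trapezoid rule for the integral from 0 to t
    steps = (integrand[1:] + integrand[:-1]) / 2 * np.diff(t)
    x_prev = x0 + np.concatenate(([0.0], np.cumsum(steps)))
print(abs(x_prev[-1] - np.e))       # small: iterates converge to e^t
```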

# Principles of Prob & Stats (M) STATS5022_1/STATS4047_1


Here the population mean and the standard deviation are $\mu=110$ and $\sigma=10$, respectively. The sample size $n=75$ is large, so the central limit theorem ensures that the distribution of $\bar{X}$ is approximately normal with
Mean of $\bar{X}=110$
Standard deviation of $\bar{X}=\frac{\sigma}{\sqrt{n}}=\frac{10}{\sqrt{75}}=1.155$
To find $P[109<\bar{X}<112]$ we convert to the standardized variable
$$Z=\frac{\bar{X}-110}{1.155}$$
and calculate the z-values
$$\frac{109-110}{1.155}=-.866, \quad \frac{112-110}{1.155}=1.732$$
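The remaining probability lookup can be reproduced in a few lines of Python (a sketch, not part of the notes), using the error function in place of normal tables:

```python
from math import erf, sqrt

def Phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# P[109 < Xbar < 112] = Phi(1.732) - Phi(-0.866), approximately 0.765
p = Phi(1.732) - Phi(-0.866)
print(round(p, 4))
```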

## STATS5022_1/STATS4047_1 COURSE NOTES:

$$\bar{x}=\$ 227 \quad \text { and } \quad s=\$ 15$$
With $1-\alpha=.90$ we have $\alpha / 2=.05$, and $z_{\alpha / 2}=1.645$, so
$$1.645 \frac{s}{\sqrt{n}}=\frac{1.645 \times 15}{\sqrt{75}}=2.85$$
Hence, a $90 \%$ confidence interval for the population mean $\mu$ is $227 \pm 2.85$, that is, $(\$ 224.15, \$ 229.85)$.
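The arithmetic can be checked with a short Python sketch (not part of the notes):

```python
from math import sqrt

xbar, s, n, z = 227.0, 15.0, 75, 1.645
margin = z * s / sqrt(n)              # 1.645 * s / sqrt(n), about 2.85
lo, hi = xbar - margin, xbar + margin
print(round(margin, 2), (round(lo, 2), round(hi, 2)))
```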

# Mathematics 1 MATHS1017_1


$2 x+3$ and $x-1$ are factors of $2 x^{4}+a x^{3}-3 x^{2}+b x+3$.
Find $a$ and $b$ and all zeros of the polynomial.
Since $2 x+3$ and $x-1$ are factors,
$2 x^{4}+a x^{3}-3 x^{2}+b x+3=(2 x+3)(x-1)($ a quadratic) $=\left(2 x^{2}+x-3\right)\left(x^{2}+c x-1\right)$ for some $c$
Equating coefficients of $x^{2}$ gives: $\quad-3=-2+c-3$
$\therefore c=2$

Equating coefficients of $x^{3}: \quad a=2 c+1=4+1=5$
Equating coefficients of $x$ :
\begin{aligned} b &=-1-3 c \\ \therefore \quad b &=-1-6=-7 \end{aligned}
$\therefore \quad P(x)=(2 x+3)(x-1)\left(x^{2}+2 x-1\right)$ which has zeros of: $\quad-\frac{3}{2}, 1$ and $\frac{-2 \pm \sqrt{4-4(1)(-1)}}{2}=\frac{-2 \pm 2 \sqrt{2}}{2}=-1 \pm \sqrt{2}$
$\therefore$ the zeros are $-\frac{3}{2}, 1$ and $-1 \pm \sqrt{2}$.
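As a sanity check (a short Python sketch, not part of the original solution), with $a=5$ and $b=-7$ each claimed zero can be tested numerically:

```python
# Evaluate P(x) = 2x^4 + 5x^3 - 3x^2 - 7x + 3 at each claimed zero.
def P(x):
    return 2*x**4 + 5*x**3 - 3*x**2 - 7*x + 3

roots = (-1.5, 1.0, -1 + 2**0.5, -1 - 2**0.5)
print([abs(P(r)) < 1e-9 for r in roots])   # all True
```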

## MATHS1017_1 COURSE NOTES:

Find $k$ given that $x-2$ is a factor of $x^{3}+k x^{2}-3 x+6$. Hence, fully factorise $x^{3}+k x^{2}-3 x+6$.
Let $P(x)=x^{3}+k x^{2}-3 x+6$
By the Factor theorem, as $x-2$ is a factor then $P(2)=0$
\begin{aligned} \therefore \quad(2)^{3}+k(2)^{2}-3(2)+6 &=0 \\ \therefore 8+4 k &=0 \quad \text { and so } k=-2 \end{aligned}
Now $x^{3}-2 x^{2}-3 x+6=(x-2)\left(x^{2}+a x-3\right)$ for some constant $a$.
Equating coefficients of $x^{2}$ gives: $\quad-2=-2+a \quad$ i.e., $a=0$
Equating coefficients of $x$ gives: $\quad-3=-2 a-3 \quad$ i.e., $\quad a=0$
\begin{aligned} \therefore x^{3}-2 x^{2}-3 x+6 &=(x-2)\left(x^{2}-3\right) \\ &=(x-2)(x+\sqrt{3})(x-\sqrt{3}) \end{aligned}
An alternative layout of the same working, via synthetic division by $x-2$:
$$P(2)=4 k+8 \quad \text { and since } \quad P(2)=0, \quad k=-2$$
Now $P(x)=(x-2)\left(x^{2}+[k+2] x+[2 k+1]\right)$
\begin{aligned} &=(x-2)\left(x^{2}-3\right) \\ &=(x-2)(x+\sqrt{3})(x-\sqrt{3}) \end{aligned}

# Multivariate Methods (Level M) STATS5021_1


If every $y$ in the population is multiplied by a constant $a$, the expected value is also multiplied by $a$ :
$$E(a y)=a E(y)=a \mu .$$
The sample mean has a similar property. If $z_{i}=a y_{i}$ for $i=1,2, \ldots, n$, then
$$\bar{z}=a \bar{y}$$

The variance of the population is defined as $\operatorname{var}(y)=\sigma^{2}=E(y-\mu)^{2}$. This is the average squared deviation from the mean and is thus an indication of the extent to which the values of $y$ are spread or scattered. It can be shown that $\sigma^{2}=E\left(y^{2}\right)-\mu^{2}$.
The sample variance is defined as
$$s^{2}=\frac{\sum_{i=1}^{n}\left(y_{i}-\bar{y}\right)^{2}}{n-1}$$
which can be shown to be equal to
$$s^{2}=\frac{\sum_{i=1}^{n} y_{i}^{2}-n \bar{y}^{2}}{n-1} .$$
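Both forms of $s^{2}$ can be checked on a made-up sample (a Python sketch, not from the notes):

```python
import numpy as np

y = np.array([3.0, 7.0, 1.0, 4.0, 5.0])   # made-up sample, ybar = 4
n = len(y)
s2_dev = ((y - y.mean())**2).sum() / (n - 1)          # definition
s2_raw = (np.sum(y**2) - n * y.mean()**2) / (n - 1)   # computational form
print(s2_dev, s2_raw)                                  # both equal 5.0
```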

## STATS5021_1 COURSE NOTES:

\begin{aligned} z_{i} &=a_{1} y_{i 1}+a_{2} y_{i 2}+\cdots+a_{p} y_{i p} \\ &=\mathbf{a}^{\prime} \mathbf{y}_{i}, \quad i=1,2, \ldots, n \end{aligned}
The sample mean of $z$ can be found either by averaging the $n$ values $z_{1}=\mathbf{a}^{\prime} \mathbf{y}_{1}, z_{2}=\mathbf{a}^{\prime} \mathbf{y}_{2}, \ldots, z_{n}=\mathbf{a}^{\prime} \mathbf{y}_{n}$ or as a linear combination of $\overline{\mathbf{y}}$, the sample mean vector of $\mathbf{y}_{1}, \mathbf{y}_{2}, \ldots, \mathbf{y}_{n}$:
$$\bar{z}=\frac{1}{n} \sum_{i=1}^{n} z_{i}=\mathbf{a}^{\prime} \overline{\mathbf{y}}$$
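A quick numerical check (made-up data, a Python sketch) that averaging the $z_{i}$ agrees with $\mathbf{a}^{\prime} \overline{\mathbf{y}}$:

```python
import numpy as np

# made-up data: n = 4 observations of p = 3 variables (rows are y_i')
Y = np.array([[1.0, 2.0, 0.5],
              [3.0, 1.0, 1.5],
              [2.0, 4.0, 2.5],
              [0.0, 3.0, 0.5]])
a = np.array([2.0, -1.0, 0.5])
z = Y @ a                         # z_i = a' y_i
zbar_direct = z.mean()            # average the n values z_i
zbar_linear = a @ Y.mean(axis=0)  # a' ybar
print(zbar_direct, zbar_linear)   # identical
```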

# Alg & Geom Topology MATHS5065_1/MATHS4112_1


Let $\mu$ be a mass distribution on $F$ and suppose that for some s there are numbers $c>0$ and $\varepsilon>0$ such that
$$\mu(U) \leqslant c|U|^{s}$$
for all sets $U$ with $|U| \leqslant \varepsilon$. Then $\mathcal{H}^{s}(F) \geqslant \mu(F) / c$ and
$$s \leqslant \operatorname{dim}_{\mathrm{H}} F \leqslant \underline{\operatorname{dim}}_{\mathrm{B}} F \leqslant \overline{\operatorname{dim}}_{\mathrm{B}} F .$$

If $\left\{U_{i}\right\}$ is any cover of $F$ then
$$0<\mu(F) \leqslant \mu\left(\bigcup_{i} U_{i}\right) \leqslant \sum_{i} \mu\left(U_{i}\right) \leqslant c \sum_{i}\left|U_{i}\right|^{s}$$
using properties of a measure and (4.1).
Taking infima, $\mathcal{H}_{\delta}^{s}(F) \geqslant \mu(F) / c$ if $\delta$ is small enough, so $\mathcal{H}^{s}(F) \geqslant \mu(F) / c$. Since $\mu(F)>0$ we get $\operatorname{dim}_{\mathrm{H}} F \geqslant s$.

Notice that the conclusion $\mathcal{H}^{s}(F) \geqslant \mu(F) / c$ remains true if $\mu$ is a mass distribution on $\mathbb{R}^{n}$ and $F$ is any subset.
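A standard worked example of this mass distribution principle (not taken from these notes) is the middle-third Cantor set $F$, with $s=\log 2 / \log 3$ and the natural mass distribution giving each of the $2^{k}$ level-$k$ intervals mass $2^{-k}$:

```latex
Let $3^{-(k+1)} \leqslant|U|<3^{-k}$. The gaps between level-$k$ intervals have
length at least $3^{-k}$, so $U$ meets at most one of them, whence
$$\mu(U) \leqslant 2^{-k}=\left(3^{-k}\right)^{s}
  =\left(3 \cdot 3^{-(k+1)}\right)^{s} \leqslant 3^{s}|U|^{s}=2|U|^{s},$$
using $3^{s}=2$. The hypothesis thus holds with $c=2$, giving
$\mathcal{H}^{s}(F) \geqslant 1 / 2$ and
$\operatorname{dim}_{\mathrm{H}} F \geqslant \log 2 / \log 3$.
```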

## MATHS5065_1/MATHS4112_1 COURSE NOTES:

Each $k$ th level interval supports mass $\left(m_{1} \cdots m_{k}\right)^{-1}$ so that

for every $0 \leqslant s \leqslant 1$.
Hence
$$\frac{\mu(U)}{|U|^{s}} \leqslant \frac{2^{s}}{\left(m_{1} \cdots m_{k-1}\right) m_{k}^{s} \varepsilon_{k}^{s}}$$
If
$$s<\lim_{k \rightarrow \infty} \frac{\log \left(m_{1} \cdots m_{k-1}\right)}{-\log \left(m_{k} \varepsilon_{k}\right)}$$

# Galois Theory MATHS4105_1/MATHS5071_1


Now let $\beta \in \mathbf{F}(\alpha)$. If $\beta \in \mathbf{F}$, there is nothing to prove (as then $m_{\beta}(X)=$ $X-\beta$ ) so assume $\beta \notin \mathbf{F}$. Then $\beta=\sum_{i=0}^{m-1} b_{i} \alpha^{i}$ with $b_{i} \in \mathbf{F}$ (and $m=p^{r}$ ), so
$$\beta^{p^{r}}=\left(\sum_{i=0}^{m-1} b_{i} \alpha^{i}\right)^{p^{r}}=\sum_{i=0}^{m-1} b_{i}^{p^{r}}\left(\alpha^{p^{r}}\right)^{i}=\sum_{i=0}^{m-1} b_{i}^{p^{r}} a^{i} \in \mathbf{F}$$

so $\mathbf{F}\left(\beta^{p^{r}}\right)=\mathbf{F}$ and hence $\mathbf{F}\left(\beta^{p^{r}}\right) \subsetneq \mathbf{F}(\beta)$. Then, by Lemma 5.1.4, $\beta$ is inseparable over $\mathbf{F}$. (If $\beta$ were separable over $\mathbf{F}$, we would have $\mathbf{F}(\beta)=\mathbf{F}\left(\beta^{p}\right)=\mathbf{F}\left(\left(\beta^{p}\right)^{p}\right)=\cdots=\mathbf{F}\left(\beta^{p^{r}}\right)$, a contradiction.)
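The middle equality above rests on the Frobenius identity $(u+v)^{p}=u^{p}+v^{p}$ in characteristic $p$. A quick Python check of this identity over the integers mod $p$ (an illustrative sketch; $p=5$ chosen arbitrarily):

```python
# "Freshman's dream" in characteristic p: (a+b)^p == a^p + b^p (mod p),
# since p divides every interior binomial coefficient C(p, i), 0 < i < p.
p = 5
ok = all((a + b)**p % p == (a**p + b**p) % p
         for a in range(p) for b in range(p))
print(ok)   # True
```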

## MATHS4105_1/MATHS5071_1 COURSE NOTES:

This is an application of Zorn’s Lemma. Let
$$\mathcal{S}=\left\{(\mathbf{D}, \tau) \mid \mathbf{B} \subseteq \mathbf{D},\ \tau: \mathbf{D} \rightarrow \mathbf{D},\ \tau | \mathbf{B}=\sigma\right\}$$
Order $\mathcal{S}$ by $\left(\mathbf{D}_{1}, \tau_{1}\right) \leq\left(\mathbf{D}_{2}, \tau_{2}\right)$ if $\mathbf{D}_{1} \subseteq \mathbf{D}_{2}$ and $\tau_{2} | \mathbf{D}_{1}=\tau_{1}$. Then every chain $\left\{\left(\mathbf{D}_{i}, \tau_{i}\right)\right\}$ in $\mathcal{S}$ has an upper bound $(\mathbf{D}, \tau)$ given by
$$\mathbf{D}=\bigcup_{i} \mathbf{D}_{i}, \quad \tau(d)=\tau_{i}(d) \text { if } d \in \mathbf{D}_{i}$$

# 4H: Fluid Mechanics MATHS4102_1


at least in a neighborhood of the static state $(\bar{\varrho}, \bar{\vartheta})$, we conclude, in agreement with the formal asymptotic expansion discussed above, that the quantities
$$\varrho_{0, \varepsilon}^{(1)}=\frac{\varrho(0, \cdot)-\bar{\varrho}}{\varepsilon}, \quad \vartheta_{0, \varepsilon}^{(1)}=\frac{\vartheta(0, \cdot)-\bar{\vartheta}}{\varepsilon}, \quad \text { and } \quad \mathbf{u}_{0, \varepsilon}=\mathbf{u}(0, \cdot)$$
have to be bounded uniformly as $\varepsilon \rightarrow 0$; in the terminology introduced earlier, the initial data must be at least ill-prepared.

As a direct consequence of the structural properties of $H_{\bar{\vartheta}}$ established above, it is not difficult to deduce that
$$\varrho^{(1)}(t, \cdot)=\frac{\varrho(t, \cdot)-\bar{\varrho}}{\varepsilon} \quad \text { and } \quad \vartheta^{(1)}(t, \cdot)=\frac{\vartheta(t, \cdot)-\bar{\vartheta}}{\varepsilon}$$
remain bounded, at least in $L^{1}(\Omega)$, uniformly for $t \in[0, T]$ and $\varepsilon \rightarrow 0$.

## MATHS4102_1 COURSE NOTES:

$$\vartheta \mapsto H_{\bar{\vartheta}}(\varrho, \vartheta)-(\varrho-\bar{\varrho}) \frac{\partial H_{\bar{\vartheta}}(\bar{\varrho}, \bar{\vartheta})}{\partial \varrho}-H_{\bar{\vartheta}}(\bar{\varrho}, \bar{\vartheta})$$
is decreasing for $\vartheta<\bar{\vartheta}$ and increasing whenever $\vartheta>\bar{\vartheta}$; whence $(5.39)$ follows. Finally, as $\mathcal{F}$ is strictly convex, we have
$$H_{\bar{\vartheta}}(\varrho, \vartheta)-(\varrho-\bar{\varrho}) \frac{\partial H_{\bar{\vartheta}}(\bar{\varrho}, \bar{\vartheta})}{\partial \varrho}-H_{\bar{\vartheta}}(\bar{\varrho}, \bar{\vartheta}) \geq c(\bar{\varrho}, \bar{\vartheta}) \varrho \text { whenever } \varrho \geq 2 \bar{\varrho}$$