# Maths 2 PHYS130001/PHYS238001


Let $b_{\alpha \beta \mid \sigma}:=\partial_{\sigma} b_{\alpha \beta}-\Gamma_{\alpha \sigma}^{\tau} b_{\tau \beta}-\Gamma_{\beta \sigma}^{\tau} b_{\alpha \tau}$ denote the first-order covariant derivatives of the curvature tensor, defined here by means of its covariant components. Show that these covariant derivatives satisfy the Codazzi-Mainardi identities
$$b_{\alpha \beta \mid \sigma}=b_{\alpha \sigma \mid \beta}$$
which are themselves equivalent to the relations (Thm. 2.8-1)
$$\partial_{\sigma} b_{\alpha \beta}-\partial_{\beta} b_{\alpha \sigma}+\Gamma_{\alpha \beta}^{\tau} b_{\tau \sigma}-\Gamma_{\alpha \sigma}^{\tau} b_{\tau \beta}=0$$
Hint: The proof is analogous to the one used to establish the relations $\left.b_{\beta}^{\tau}\right|_{\alpha}=\left.b_{\alpha}^{\tau}\right|_{\beta}$.
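A short sketch of the computation the hint points to: expand both covariant derivatives and subtract; the terms containing $\Gamma_{\beta \sigma}^{\tau} b_{\alpha \tau}$ cancel because the Christoffel symbols are symmetric in their lower indices.

```latex
\begin{aligned}
b_{\alpha \beta \mid \sigma}-b_{\alpha \sigma \mid \beta}
&=\left(\partial_{\sigma} b_{\alpha \beta}-\Gamma_{\alpha \sigma}^{\tau} b_{\tau \beta}-\Gamma_{\beta \sigma}^{\tau} b_{\alpha \tau}\right)
 -\left(\partial_{\beta} b_{\alpha \sigma}-\Gamma_{\alpha \beta}^{\tau} b_{\tau \sigma}-\Gamma_{\sigma \beta}^{\tau} b_{\alpha \tau}\right) \\
&=\partial_{\sigma} b_{\alpha \beta}-\partial_{\beta} b_{\alpha \sigma}
 +\Gamma_{\alpha \beta}^{\tau} b_{\tau \sigma}-\Gamma_{\alpha \sigma}^{\tau} b_{\tau \beta},
\end{aligned}
```

so the identities $b_{\alpha \beta \mid \sigma}=b_{\alpha \sigma \mid \beta}$ hold exactly when the displayed relation vanishes.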

## PHYS130001/PHYS238001 COURSE NOTES:

$u_{i}^{\varepsilon}\left(x^{\varepsilon}\right)=u_{i}(\varepsilon)(x)$ for all $x^{\varepsilon}=\pi^{\varepsilon} x \in \bar{\Omega}^{\varepsilon}$,
where $\pi^{\varepsilon}\left(x_{1}, x_{2}, x_{3}\right)=\left(x_{1}, x_{2}, \varepsilon x_{3}\right)$. We then assume that there exist constants $\lambda>0, \mu>0$ and functions $f^{i}$ independent of $\varepsilon$ such that
$$\begin{gathered} \lambda^{\varepsilon}=\lambda \text { and } \mu^{\varepsilon}=\mu, \\ f^{i, \varepsilon}\left(x^{\varepsilon}\right)=\varepsilon^{p} f^{i}(x) \text { for all } x^{\varepsilon}=\pi^{\varepsilon} x \in \Omega^{\varepsilon}, \end{gathered}$$

# Engineering Principles GENG0005W1-01


$$m=\frac{m_{0}}{\sqrt{1-\frac{v^{2}}{c^{2}}}}$$
where
$m$ – mass of a moving body (also called the variable mass)
$m_{0}$ – rest mass of a body (velocity is zero)
$v$ – velocity of a moving body
$c$ – speed of light
The ratio $v^{2} / c^{2}$ is usually denoted $\beta^{2}$. Thus
$$m=m_{0} / \sqrt{1-\beta^{2}}$$
Similarly, the relativistic energy of a body moving with velocity $v$ is no longer given by $m v^{2} / 2$, but by
$$E=m_{0} c^{2} / \sqrt{1-\beta^{2}}$$
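As a quick numerical check of the two formulas above (a minimal sketch; the function names are illustrative, not from the notes):

```python
import math

def lorentz_gamma(v, c=299_792_458.0):
    """1/sqrt(1 - beta^2) for a body moving at speed v."""
    beta2 = (v / c) ** 2
    return 1.0 / math.sqrt(1.0 - beta2)

def relativistic_mass(m0, v, c=299_792_458.0):
    """m = m0 / sqrt(1 - beta^2)."""
    return m0 * lorentz_gamma(v, c)

def total_energy(m0, v, c=299_792_458.0):
    """E = m0 c^2 / sqrt(1 - beta^2)."""
    return relativistic_mass(m0, v, c) * c ** 2

# At v = 0.6c, gamma = 1/sqrt(1 - 0.36) = 1.25,
# so a 1 kg rest mass moves with a variable mass of about 1.25 kg.
c = 299_792_458.0
print(relativistic_mass(1.0, 0.6 * c))  # ~1.25
```

At $v = 0$ the energy reduces to the rest energy $m_0 c^2$, as expected.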

## GENG0005W1-01 COURSE NOTES:

$$E^{2}=(p c)^{2}+\left(m_{0} c^{2}\right)^{2}$$
By definition, momentum can be described as a function of the mass and velocity of a moving body:
$$p=m v=m_{0} v / \sqrt{1-\beta^{2}}$$
Squaring Eq. (3-6)
$$\begin{aligned} E^{2}=\left(m c^{2}\right)^{2}&=\frac{\left(m_{0} c^{2}\right)^{2}}{1-\dfrac{v^{2}}{c^{2}}} \quad \Rightarrow \quad m^{2} c^{4}\left(1-\frac{v^{2}}{c^{2}}\right)=m_{0}^{2} c^{4} \\ m^{2} c^{4}&=E^{2}=m_{0}^{2} c^{4}+m^{2} c^{2} v^{2} \end{aligned}$$
where $p=m v$, thus giving
$$E^{2}=(p c)^{2}+\left(m_{0} c^{2}\right)^{2}$$
For a massless particle (like a photon) it follows that the total energy depends on its momentum and the speed of light: $E=p c$. This aspect will be discussed in greater detail in later sections.
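The invariant $E^{2}=(pc)^{2}+(m_{0}c^{2})^{2}$ can be verified numerically; the rest mass and speed below are arbitrary example values in natural units ($c = 1$):

```python
import math

c = 1.0            # natural units for clarity
m0, v = 2.0, 0.6   # example rest mass and speed (fraction of c)

gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
E = gamma * m0 * c ** 2   # total energy
p = gamma * m0 * v        # relativistic momentum p = m v

# E^2 should equal (pc)^2 + (m0 c^2)^2 up to floating-point error
lhs = E ** 2
rhs = (p * c) ** 2 + (m0 * c ** 2) ** 2
print(abs(lhs - rhs) < 1e-9)  # True
```

Setting `m0 = 0` collapses the identity to $E = pc$, the photon case discussed above.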

# Mechanical Science GENG0003W1-01


If final pressure of the gas is $p_{3}$, for a constant volume process $3-1$,
$$p_{3}=\frac{T_{3}}{T_{1}} p_{1}=\frac{323}{423} \times 10=7.6 \mathrm{bar}$$
Let us find the mass of the gas $m$:
$$m=\frac{p_{1} \forall_{1}}{R T_{1}}=\frac{10 \times 10^{5} \times 0.336}{293 \times 423}=2.7 \mathrm{~kg}$$
Change in internal energy
$$d U=U_{3}-U_{1}=m C_{v}\left(T_{3}-T_{1}\right)=2.7 \times 0.703(323-423)=-189.8 \mathrm{~kJ}$$
The negative sign indicates that there is a decrease in internal energy.
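The worked numbers can be reproduced directly; this sketch uses the same inputs as the example (including its gas constant of 293 J/(kg·K), taken from the notes as given):

```python
# Constant-volume process 3-1: p3/p1 = T3/T1
p1, T1, T3 = 10.0, 423.0, 323.0   # bar, K, K
V1 = 0.336                        # m^3
R = 293.0                         # J/(kg K), as used in the example
cv = 0.703                        # kJ/(kg K)

p3 = (T3 / T1) * p1               # bar
m = (p1 * 1e5 * V1) / (R * T1)    # kg (p converted from bar to Pa)
dU = m * cv * (T3 - T1)           # kJ

# The notes round m to 2.7 kg before computing dU, giving -189.8 kJ;
# carrying m unrounded gives roughly -190.6 kJ.
print(round(p3, 1), round(m, 1))
```

The small discrepancy in $dU$ comes only from when the mass is rounded.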

## GENG0003W1-01 COURSE NOTES:

The path followed by the system satisfies $p_{1} \forall_{1}^{\gamma}=p_{2} \forall_{2}^{\gamma}$.
So,
$$\left(\frac{\forall_{2}}{\forall_{1}}\right)^{\gamma}=\frac{p_{1}}{p_{2}}$$
and
$$\forall_{2}=\left(\frac{p_{1}}{p_{2}}\right)^{1 / \gamma} \forall_{1}=\left(\frac{500}{100}\right)^{1 / 1.4} \times 0.2=0.6313 \mathrm{~m}^{3}$$
Hence work done,
$$W_{1-2}=\frac{p_{1} \forall_{1}-p_{2} \forall_{2}}{\gamma-1}=\frac{(500 \times 0.2-100 \times 0.6313) 10^{3}}{(1.4-1) \times 10^{3}}=92.175 \mathrm{~kJ}$$
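The adiabatic-work computation above is easy to check numerically (a minimal sketch with the example's own numbers; pressure units cancel in the ratio):

```python
p1, p2 = 500.0, 100.0   # kPa
V1 = 0.2                # m^3
gamma = 1.4

# From p1 V1^gamma = p2 V2^gamma:
V2 = (p1 / p2) ** (1.0 / gamma) * V1

# W = (p1 V1 - p2 V2) / (gamma - 1); with p in kPa and V in m^3 this is kJ
W = (p1 * V1 - p2 * V2) / (gamma - 1.0)

print(round(V2, 4), round(W, 2))  # matches the notes to rounding
```

Carrying $\forall_2$ unrounded gives $W \approx 92.15$ kJ; the notes' 92.175 kJ comes from rounding $\forall_2$ to 0.6313 first.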

# Mathematics B GENG0002W1-01


from which we see that $f+(-f)=0$. The additive inverse law follows. For the distributive laws note that for real numbers $a, b$ and continuous functions $f, g \in V$, we have that for all $0 \leq x \leq 1$,
$$a(f+g)(x)=a(f(x)+g(x))=a f(x)+a g(x)=(a f+a g)(x),$$
which proves the first distributive law. For the second distributive law, note that for all $0 \leq x \leq 1$,
$$((a+b) g)(x)=(a+b) g(x)=a g(x)+b g(x)=(a g+b g)(x),$$
and the second distributive law follows. For the scalar associative law, observe that for all $0 \leq x \leq 1$,
$$((a b) f)(x)=(a b) f(x)=a(b f(x))=(a(b f))(x),$$
so that $(a b) f=a(b f)$, as required. Finally, we see that
$$(1 f)(x)=1 f(x)=f(x),$$

## GENG0002W1-01 COURSE NOTES:

First let $f(x), g(x) \in V$ and let $c$ be a scalar. By definition of the set $V$ we have that $f(1 / 2)=0$ and $g(1 / 2)=0$. Add these equations together and we obtain
$$(f+g)(1 / 2)=f(1 / 2)+g(1 / 2)=0+0=0 .$$
It follows that $V$ is closed under addition with these operations. Furthermore, if we multiply the identity $f(1 / 2)=0$ by the real number $c$ we obtain that
$$(c f)(1 / 2)=c \cdot f(1 / 2)=c \cdot 0=0 .$$
It follows that $V$ is closed under scalar multiplication. Now certainly the zero function belongs to $V$, since this function has value 0 at any argument. Therefore, $V$ contains an additive identity element. Finally, we observe that the negative of a function $f(x) \in V$ is also an element of $V$, since
$$(-f)(1 / 2)=-1 \cdot f(1 / 2)=-1 \cdot 0=0 .$$
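The closure arguments can be spot-checked numerically; `f` and `g` below are arbitrary sample members of $V$ (both vanish at $x = 1/2$), not functions from the notes:

```python
# V = {continuous f on [0, 1] : f(1/2) = 0}
f = lambda x: x - 0.5            # f(1/2) = 0
g = lambda x: (x - 0.5) ** 2     # g(1/2) = 0
c = 3.0                          # an arbitrary scalar

add = lambda x: f(x) + g(x)      # (f + g)(x)
scale = lambda x: c * f(x)       # (c f)(x)
neg = lambda x: -f(x)            # (-f)(x)

# Each combination still vanishes at x = 1/2, as the proof asserts
print(add(0.5) == 0, scale(0.5) == 0, neg(0.5) == 0)  # True True True
```

This is of course only a spot check at one point for two functions; the proof above covers all of $V$.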

# 5M: Advanced Differential Geometry & Topology MATHS5039_1


Proof of (b) is easy:
$$d\left(y_{n}, x\right) \leq d\left(y_{n}, x_{n}\right)+d\left(x_{n}, x\right) \rightarrow 0 .$$
(c) is also easy. Look at
$$d\left(x_{n}, y_{n}\right) \leq d\left(x_{n}, x_{n}^{\prime}\right)+d\left(x_{n}^{\prime}, y_{n}^{\prime}\right)+d\left(y_{n}^{\prime}, y_{n}\right) .$$
Taking limits, we get $\lim_{n} d\left(x_{n}, y_{n}\right) \leq \lim_{n} d\left(x_{n}^{\prime}, y_{n}^{\prime}\right)$. A similar argument gives the reverse inequality, and hence the proof is complete.

## MATHS5039_1 COURSE NOTES:

For, if $\varepsilon>0$ is given, by the Cauchy nature of $\left(x_{n}\right)$, there exists $N \in \mathbb{N}$ such that $d\left(x_{m}, x_{n}\right)<\varepsilon$ for $m, n \geq N$. We have, for $n \geq N$,
$$d\left(\tilde{x}_{n}, \xi\right):=\lim _{m \rightarrow \infty} d\left(x_{n}, x_{m}\right) \leq \varepsilon .$$
It is clear that the map $\varphi: X \rightarrow \tilde{X}$ given by $\varphi(x)=\tilde{x}$ is an isometry, since $d(\varphi(x), \varphi(y)):=\lim _{n} d(x, y)=d(x, y)$, the limit of a constant sequence.
We claim that the image $\varphi(X)$ of $X$ under the map $\varphi: x \mapsto \tilde{x}$ is dense in $\tilde{X}$.

# MATH MATHS2025_1/MATHS3016_1


Let $f$ be a function from a nonempty open subset $E$ of $\mathbb{R}$ to $\mathbb{R}$. The function $f$ is said to be differentiable at $c \in E$ if
$$\lim_{x \rightarrow c} \frac{f(x)-f(c)}{x-c}$$ or, equivalently, $$\lim_{h \rightarrow 0} \frac{f(c+h)-f(c)}{h}$$
exists. This limit (if it exists) is called the derivative of $f$ at $c$. If the derivative of $f$ exists at every $c \in E$, then $f$ is said to be differentiable on $E$ (or just differentiable). The derivative of $f$ as a function from $E$ to $\mathbb{R}$ is denoted by
$$f^{\prime} \text { or } \frac{d f}{d x}$$
Note that the limit in Eq. (7.1) is understood as the limit of the function
$$g(x)=\frac{f(x)-f(c)}{x-c}, \quad x \in E \backslash\{c\}$$
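The difference quotient $g$ is exactly what one evaluates numerically; a minimal sketch (the cubic test function and the step sizes are illustrative choices, not from the notes):

```python
def difference_quotient(f, c, h):
    """g evaluated near c: (f(c + h) - f(c)) / h."""
    return (f(c + h) - f(c)) / h

f = lambda x: x ** 3
c = 2.0

# f'(2) = 3 * 2^2 = 12; shrinking h drives the quotient toward that limit
for h in (1e-2, 1e-4, 1e-6):
    print(h, difference_quotient(f, c, h))
```

Making $h$ too small eventually reintroduces floating-point cancellation error, which is why the limit is a mathematical notion rather than a purely numerical one.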

## MATHS2025_1/MATHS3016_1 COURSE NOTES:

From the definition of $f^{\prime}(c)$, it follows that for every $\varepsilon>0$, there exists $\delta>0$ such that $x \in E,|x-c|<\delta$, and $x \neq c$ imply
$$\left|\frac{f(x)-f(c)}{x-c}-f^{\prime}(c)\right|<\varepsilon .$$
Thus, for every $x \in E$ with $|x-c|<\delta$,
$$|f(x)-\varphi(x)| \leq \varepsilon|x-c|,$$
where $\varphi$ is the linear function defined by
$$\varphi(x)=f(c)+f^{\prime}(c)(x-c), \quad x \in \mathbb{R}$$

# Methods in Complex Analysis MATHS4076_1


To find the best probability assignment subject to the constraint of a given mean, we use the MaxEnt method, i.e. we maximise
$$S[p]=-k_{\mathrm{B}} \sum_{k \geq 0} p(k) \ln p(k)$$
subject to the constraint
$$\sum_{k \geq 0} k p(k)=\mu .$$
using the method of Lagrange multipliers to take the constraint into account.
To simplify notation we can assume $k_{\mathrm{B}}=1$; this amounts to redefining Lagrangian multipliers in units of $k_{\mathrm{B}}$ (can you see why?). Thus define
$$\mathcal{L}[p]=-\sum_{k \geq 0} p(k) \ln p(k)+\lambda_{0}\left(\sum_{k \geq 0} p(k)-1\right)+\lambda_{1}\left(\sum_{k \geq 0} k p(k)-\mu\right)$$

## MATHS4076_1 COURSE NOTES:

$$\mu=\sum_{k \geq 0} k p(k)=\left(1-\mathrm{e}^{\lambda_{1}}\right) \sum_{k=1}^{\infty} k \mathrm{e}^{\lambda_{1} k}=\frac{\mathrm{e}^{\lambda_{1}}}{1-\mathrm{e}^{\lambda_{1}}}$$
required by the constraint of the given mean. Hence
$$\mathrm{e}^{\lambda_{1}}=\frac{\mu}{1+\mu} \quad \Longrightarrow \quad p(k)=\frac{1}{1+\mu}\left(\frac{\mu}{1+\mu}\right)^{k}$$
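A quick sanity check that the resulting geometric distribution is normalised and has the required mean ($\mu = 2$ is an arbitrary example value; the series is truncated at a point where the tail is negligible):

```python
mu = 2.0                      # example value of the given mean
q = mu / (1.0 + mu)           # e^{lambda_1}

# p(k) = (1/(1+mu)) (mu/(1+mu))^k = (1 - q) q^k, truncated far into the tail
p = [(1.0 - q) * q ** k for k in range(2000)]

total = sum(p)
mean = sum(k * pk for k, pk in enumerate(p))

print(round(total, 6), round(mean, 6))  # 1.0 2.0
```

Both constraints of the Lagrangian are recovered: the distribution sums to one and its mean equals $\mu$.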

# Stochastic Processes STATS4024_1/STATS5026_1


from which it follows that, by monotonicity,
$$\forall n \in \mathbb{N}: \quad E\left[Y_{n+1} \mid X=x\right] \geq E\left[Y_{n} \mid X=x\right], \quad P_{X} \text {-a.s. }$$
Moreover,
$$\forall B \in \mathcal{B}: \quad \int_{[X \in B]} Y_{n} d P=\int_{B} E\left[Y_{n} \mid X=x\right] d P_{X}(x)$$
and
$$\forall B \in \mathcal{B}: \quad \int_{[X \in B]} Y d P \geq \int_{[X \in B]} Y_{n} d P$$
Thus
$$E[Y \mid X=x] \geq E\left[Y_{n} \mid X=x\right], \quad P_{X} \text {-a.s. }$$

## STATS4024_1/STATS5026_1 COURSE NOTES:

Therefore,
$$P([Y \in A] \mid \cdot)=P([Y \in A]), \quad P_{X} \text {-a.s. }$$
and if $Y$ is a real-valued integrable random variable, then
$$E[Y \mid \cdot]=E[Y], \quad P_{X} \text {-a.s. }$$
Proof: Independence of $X$ and $Y$ is equivalent to
$$P([X \in B] \cap[Y \in A])=P([X \in B]) P([Y \in A]) \quad \forall A \in \mathcal{B}_{1}, B \in \mathcal{B},$$
or
$$\begin{aligned} \int_{[X \in B]} I_{[Y \in A]}(\omega) P(d \omega) &=P([Y \in A]) \int I_{B}(x) d P_{X}(x) \\ &=\int_{B} P([Y \in A]) d P_{X}(x) \end{aligned}$$

# Further Complex Analysis MATHS5070_1/MATHS4104_1


$$l_{\mathrm{p}}(\psi)=c-\frac{n}{2} \log (\mathrm{SSB}+\lambda \mathrm{SSW})+\frac{n-m}{2} \log \lambda,$$
where $\lambda=1+k \psi$. The maximum of $l_{\mathrm{p}}$ is given by
$$\hat{\lambda}=\left(1-\frac{1}{m}\right) \frac{\mathrm{MSB}}{\mathrm{MSW}},$$
where $\mathrm{MSB}=\operatorname{SSB} /(m-1)$ and $\mathrm{MSW}=\operatorname{SSW} /(n-m)$.
## MATHS5070_1/MATHS4104_1 COURSE NOTES:
$$\hat{F}_{r}(x)=\frac{1}{m_{r}} \sum_{u=1}^{m_{r}} 1_{\left(\hat{\alpha}_{r, u} \leq x\right)} \stackrel{P}{\longrightarrow} F_{r}(x), \quad x \in C\left(F_{r}\right),$$
where $\hat{\alpha}_{r, u}$ is the $u$th component of $\hat{\alpha}_{r}$, $1 \leq r \leq s$, and
$$\hat{F}_{0}(x)=\frac{1}{n} \sum_{u=1}^{n} 1_{\left(\tilde{\epsilon}_{u} \leq x\right)} \stackrel{P}{\longrightarrow} F_{0}(x), \quad x \in C\left(F_{0}\right),$$
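The displayed convergence is just the law-of-large-numbers behaviour of an empirical distribution function; a minimal simulation (Exp(1) is an arbitrary stand-in for $F_0$, not a distribution from the notes):

```python
import math
import random

random.seed(0)
n = 10_000
# Illustrative choice of F_0: Exp(1), so F_0(x) = 1 - e^{-x}
sample = [random.expovariate(1.0) for _ in range(n)]

def ecdf(xs, x):
    """Empirical CDF: fraction of the sample at or below x."""
    return sum(1 for v in xs if v <= x) / len(xs)

x = 1.0
# For n this large the empirical CDF is close to F_0 at any continuity point
print(abs(ecdf(sample, x) - (1 - math.exp(-1.0))) < 0.05)  # True
```

The deviation shrinks like $n^{-1/2}$, in line with the convergence in probability stated above.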
# Linear Mixed Models STATS5054_1/STATS4045_1

The methods discussed above for the full parameter vector now carry over directly to the calculation of local influences on the geometric surface defined by $\mathrm{LD}_{1}(\omega)$. We now partition $\ddot{L}$ as
$$\ddot{L}=\left(\begin{array}{ll} \ddot{L}_{11} & \ddot{L}_{12} \\ \ddot{L}_{21} & \ddot{L}_{22} \end{array}\right),$$
according to the dimensions of $\theta_{1}$ and $\theta_{2}$. Cook (1986) has then shown that the local influence on the estimation of $\theta_{1}$, of perturbing the model in the direction of a normalized vector $\boldsymbol{h}$, is given by
$$C_{h}\left(\boldsymbol{\theta}_{1}\right)=2\left|\boldsymbol{h}^{\prime} \Delta^{\prime}\left[\ddot{L}^{-1}-\left(\begin{array}{cc} 0 & 0 \\ 0 & \ddot{L}_{22}^{-1} \end{array}\right)\right] \Delta \boldsymbol{h}\right|$$
Because all eigenvalues of the matrix
$$\left(\begin{array}{ll} \ddot{L}_{11} & \ddot{L}_{12} \\ \ddot{L}_{21} & \ddot{L}_{22} \end{array}\right)\left(\begin{array}{cc} 0 & 0 \\ 0 & \ddot{L}_{22}^{-1} \end{array}\right)=\left(\begin{array}{cc} 0 & \ddot{L}_{12} \ddot{L}_{22}^{-1} \\ 0 & I \end{array}\right)$$
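The block-matrix identity above (and the fact that the product has no eigenvalues other than 0 and 1) can be verified numerically; the dimensions and the random symmetric stand-in for $\ddot{L}$ are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
p, q = 2, 3                        # illustrative dimensions of theta_1, theta_2
A = rng.normal(size=(p + q, p + q))
L = A + A.T                        # symmetric stand-in for the Hessian

L12, L22 = L[:p, p:], L[p:, p:]
B = np.zeros_like(L)
B[p:, p:] = np.linalg.inv(L22)     # block diag(0, L22^{-1})

product = L @ B
expected = np.block([
    [np.zeros((p, p)), L12 @ np.linalg.inv(L22)],
    [np.zeros((q, p)), np.eye(q)],
])
print(np.allclose(product, expected))                 # True
# The product is block upper-triangular, so its eigenvalues are
# 0 (p of them) and 1 (q of them)
print(np.allclose(np.sort(np.linalg.eigvals(product).real),
                  [0.0] * p + [1.0] * q, atol=1e-8))  # True
```

This is why subtracting the zero-padded $\ddot{L}_{22}^{-1}$ block from $\ddot{L}^{-1}$ isolates the influence on $\boldsymbol{\theta}_{1}$ alone.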
## MATHS5077_1 COURSE NOTES:

A measure of influence that has been proposed is then
$$\rho_{i}=-\left(\hat{\boldsymbol{\theta}}-\hat{\boldsymbol{\theta}}_{(i)}^{1}\right)^{\prime} \ddot{L}\left(\hat{\boldsymbol{\theta}}-\hat{\boldsymbol{\theta}}_{(i)}^{1}\right)$$

$$\boldsymbol{\Delta}_{i}=-\sum_{j \neq i} \boldsymbol{\Delta}_{j}=\ddot{L}_{(i)}(\widehat{\boldsymbol{\theta}})\left(\hat{\boldsymbol{\theta}}_{(i)}^{1}-\widehat{\boldsymbol{\theta}}\right)$$
such that expression (11.4) becomes
$$C_{i}=-2\left(\hat{\boldsymbol{\theta}}-\hat{\boldsymbol{\theta}}_{(i)}^{1}\right)^{\prime} \ddot{L}_{(i)} \ddot{L}^{-1} \ddot{L}_{(i)}\left(\hat{\boldsymbol{\theta}}-\hat{\boldsymbol{\theta}}_{(i)}^{1}\right)$$