# Complex Analysis II/Cryptography and Codes MATH2011-WE01/MATH30120-WE01/MATH3401-WE01


Finally, we define algorithm $A^{\prime \prime}$. On input $y=f(x)$, the algorithm selects $j \in\{1, \ldots, \ell\}$ with probability $2^{-2 j+1}$ (and halts with no output otherwise). It invokes the preceding implementation of algorithm $A^{\prime}$ on input $y$ with parameter $\varepsilon \stackrel{\text { def }}{=} 2^{-j-1} / \ell$ and returns whatever $A^{\prime}$ does. The expected running time of $A^{\prime \prime}$ is
$$\sum_{j=1}^{\ell} 2^{-2 j+1} \cdot O\left(\frac{n^{2}}{\left(2^{-j-1} / \ell\right)^{2}}\right) \cdot\left(t_{G}(n)+\log \left(n \cdot 2^{j+1} \ell\right)\right)=O\left(n^{2} \cdot \ell^{3}\right) \cdot t_{G}(n)$$

(assuming $t_{G}(n)=\Omega(\ell \log n)$). Letting $i \leq \ell$ be an index satisfying Claim $2.5.4.1$ (and letting $S_{n}$ be the corresponding set), we consider the case in which $j$ (selected by $A^{\prime \prime}$) is greater than or equal to $i$. By Claim 2.5.4.2, in such a case, and for $x \in S_{n}$, algorithm $A^{\prime}$ inverts $f$ on $f(x)$ with probability at least $\frac{1}{2}$. Using $i \leq \ell=\log_{2}(1 / \varepsilon(n))$, we get
$$\begin{aligned} \operatorname{Pr}\left[A^{\prime \prime}\left(f\left(U_{n}\right)\right)=U_{n}\right] & \geq \operatorname{Pr}\left[U_{n} \in S_{n}\right] \cdot \operatorname{Pr}[j \geq i] \cdot \frac{1}{2} \\ & \geq 2^{i-1} \varepsilon(n) \cdot 2^{-2 i+1} \cdot \frac{1}{2} \\ & \geq \varepsilon(n) \cdot 2^{-\ell} \cdot \frac{1}{2}=\frac{\varepsilon(n)^{2}}{2} \end{aligned}$$
The proposition follows.
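As a quick numerical sanity check (with illustrative values of $n$ and $\ell$; the function name and parameters below are ours, not from the notes), the polynomial part of the expected running-time sum can be evaluated directly: each term of the sum equals $8 n^{2} \ell^{2}$, so the total is exactly $8 n^{2} \ell^{3}$.

```python
# Sanity check (hypothetical parameters) that the polynomial part of
#   sum_{j=1}^{ell} 2^(-2j+1) * n^2 / (2^(-j-1)/ell)^2
# grows as O(n^2 * ell^3): each term equals 8 * n^2 * ell^2.

def expected_work(n, ell):
    """Sum the per-j contributions of algorithm A'' (polynomial part only)."""
    total = 0.0
    for j in range(1, ell + 1):
        prob = 2.0 ** (-2 * j + 1)        # probability that A'' selects j
        eps = 2.0 ** (-j - 1) / ell       # accuracy parameter handed to A'
        total += prob * n**2 / eps**2     # O(n^2 / eps^2) repetitions of A'
    return total

for ell in (4, 8, 16):
    n = 64
    print(ell, expected_work(n, ell) / (n**2 * ell**3))  # constant ratio (= 8)
```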

## MATH43220-WE01/MATH3341-WE01/MATH4031-WE01 COURSE NOTES:

Let $\left\{G_{n}\right\}_{n \in \mathbb{N}}$ be a family of $d$-regular graphs, so that $G_{n}$ has vertex set $\{0,1\}^{n}$ and self-loops at every vertex. Consider a labeling of the edges incident to each vertex (using the labels $1,2, \ldots, d$). Define $g_{l}(x)$ to be the vertex reachable from vertex $x$ by following the edge labeled $l$. Let $f:\{0,1\}^{*} \rightarrow\{0,1\}^{*}$ be a $1-1$ length-preserving function, and let $\lambda$ denote the empty sequence (over $\{1,2, \ldots, d\}$). Then for every $k \geq 0$, $x \in\{0,1\}^{n}$ and $\sigma_{1}, \sigma_{2}, \ldots, \sigma_{k} \in\{1,2, \ldots, d\}$, define $F(x, \lambda)=x$ and
$$F\left(x, \sigma_{1} \sigma_{2} \cdots \sigma_{k}\right)=\sigma_{1}, F\left(g_{\sigma_{1}}(f(x)), \sigma_{2}, \ldots, \sigma_{k}\right)$$
That is,
$$F\left(x, \sigma_{1} \sigma_{2} \cdots \sigma_{k}\right)=\sigma_{1}, \sigma_{2}, \ldots, \sigma_{k}, y$$
where
$$y=g_{\sigma_{k}}\left(f\left(\cdots\left(g_{\sigma_{2}}\left(f\left(g_{\sigma_{1}}(f(x))\right)\right)\right) \cdots\right)\right)$$
For every $k: \mathbb{N} \rightarrow \mathbb{N}$, define $F_{k}(\alpha) \stackrel{\text { def }}{=} F\left(x, \sigma_{1}, \ldots, \sigma_{t}\right)$, where $\alpha$ is parsed into $\left(x, \sigma_{1}, \ldots, \sigma_{t}\right)$ so that $t=k(|x|)$ and $\sigma_{i} \in\{1,2, \ldots, d\}$.
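A toy Python sketch of the walk $F$ (every concrete choice here — the 4-bit vertex set, taking $f$ to be a cyclic bit rotation, and letting $g_{l}$ flip bit $l$ — is a hypothetical instantiation for illustration only):

```python
# Toy instantiation (all choices hypothetical): vertices are 4-bit strings,
# f is a 1-1 length-preserving function (cyclic rotation), and g_l follows
# the edge labelled l (here: flip bit l-1, giving a d=4-regular labelling).

def f(x):
    """1-1 length-preserving toy function: rotate the bit string left by one."""
    return x[1:] + x[:1]

def g(l, x):
    """Follow the edge labelled l from vertex x: flip bit l-1."""
    bits = list(x)
    bits[l - 1] = '1' if bits[l - 1] == '0' else '0'
    return ''.join(bits)

def F(x, sigmas):
    """F(x, lambda) = x; F(x, s1...sk) = (s1, ..., sk, y) with
    y = g_{sk}(f(... g_{s1}(f(x)) ...))."""
    y = x
    for s in sigmas:
        y = g(s, f(y))
    return tuple(sigmas) + (y,)

print(F('0110', []))       # -> ('0110',)
print(F('0110', [1, 3]))
```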

# Bayesian Statistics MATH43220-WE01/MATH3341-WE01/MATH4031-WE01


Higher order difference priors may be used for seasonal effects. For example, for quarterly data, a possible smoothness prior is
$$h(t)=s(t)+s(t-1)+s(t-2)+s(t-3) \sim \mathrm{N}\left(0, \tau_{s}\right)$$
For monthly data, the analogous scheme is
$$h(t)=s(t)+s(t-1)+s(t-2)+\cdots+s(t-11) \sim \mathrm{N}\left(0, \tau_{s}\right)$$

Instead of simple random walk priors, autoregressive priors involving lag coefficients $\phi_{1}, \ldots, \phi_{k}$ may be specified as smoothness priors. For example, an $\operatorname{AR}(2)$ prior in the true series would be
$$f(t) \sim \mathrm{N}\left(\phi_{1} f(t-1)+\phi_{2} f(t-2), \tau^{2}\right)$$
Such priors (with high order $k$) may be used to estimate the spectral distribution of a stationary time series.
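A minimal simulation sketch of one path drawn from the AR(2) smoothness prior above (the coefficient values, and the use of Python's standard `random` module, are illustrative assumptions):

```python
import random

# Simulate one path from the AR(2) smoothness prior
#   f(t) ~ N(phi1*f(t-1) + phi2*f(t-2), tau^2),
# where tau^2 is the innovation variance. Coefficients are illustrative.

def ar2_prior_path(phi1, phi2, tau, T, f0=0.0, f1=0.0, seed=0):
    rng = random.Random(seed)
    f = [f0, f1]
    for t in range(2, T):
        mean = phi1 * f[t - 1] + phi2 * f[t - 2]
        f.append(rng.gauss(mean, tau))   # tau = std. dev. of the innovation
    return f

path = ar2_prior_path(phi1=1.5, phi2=-0.6, tau=0.1, T=100)
print(len(path), path[:3])
```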

## MATH43220-WE01/MATH3341-WE01/MATH4031-WE01 COURSE NOTES:

The corresponding conditional (Bernardinelli et al.) is
$$P\left(e_{i} \mid e_{j}, j \neq i\right) \sim \mathrm{N}\left(M_{i}, \sigma_{i}^{2}\right)$$
with
$$M_{i}=\sum_{j \neq i} c_{i j} e_{j} / \sum_{j \neq i} c_{i j}=\sum_{j \neq i} w_{i j} e_{j}$$
and $c_{i j}$ being spatial interactions as above. The variances differ by area with
$$\sigma_{i}^{2}=\kappa^{2} / \sum_{j \neq i} c_{i j}$$
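The conditional mean and variance can be computed directly from the interaction weights; the sketch below uses a hypothetical 4-area chain with binary adjacency weights $c_{ij}$ (all numbers illustrative):

```python
# Compute the conditional mean M_i = sum_j c_ij e_j / sum_j c_ij and
# variance sigma_i^2 = kappa^2 / sum_j c_ij for a hypothetical 4-area
# chain with binary adjacency (c_ij = 1 for neighbours, 0 otherwise).

c = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
e = [0.2, -0.1, 0.4, 0.0]   # illustrative area effects
kappa2 = 1.0

def conditional_params(i):
    total = sum(c[i][j] for j in range(len(e)) if j != i)
    M_i = sum(c[i][j] * e[j] for j in range(len(e)) if j != i) / total
    var_i = kappa2 / total
    return M_i, var_i

print(conditional_params(1))   # interior area: mean of its two neighbours
```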

# Analysis in Many Variables II MATH2031-WE01


Proof Fix $\boldsymbol{y} \in V$ and choose indices $j$ and $k$ from $\{1, \ldots, n\}$. We need to show
$$\lim_{h \rightarrow 0} \frac{g_{j}\left(\boldsymbol{y}+h \boldsymbol{E}_{k}\right)-g_{j}(\boldsymbol{y})}{h}$$

exists and that $g_{y_{j}}$ is continuous. For small enough $h$, the line segment $\left[\boldsymbol{y}, \boldsymbol{y}+h \boldsymbol{E}_{k}\right]$ is in $V$. Let
$$\begin{aligned} &\boldsymbol{x}=g(\boldsymbol{y})=f^{-1}(\boldsymbol{y}) \in U \\ &\boldsymbol{z}=g\left(\boldsymbol{y}+h \boldsymbol{E}_{k}\right)=f^{-1}\left(\boldsymbol{y}+h \boldsymbol{E}_{k}\right) \in U \end{aligned}$$
Then $\boldsymbol{x}$ and $\boldsymbol{z}$ are distinct as $f^{-1}$ is $1-1$. Note the line segment $[\boldsymbol{x}, \boldsymbol{z}]$ is in $B\left(r, \boldsymbol{x}_{0}\right)$. Since $g$ is continuous on $V$, $\lim_{h \rightarrow 0} g\left(\boldsymbol{y}+h \boldsymbol{E}_{k}\right)=g(\boldsymbol{y})$. But this says $\lim_{h \rightarrow 0} \boldsymbol{z}=g(\boldsymbol{y})=\boldsymbol{x}$.

## MATH2031-WE01 COURSE NOTES:

$$0=g_{x}^{0}\left(x-x_{0}\right)+g_{y}^{0}\left(\phi(x)-\phi\left(x_{0}\right)\right)+E_{g}\left(x, \phi(x), x_{0}, \phi\left(x_{0}\right)\right)$$
Now divide through by $x-x_{0}$ to get
$$0=g_{x}^{0}+g_{y}^{0}\left(\frac{\phi(x)-\phi\left(x_{0}\right)}{x-x_{0}}\right)+\frac{E_{g}\left(x, \phi(x), x_{0}, \phi\left(x_{0}\right)\right)}{x-x_{0}}$$
Thus,
$$g_{y}^{0}\left(\frac{\phi(x)-\phi\left(x_{0}\right)}{x-x_{0}}\right)=-g_{x}^{0}-\frac{E_{g}\left(x, \phi(x), x_{0}, \phi\left(x_{0}\right)\right)}{x-x_{0}}$$
and assuming $g_{x}^{0} \neq 0$ and $g_{y}^{0} \neq 0$, we can solve to find
$$\frac{\phi(x)-\phi\left(x_{0}\right)}{x-x_{0}}=-\frac{g_{x}^{0}}{g_{y}^{0}}-\frac{1}{g_{y}^{0}} \frac{E_{g}\left(x, \phi(x), x_{0}, \phi\left(x_{0}\right)\right)}{x-x_{0}}$$
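A finite-difference check of the resulting slope formula $\phi^{\prime}\left(x_{0}\right)=-g_{x}^{0} / g_{y}^{0}$, using the hypothetical example $g(x, y)=x^{2}+y^{2}-1$ with $\phi(x)=\sqrt{1-x^{2}}$ (all numerical choices are ours):

```python
import math

# Finite-difference check of phi'(x0) = -g_x/g_y for the hypothetical
# example g(x, y) = x^2 + y^2 - 1, phi(x) = sqrt(1 - x^2), near x0 = 0.6.

def phi(x):
    return math.sqrt(1.0 - x * x)

x0 = 0.6
gx, gy = 2 * x0, 2 * phi(x0)             # partials of g at (x0, phi(x0))
predicted = -gx / gy                      # = -x0 / phi(x0) = -0.75

h = 1e-6
numerical = (phi(x0 + h) - phi(x0)) / h   # the difference quotient in the text
print(predicted, numerical)
```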

# Analysis MATH41220-WE01/MATH1051-WE01/MATH3011-WE01


Thus $\left(d\left(p_{n}, q_{n}\right)\right)$ is a Cauchy sequence in $\mathbb{R}$, and because $\mathbb{R}$ is complete,
$$L=\lim_{n \rightarrow \infty} d\left(p_{n}, q_{n}\right)$$
exists. Let $\left(p_{n}^{\prime}\right)$ and $\left(q_{n}^{\prime}\right)$ be sequences that are co-Cauchy with $\left(p_{n}\right)$ and $\left(q_{n}\right)$, and let
$$L^{\prime}=\lim_{n \rightarrow \infty} d\left(p_{n}^{\prime}, q_{n}^{\prime}\right).$$

Then
$$\left|L-L^{\prime}\right| \leq\left|L-d\left(p_{n}, q_{n}\right)\right|+\left|d\left(p_{n}, q_{n}\right)-d\left(p_{n}^{\prime}, q_{n}^{\prime}\right)\right|+\left|d\left(p_{n}^{\prime}, q_{n}^{\prime}\right)-L^{\prime}\right| .$$
As $n \rightarrow \infty$, the first and third terms tend to 0, and the middle term satisfies
$$\left|d\left(p_{n}, q_{n}\right)-d\left(p_{n}^{\prime}, q_{n}^{\prime}\right)\right| \leq d\left(p_{n}, p_{n}^{\prime}\right)+d\left(q_{n}, q_{n}^{\prime}\right) .$$

## MATH41220-WE01/MATH1051-WE01/MATH3011-WE01 COURSE NOTES:

$$|n|_{p}=\frac{1}{p^{k}}$$
where $p^{k}$ is the largest power of $p$ that divides $n$. (The norm of 0 is by definition 0.) The more factors of $p$, the smaller the $p$-norm. Similarly, if $x=a / b$ is a fraction, we factor $x$ as
$$x=p^{k} \cdot \frac{r}{s}$$
where $p$ divides neither $r$ nor $s$, and we set
$$|x|_{p}=\frac{1}{p^{k}}.$$
The $p$-adic metric on $\mathbb{Q}$ is
$$d_{p}(x, y)=|x-y|_{p} .$$
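The definitions above translate directly into code; this sketch (using Python's exact `fractions` arithmetic; the function names are ours) computes $|x|_{p}$ and $d_{p}$:

```python
from fractions import Fraction

# p-adic norm |x|_p = p^(-k), where x = p^k * r/s with p dividing neither
# r nor s, and the p-adic metric d_p(x, y) = |x - y|_p.

def p_norm(x, p):
    x = Fraction(x)
    if x == 0:
        return Fraction(0)               # |0|_p = 0 by definition
    k = 0
    num, den = x.numerator, x.denominator
    while num % p == 0:                  # count factors of p in the numerator
        num //= p
        k += 1
    while den % p == 0:                  # factors of p in the denominator
        den //= p
        k -= 1
    return Fraction(1, p) ** k

def d_p(x, y, p):
    return p_norm(Fraction(x) - Fraction(y), p)

print(p_norm(12, 2))    # 12 = 2^2 * 3, so |12|_2 = 1/4
print(d_p(1, 26, 5))    # |1 - 26|_5 = |{-25}|_5 = 1/25
```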

# Algebraic Topology MATH41120-WE01/MATH4161-WE01


Proof: The $\alpha_{i}$'s will be constructed inductively. Since the $F_{i}$'s are free, it suffices to define each $\alpha_{i}$ on a basis for $F_{i}$. To define $\alpha_{0}$, observe that surjectivity of $f_{0}^{\prime}$ implies that for each basis element $x$ of $F_{0}$ there exists $x^{\prime} \in F_{0}^{\prime}$ such that $f_{0}^{\prime}\left(x^{\prime}\right)=\alpha f_{0}(x)$, so we define $\alpha_{0}(x)=x^{\prime}$. We would like to define $\alpha_{1}$ in the same way, sending a basis element $x \in F_{1}$ to an element $x^{\prime} \in F_{1}^{\prime}$ such that $f_{1}^{\prime}\left(x^{\prime}\right)=\alpha_{0} f_{1}(x)$. Such an $x^{\prime}$ will exist if $\alpha_{0} f_{1}(x)$ lies in $\operatorname{Im} f_{1}^{\prime}=\operatorname{Ker} f_{0}^{\prime}$, which it does since $f_{0}^{\prime} \alpha_{0} f_{1}=\alpha f_{0} f_{1}=0$. The same procedure defines all the subsequent $\alpha_{i}$'s.

If we have another chain map extending $\alpha$ given by maps $\alpha_{i}^{\prime}: F_{i} \rightarrow F_{i}^{\prime}$, then the differences $\beta_{i}=\alpha_{i}-\alpha_{i}^{\prime}$ define a chain map extending the zero map $\beta: H \rightarrow H^{\prime}$. It will suffice to construct maps $\lambda_{i}: F_{i} \rightarrow F_{i+1}^{\prime}$ defining a chain homotopy from $\beta_{i}$ to 0, that is, with $\beta_{i}=f_{i+1}^{\prime} \lambda_{i}+\lambda_{i-1} f_{i}$. The $\lambda_{i}$'s are constructed inductively by a procedure much like the construction of the $\alpha_{i}$'s. When $i=0$ we let $\lambda_{-1}: H \rightarrow F_{0}^{\prime}$ be zero, and then the desired relation becomes $\beta_{0}=f_{1}^{\prime} \lambda_{0}$. We can achieve this by letting $\lambda_{0}$ send a basis element $x$ to an element $x^{\prime} \in F_{1}^{\prime}$ such that $f_{1}^{\prime}\left(x^{\prime}\right)=\beta_{0}(x)$. Such an $x^{\prime}$ exists since $\operatorname{Im} f_{1}^{\prime}=\operatorname{Ker} f_{0}^{\prime}$ and $f_{0}^{\prime} \beta_{0}(x)=\beta f_{0}(x)=0$. For the inductive step we wish to define $\lambda_{i}$ to take a basis element $x \in F_{i}$ to an element $x^{\prime} \in F_{i+1}^{\prime}$ such that $f_{i+1}^{\prime}\left(x^{\prime}\right)=\beta_{i}(x)-\lambda_{i-1} f_{i}(x)$. This will be possible if $\beta_{i}(x)-\lambda_{i-1} f_{i}(x)$ lies in $\operatorname{Im} f_{i+1}^{\prime}=\operatorname{Ker} f_{i}^{\prime}$, which will hold if $f_{i}^{\prime}\left(\beta_{i}-\lambda_{i-1} f_{i}\right)=0$. Using the relation $f_{i}^{\prime} \beta_{i}=\beta_{i-1} f_{i}$ and the relation $\beta_{i-1}=f_{i}^{\prime} \lambda_{i-1}+\lambda_{i-2} f_{i-1}$ which holds by induction, we have
$$\begin{aligned} f_{i}^{\prime}\left(\beta_{i}-\lambda_{i-1} f_{i}\right) &=f_{i}^{\prime} \beta_{i}-f_{i}^{\prime} \lambda_{i-1} f_{i} \\ &=\beta_{i-1} f_{i}-f_{i}^{\prime} \lambda_{i-1} f_{i}=\left(\beta_{i-1}-f_{i}^{\prime} \lambda_{i-1}\right) f_{i}=\lambda_{i-2} f_{i-1} f_{i}=0 \end{aligned}$$

## MATH41120-WE01/MATH4161-WE01 COURSE NOTES:

Now we return to topology. Given a space $X$ and an abelian group $G$, we define the group $C^{n}(X ; G)$ of singular $n$-cochains with coefficients in $G$ to be the dual group $\operatorname{Hom}\left(C_{n}(X), G\right)$ of the singular chain group $C_{n}(X)$. Thus an $n$-cochain $\varphi \in C^{n}(X ; G)$ assigns to each singular $n$-simplex $\sigma: \Delta^{n} \rightarrow X$ a value $\varphi(\sigma) \in G$. Since the singular $n$-simplices form a basis for $C_{n}(X)$, these values can be chosen arbitrarily, hence $n$-cochains are exactly equivalent to functions from singular $n$-simplices to $G$.

The coboundary map $\delta: C^{n}(X ; G) \rightarrow C^{n+1}(X ; G)$ is the dual $\partial^{*}$, so for a cochain $\varphi \in C^{n}(X ; G)$, its coboundary $\delta \varphi$ is the composition $C_{n+1}(X) \stackrel{\partial}{\longrightarrow} C_{n}(X) \stackrel{\varphi}{\longrightarrow} G$. This means that for a singular $(n+1)$-simplex $\sigma: \Delta^{n+1} \rightarrow X$ we have
$$\delta \varphi(\sigma)=\sum_{i}(-1)^{i} \varphi\left(\sigma \mid\left[v_{0}, \cdots, \hat{v}_{i}, \cdots, v_{n+1}\right]\right)$$
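A hedged finite sketch of the coboundary formula: simplices are encoded as vertex tuples, an $n$-cochain with $G=\mathbb{Z}$ as a dictionary from $n$-simplices to integers, and $\delta \varphi$ is evaluated on an $(n+1)$-simplex by alternating over its faces (this encoding is ours, not from the notes):

```python
# Evaluate (delta phi)(sigma) = sum_i (-1)^i phi(sigma with vertex v_i
# omitted) for a cochain phi with integer coefficients (G = Z), encoding
# each simplex as the tuple of its vertices.

def coboundary(phi, sigma):
    """Evaluate delta(phi) on the simplex sigma (a tuple of vertices)."""
    total = 0
    for i in range(len(sigma)):
        face = sigma[:i] + sigma[i + 1:]      # restriction omitting v_i
        total += (-1) ** i * phi.get(face, 0)
    return total

# A 1-cochain on the triangle (0, 1, 2), assigning values to its edges.
phi = {(0, 1): 3, (1, 2): 5, (0, 2): 4}
print(coboundary(phi, (0, 1, 2)))  # phi(1,2) - phi(0,2) + phi(0,1) = 5 - 4 + 3 = 4
```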

# Algebra II MATH2581-WE01


$$\begin{aligned}
\varphi_{L}(a) \vee \varphi_{L}(b)=\left(a^{\star \uparrow} \vee b^{\star \uparrow}\right) \cap D(L) &=\left(a^{\star} \wedge b^{\star}\right)^{\uparrow} \cap D(L) \\
&=(a \vee b)^{\star \uparrow} \cap D(L) \\
&=\varphi_{L}(a \vee b)
\end{aligned}$$

$$\begin{aligned} \varphi_{L}(a) \cap \varphi_{L}(b)=a^{\star \uparrow} \cap b^{\star \uparrow} \cap D(L) &=\left(a^{\star} \vee b^{\star}\right)^{\uparrow} \cap D(L) \\ &=(a \wedge b)^{\star \uparrow} \cap D(L) \\ &=\varphi_{L}(a \wedge b). \end{aligned}$$
Since also $\varphi_{L}(0)=\{1\}$ and $\varphi_{L}(1)=D(L)$ it follows that $\varphi_{L}$ is a $(0,1)$-lattice morphism.
Now, by the distributivity, we have $x=x^{\star \star} \wedge\left(x \vee x^{\star}\right)$. It follows that
$$(\forall x \in L) \quad x^{\uparrow}=x^{\star \star \uparrow} \vee\left(x^{\uparrow} \cap D(L)\right).$$
In fact, if $t \in x^{\uparrow}$ then $t=t \vee x=\left(t \vee x^{\star \star}\right) \wedge\left(t \vee x \vee x^{\star}\right)$ where $x \vee x^{\star} \in D(L)$; and conversely if $t \in x^{\star \star \uparrow} \vee\left(x^{\uparrow} \cap D(L)\right)$ then $t \geqslant y \wedge z$ where $y \geqslant x^{\star \star} \geqslant x$ and $z \geqslant x$, and thus $t \geqslant x$.

## MATH2581-WE01 COURSE NOTES:

Proof $\Rightarrow$: If $L$ is a Heyting algebra, these follow from the above, whereas (3) is immediate from the fact that $x:(x \wedge y)=1$.
$\Leftarrow$ : Conversely, if the identites hold then $x \wedge(y: x)=x \wedge y \leqslant y$; and if $x \wedge z \leqslant y$ then
$$z \wedge(y: x)=z \wedge[(z \wedge y):(z \wedge x)]=z \wedge[(y \wedge z):(x \wedge y \wedge z)]=z$$
and so $z \leqslant y: x$. Thus $(L ;:)$ is a Heyting algebra.
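The defining property $x \wedge z \leqslant y \Leftrightarrow z \leqslant y: x$ can be checked exhaustively on a small concrete Heyting algebra; the sketch below uses the powerset of $\{1,2,3\}$ (a hypothetical example of our choosing) and verifies the identity $z \wedge(y: x)=z \wedge[(z \wedge y):(z \wedge x)]$ used in the proof:

```python
from itertools import chain, combinations

# Check the identity z /\ (y : x) = z /\ [(z/\y) : (z/\x)] on a small
# concrete Heyting algebra: the powerset of {1, 2, 3} under inclusion,
# with meet = intersection and y : x = largest z with z /\ x <= y.

U = frozenset({1, 2, 3})
elements = [frozenset(s) for s in chain.from_iterable(
    combinations(sorted(U), r) for r in range(len(U) + 1))]

def rel_pseudo(y, x):
    """y : x = union of all z with z & x <= y (exists by distributivity)."""
    best = frozenset()
    for z in elements:
        if (z & x) <= y:
            best |= z
    return best

ok = all(z & rel_pseudo(y, x) == z & rel_pseudo(z & y, z & x)
         for x in elements for y in elements for z in elements)
print(ok)
```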

# Advanced Quantum Theory MATH41020-WE01/MATH4061-WE01


Velocity Transformation Matrix. In the language of transformation theory, postulate ii links time and space together by the four-vectors (note that this is not the convention used in most texts on general relativity)
$$x_{\mu}=\left(x_{0}, \mathbf{x}\right), \quad x^{\mu}=\left(x_{0},-\mathbf{x}\right),$$
where $x_{0}=t$ and $\mu=0,1,2,3$, such that the scalar product
$$x_{\mu} x^{\mu}=x_{0}^{2}-\mathbf{x}^{2} \equiv x \cdot x=x^{2}$$

is invariant from frame to frame,
$$x_{\mu}^{\prime} x^{\prime \mu}=x_{\mu} x^{\mu} .$$
Postulate $i$ then implies that there must exist a linear transformation connecting $x_{\mu}$ and $x_{\mu}^{\prime}$ obeying (3.3). For a passive Lorentz velocity transformation, $v_{x}=-v$, of an observer measuring an event at $x^{\prime}$ relative to an observer measuring the event at $x$, (3.3) is satisfied provided (see Problem 3.1)
$$\begin{aligned} &t^{\prime}=\gamma(t+v x) \\ &x^{\prime}=\gamma(x+v t), \quad y^{\prime}=y, \quad z^{\prime}=z \end{aligned}$$
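A direct numerical check (with arbitrary illustrative coordinates and units with $c=1$) that this passive boost preserves the interval $x_{\mu} x^{\mu}$:

```python
import math

# Check that the passive boost t' = gamma(t + v x), x' = gamma(x + v t)
# preserves the interval x_mu x^mu = t^2 - x^2 - y^2 - z^2 (c = 1).

def boost(t, x, v):
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return gamma * (t + v * x), gamma * (x + v * t)

t, x, y, z = 2.0, 0.5, 0.3, -1.0   # illustrative event coordinates
v = 0.6                            # illustrative boost velocity
tp, xp = boost(t, x, v)

s2 = t**2 - x**2 - y**2 - z**2
s2p = tp**2 - xp**2 - y**2 - z**2
print(s2, s2p)                     # the interval is invariant
```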

## MATH41020-WE01/MATH4061-WE01 COURSE NOTES:

we find
$$\begin{aligned} \Lambda_{\mu}{ }^{\nu} &=(\cosh \zeta I)_{\mu}{ }^{\nu}+(\sinh \zeta I)_{\mu}{ }^{\nu} \\ &=\delta_{\mu}{ }^{\nu}-\left(I^{2}\right)_{\mu}{ }^{\nu}+\left(I^{2}\right)_{\mu}{ }^{\nu} \cosh \zeta+I_{\mu}{ }^{\nu} \sinh \zeta. \end{aligned}$$
Since (3.14) means $1-I^{2}$ is nonvanishing and unity only in the $y y$ and $z z$ entries, it is clear that (3.16) is identical to the original form of the velocity transformation (3.6) provided
$$\gamma=\cosh \zeta, \quad \gamma v=\sinh \zeta,$$
which implies $v=\tanh \zeta$ or $\gamma=\left(1-v^{2}\right)^{-1 / 2}$ (see Problem 3.1). One can generalize (3.13)-(3.16) to include velocity transformations in an arbitrary direction by replacing $\zeta I_{\mu \nu}$ with the sum of three terms, one corresponding to each of the three space directions. It is also possible to invert (3.17) as
$$\zeta=\tanh ^{-1} v=\frac{1}{2} \log \left(\frac{1+v}{1-v}\right),$$
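These rapidity relations are easy to verify numerically (the value of $v$ is illustrative):

```python
import math

# Verify zeta = atanh(v) = (1/2) log((1+v)/(1-v)), gamma = cosh(zeta)
# = (1 - v^2)^(-1/2), and gamma*v = sinh(zeta), in units with c = 1.

v = 0.8                                   # illustrative velocity
zeta = 0.5 * math.log((1 + v) / (1 - v))  # rapidity from the closed form
gamma = 1.0 / math.sqrt(1 - v * v)

print(abs(zeta - math.atanh(v)))          # the two expressions for zeta agree
print(abs(math.cosh(zeta) - gamma))       # gamma = cosh(zeta)
print(abs(math.sinh(zeta) - gamma * v))   # gamma*v = sinh(zeta)
```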