# Morse Theory (MATH 7371)


$$U_{1}=\partial_{1} W \cap U=\partial_{1} W \backslash D(-v)$$
will be denoted by $(-v)^{\varkappa}$ and called the transport map associated to $(-v)$, so that
$$(-v)^{\varkappa}(x)=E(x,-v)=\gamma(x, \tau(x,-v) ;-v) .$$

The transport map is a diffeomorphism
$$(-v)^{\varkappa}: U_{1} \stackrel{\approx}{\longrightarrow} U_{0}$$
of $U_{1}$ onto the open subset $U_{0}$ of $\partial_{0} W$ :
$$U_{0}=\partial_{0} W \cap U=\partial_{0} W \backslash D(v) .$$
The inverse diffeomorphism is the transport map corresponding to $v$ :
$$v^{\varkappa}: U_{0} \stackrel{\approx}{\longrightarrow} U_{1} .$$

## MATH 7371 Course Notes

Proof. Let $W^{\prime}=f^{-1}([c, d])$. The Morse function
$$f \mid W^{\prime}: W^{\prime} \rightarrow[c, d]$$
has no critical points and the domain of definition of the functions $\tau\left(\cdot,-v \mid W^{\prime}\right)$ and $E\left(\cdot,-v \mid W^{\prime}\right)$ is the whole of $W^{\prime}$. The deformation retraction
$$H: W_{1} \times[0,1] \rightarrow W_{1}$$
is defined as follows:
$$\begin{array}{ll} H(x, t)=\gamma\left(x, t \cdot \tau\left(x,-v \mid W^{\prime}\right) ;-v \mid W^{\prime}\right) & \text { for } \quad x \in W^{\prime}, \\ H(x, t)=x & \text { for } \quad x \in W_{0} \end{array}$$
The same formula defines a deformation retraction of $U$ onto $W_{0}$.

# Financial Derivatives (MATH 4683)


Suppose we write
$$X(t)=\left(r-\frac{\sigma^{2}}{2}\right) t+\sigma Z(t)$$
so that
$$X(t)=\ln \frac{S(t)}{S_{0}} \quad \text { or } \quad S(t)=S_{0} e^{X(t)}$$

Now, the respective partial derivatives of $S$ are
$$\frac{\partial S}{\partial t}=0, \quad \frac{\partial S}{\partial X}=S \quad \text { and } \quad \frac{\partial^{2} S}{\partial X^{2}}=S .$$
By the Ito lemma, we obtain
$$d S(t)=\left(r-\frac{\sigma^{2}}{2}+\frac{\sigma^{2}}{2}\right) S(t) d t+\sigma S(t) d Z(t)$$
or
$$\frac{d S(t)}{S(t)}=r d t+\sigma d Z(t), \text { with } S(0)=S_{0}$$
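The lognormal solution can be checked numerically. Below is a minimal Monte Carlo sketch (the parameter values are illustrative, not from the notes): simulating $S(t)=S_{0}e^{X(t)}$ directly and comparing the sample mean with $S_{0}e^{rt}$, which the dynamics $dS/S = r\,dt + \sigma\,dZ$ imply.

```python
import numpy as np

# Monte Carlo sketch: the exact solution S(t) = S0*exp((r - sigma^2/2)t + sigma*Z(t))
# implies E[S(t)] = S0*exp(r*t); we check this by simulation.
rng = np.random.default_rng(0)
S0, r, sigma, t, n = 100.0, 0.05, 0.2, 1.0, 1_000_000   # illustrative values

Z = rng.standard_normal(n) * np.sqrt(t)        # Z(t) ~ N(0, t)
X = (r - sigma**2 / 2) * t + sigma * Z         # X(t) = ln(S(t)/S0)
S = S0 * np.exp(X)

print(S.mean(), S0 * np.exp(r * t))            # sample mean vs. exact mean
```

The two printed values should agree to within Monte Carlo error.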

## MATH 4683 Course Notes

Suppose $\{X(t), t \geq 0\}$ is a standard Brownian motion; the corresponding reflected Brownian process is defined by
$$Y(t)=|X(t)|, \quad t \geq 0 .$$
Show that $Y(t)$ is also Markovian and that its mean and variance are, respectively,
$$E[Y(t)]=\sqrt{\frac{2 t}{\pi}}$$
and
$$\operatorname{var}(Y(t))=\left(1-\frac{2}{\pi}\right) t$$
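Since $X(t) \sim N(0, t)$ at a fixed time, these two moments of $Y(t)=|X(t)|$ are easy to verify by simulation. A minimal sketch (the time point and sample size are illustrative):

```python
import numpy as np

# Check E[Y(t)] = sqrt(2t/pi) and var Y(t) = (1 - 2/pi) t for Y = |X(t)|,
# where X(t) ~ N(0, t) is standard Brownian motion sampled at a fixed time t.
rng = np.random.default_rng(1)
t, n = 2.0, 1_000_000
Y = np.abs(rng.standard_normal(n) * np.sqrt(t))

print(Y.mean(), np.sqrt(2 * t / np.pi))      # sample vs. exact mean
print(Y.var(), (1 - 2 / np.pi) * t)          # sample vs. exact variance
```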

# Probability and Risks (MATH 4681)


$$X_{A(\ell)}^{2}=\frac{T_{\ell}^{2}}{\widehat{V}\left(T_{\ell}\right)}, \quad \ell=r, s$$
where
$$T_{\ell}=\sum_{j} \widehat{w}_{\ell j}\left(p_{1 j}-p_{2 j}\right), \quad \ell=r, s$$

$$\widehat{V}\left(T_{\ell}\right)=\widehat{V}\left(T_{\ell} \mid H_{0}\right)=\sum_{j} \widehat{w}_{\ell j}^{2} \widehat{V}_{0 j}, \quad \ell=r, s$$
and
$$\widehat{w}_{\ell j}=\frac{1}{g_{\ell}^{\prime}\left(p_{j}\right) \widehat{V}_{0 j}}, \quad \ell=r, s .$$

## MATH 4681 Course Notes

$$V\left(\widehat{\mu}_{\theta}\right)=\frac{\sum_{j} \tau_{j}^{2}\left(\sigma_{\widehat{\theta}_{j}}^{2}+\sigma_{\theta}^{2}\right)}{\left(\sum_{j} \tau_{j}\right)^{2}}$$
Thus
$$V\left(\widehat{\mu}_{\theta}\right) \sum_{j} \tau_{j}=\frac{\sum_{j} \tau_{j}^{2}\left(\sigma_{\widehat{\theta}_{j}}^{2}+\sigma_{\theta}^{2}\right)}{\sum_{j} \tau_{j}}$$
so that
$$E\left(X_{H, C}^{2}\right)=\sum_{j} \tau_{j}\left(\sigma_{\widehat{\theta}_{j}}^{2}+\sigma_{\theta}^{2}\right)-\frac{\sum_{j} \tau_{j}^{2}\left(\sigma_{\widehat{\theta}_{j}}^{2}+\sigma_{\theta}^{2}\right)}{\sum_{j} \tau_{j}}$$
Noting that $\tau_{j}=\sigma_{\widehat{\theta}_{j}}^{-2}$, it is readily shown that
$$E\left(X_{H, C}^{2}\right)=(K-1)+\sigma_{\theta}^{2}\left[\sum_{j} \tau_{j}-\frac{\sum_{j} \tau_{j}^{2}}{\sum_{j} \tau_{j}}\right]$$

# Mathematical and Computational Methods for Physics (MATH 4606)


The solution of the Neumann problem in the plane is sought in the form of the simple-layer potential
$$u(A)=\int_{L} \rho(P) \ln \frac{1}{r} d l$$
and the following integral equation is obtained for the required density $\rho$ for the internal problem
$$\pi \rho(A)=f(A)-\int_{L} \rho(P) K^{*}(A ; P) d l$$

and for the external problem the sign of its right-hand side changes. Here
$$K^{*}(A ; P)=\frac{\cos (\mathbf{r}, \mathbf{n})}{r}$$
where $\mathbf{r}=\overrightarrow{A P}$ and $\mathbf{n}$ is the outward normal to $L$ at the point $A$.

## MATH 4606 Course Notes

To find solutions of the homogeneous equation $u_{tt}=a^{2} \Delta u$ in the form of a steady sinusoidal regime with a given frequency, we obtain the Helmholtz equation
$$\Delta v+k^{2} v=0 .$$
At infinity, the solutions of this equation should satisfy the irradiation principle
$$v=O\left(r^{-1}\right), \quad \frac{\partial v}{\partial r}+i k v=o\left(r^{-1}\right)$$
and for the two-dimensional case
$$v=O\left(r^{-1 / 2}\right), \quad \frac{\partial v}{\partial r}+i k v=o\left(r^{-1 / 2}\right)$$
in this case, the uniqueness theorem is valid.

# Advanced Probability and Statistics (MATH 3181)


In this random intercept and trend model, the Cholesky factor equals
$$\boldsymbol{T}=\left[\begin{array}{cc} \sigma_{v_{0}} & 0 \\ \frac{\sigma_{v_{01}}}{\sigma_{v_{0}}} & \left(\sigma_{v_{1}}^{2}-\frac{\sigma_{v_{01}}^{2}}{\sigma_{v_{0}}^{2}}\right)^{1 / 2} \end{array}\right]$$

thus, the relationship between the unstandardized and standardized random effects is
$$\begin{aligned} v_{0 i} &=\sigma_{v_{0}} \theta_{0 i} \\ v_{1 i} &=\frac{\sigma_{v_{01}}}{\sigma_{v_{0}}} \theta_{0 i}+\left(\sigma_{v_{1}}^{2}-\frac{\sigma_{v_{01}}^{2}}{\sigma_{v_{0}}^{2}}\right)^{1 / 2} \theta_{1 i} . \end{aligned}$$
The results of this analysis are listed in Table 10.3. Comparing this model to the previous one via a likelihood ratio test to assess whether the random trends are significant (i.e., $H_{0}: \sigma_{v_{1}}^{2}=\sigma_{v_{01}}=0$) yields $X_{2}^{2}=77.90$, $p<.001$. Clearly there is strong evidence of subject heterogeneity in the time trends.
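The closed form for the $2 \times 2$ Cholesky factor can be verified directly: $\boldsymbol{T}\boldsymbol{T}^{\prime}$ must reproduce the random-effects covariance matrix. A minimal sketch with illustrative values of $\sigma_{v_{0}}$, $\sigma_{v_{1}}$, $\sigma_{v_{01}}$ (not taken from the notes):

```python
import numpy as np

# Sketch: the 2x2 Cholesky factor T of the random-effects covariance
# Sigma = [[s0^2, s01], [s01, s1^2]] has the closed form given above.
s0, s1, s01 = 1.5, 0.8, 0.6          # illustrative values, not from the notes

T = np.array([[s0, 0.0],
              [s01 / s0, np.sqrt(s1**2 - s01**2 / s0**2)]])
Sigma = np.array([[s0**2, s01],
                  [s01, s1**2]])

print(np.allclose(T @ T.T, Sigma))                 # True
print(np.allclose(T, np.linalg.cholesky(Sigma)))   # matches numpy's factor
```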

## MATH 3181 Course Notes

$$p_{i j c}=\frac{\exp \left(z_{i j c}\right)}{\sum_{h=1}^{C} \exp \left(z_{i j h}\right)} \quad \text { for } c=1,2, \ldots, C,$$
where now
$$z_{i j c}=\boldsymbol{x}_{i j}^{\prime} \boldsymbol{\Gamma} \boldsymbol{d}_{c}+\left(\boldsymbol{z}_{i j}^{\prime} \otimes \boldsymbol{\theta}_{i}^{\prime}\right) \mathbf{J}_{r^{*}}^{\prime} \boldsymbol{\Lambda} \boldsymbol{d}_{c}$$

# Real Analysis (MATH 3150)


Proof. Write $h=g \circ f .$ By 8.1.5, there exists a function $A: S \rightarrow \mathbb{R}$, continuous at $c$, such that
(1) $f(x)-f(c)=A(x)(x-c)$ for all $x \in S$.
Similarly, there is a function $B: T \rightarrow \mathbb{R}$, continuous at $f(c)$, such that

(2) $g(y)-g(f(c))=B(y)(y-f(c))$ for all $y \in T$.
If $x \in S$, then $f(x) \in T$; putting $y=f(x)$ in (2), we have
$$\begin{aligned} g(f(x))-g(f(c)) &=B(f(x)) \cdot(f(x)-f(c)) \\ &=B(f(x)) \cdot A(x)(x-c) . \end{aligned}$$

## MATH 3150 Course Notes

$$\sigma=\{a<b\}, \quad \tau=\{a<c<b\}$$
Writing $M=\sup f$ as before, and
$$\begin{aligned} M^{\prime} &=\sup \{f(x): a \leq x \leq c\}, \\ M^{\prime \prime} &=\sup \{f(x): c \leq x \leq b\}, \end{aligned}$$
we have
$$S(\sigma)=M(b-a) \quad \text { and } \quad S(\tau)=M^{\prime}(c-a)+M^{\prime \prime}(b-c)$$
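Since $M^{\prime} \leq M$ and $M^{\prime \prime} \leq M$, refining the partition cannot increase the upper sum: $S(\tau) \leq S(\sigma)$. A numerical sketch of this (the function $f$ and the points $a, c, b$ are illustrative assumptions, with the suprema approximated on a fine grid):

```python
import numpy as np

# Numerical sketch: refining the partition {a<b} to {a<c<b} cannot increase
# the upper Darboux sum, since M' <= M and M'' <= M.
f = lambda x: np.sin(x) + 0.5 * x      # illustrative function (an assumption)
a, b, c = 0.0, 3.0, 1.2

xs = lambda lo, hi: np.linspace(lo, hi, 10_001)
M  = f(xs(a, b)).max()                 # sup of f on [a, b] (grid approximation)
M1 = f(xs(a, c)).max()                 # sup on [a, c]
M2 = f(xs(c, b)).max()                 # sup on [c, b]

S_sigma = M * (b - a)
S_tau   = M1 * (c - a) + M2 * (b - c)
print(S_sigma >= S_tau)                # True
```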

# Probability and Statistics (MATH 3081)


$$A\left(B_{Q}\right)=\prod_{h=1}^{r} A\left(B_{q h}\right)$$
Using these points and weights, the response model becomes
$$z_{i j q}=x_{i j}^{\prime} \beta+z_{i j}^{\prime} T B_{q},$$

and so the conditional likelihood is
$$\ell\left(\boldsymbol{Y}_{i} \mid \boldsymbol{B}_{q}\right)=\prod_{j=1}^{n_{i}} \Psi\left(z_{i j q}\right)^{Y_{i j}}\left[1-\Psi\left(z_{i j q}\right)\right]^{1-Y_{i j}}$$

## MATH 3081 Course Notes

$$\frac{\partial \log L}{\partial \eta}=\sum_{i=1}^{N}\left[h\left(\boldsymbol{Y}_{i}\right)\right]^{-1} \int_{\boldsymbol{\theta}} \frac{\partial \ell_{i}}{\partial \eta} g(\boldsymbol{\theta}) d \boldsymbol{\theta}$$
where
$$\ell_{i}=\ell\left(\boldsymbol{Y}_{i} \mid \boldsymbol{\theta}\right)=\prod_{j=1}^{n_{i}} \prod_{c=1}^{C}\left(p_{i j c}\right)^{y_{i j c}}$$
and
$$p_{i j c}=P_{i j c}-P_{i j, c-1} .$$

# Interactive Mathematics (MATH 1213)


If we transform a line:
$$\begin{aligned} L(t) &=(1-t) P_{0}+t P_{1} \\ \mathcal{T}(L(t)) &=\mathcal{T}\left((1-t) P_{0}+t P_{1}\right) \\ &=(1-t) \mathcal{T}\left(P_{0}\right)+t \mathcal{T}\left(P_{1}\right) \end{aligned}$$

The result is clearly still a line (assuming $\mathcal{T}\left(P_{0}\right)$ and $\mathcal{T}\left(P_{1}\right)$ aren’t coincident). Similarly, if we transform a plane:
$$\begin{aligned} P(s, t) &=(1-s-t) P_{0}+s P_{1}+t P_{2} \\ \mathcal{T}(P(s, t)) &=\mathcal{T}\left((1-s-t) P_{0}+s P_{1}+t P_{2}\right) \\ &=(1-s-t) \mathcal{T}\left(P_{0}\right)+s \mathcal{T}\left(P_{1}\right)+t \mathcal{T}\left(P_{2}\right) \end{aligned}$$
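The key step in both derivations is that an affine map distributes over combinations whose coefficients sum to 1. A minimal numerical sketch of this, using a random affine map $\mathcal{T}(P)=\boldsymbol{A}P+\boldsymbol{t}$ (the matrix, translation, and points are illustrative assumptions):

```python
import numpy as np

# Sketch: an affine map T(P) = A P + t applied to an affine combination of
# points equals the same combination of the mapped points, so lines map to
# lines and planes map to planes.
rng = np.random.default_rng(2)
A, tvec = rng.normal(size=(3, 3)), rng.normal(size=3)
T = lambda P: A @ P + tvec

P0, P1 = rng.normal(size=3), rng.normal(size=3)
t = 0.3                                 # coefficients (1-t) + t sum to 1
lhs = T((1 - t) * P0 + t * P1)
rhs = (1 - t) * T(P0) + t * T(P1)
print(np.allclose(lhs, rhs))            # True
```

The same check with three points and coefficients $(1-s-t, s, t)$ covers the plane case.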

## MATH 1213 Course Notes

We know that the transformed vector will be $\mathbf{v}_{\perp}-\mathbf{v}_{\parallel}$. Substituting the preceding equations into this gives us
$$\begin{aligned} \mathcal{T}(\mathbf{v}) &=\mathbf{v}_{\perp}-\mathbf{v}_{\parallel} \\ &=\mathbf{v}-2 \mathbf{v}_{\parallel} \\ &=\mathbf{v}-2(\mathbf{v} \cdot \hat{\mathbf{n}}) \hat{\mathbf{n}} \end{aligned}$$
From Chapter 2, we know that we can perform the projection of $\mathbf{v}$ on $\hat{\mathbf{n}}$ by multiplying by the tensor product matrix $\hat{\mathbf{n}} \otimes \hat{\mathbf{n}}$, so this becomes
$$\begin{aligned} \mathcal{T}(\mathbf{v}) &=\mathbf{v}-2(\hat{\mathbf{n}} \otimes \hat{\mathbf{n}}) \mathbf{v} \\ &=[\mathbf{I}-2(\hat{\mathbf{n}} \otimes \hat{\mathbf{n}})] \mathbf{v} \end{aligned}$$
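The reflection matrix $\mathbf{I}-2(\hat{\mathbf{n}} \otimes \hat{\mathbf{n}})$ is easy to check numerically: it flips the component of $\mathbf{v}$ along $\hat{\mathbf{n}}$, and applying it twice gives the identity. A minimal sketch (the normal and test vector are illustrative):

```python
import numpy as np

# Sketch of the reflection matrix I - 2 (n ⊗ n): reflecting v across the
# plane with unit normal n negates the component of v along n.
n = np.array([0.0, 0.0, 1.0])                 # unit normal (assumed example)
R = np.eye(3) - 2.0 * np.outer(n, n)          # outer product is the tensor product

v = np.array([1.0, 2.0, 3.0])
print(R @ v)                                  # [ 1.  2. -3.]
print(np.allclose(R @ R, np.eye(3)))          # reflecting twice is the identity
```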

# Precalculus (MATH 1120)


$$a_{k+1}>a_{k}$$
so
$$a_{k+1}+6>a_{k}+6$$

and
$$\frac{1}{2}\left(a_{k+1}+6\right)>\frac{1}{2}\left(a_{k}+6\right)$$
Thus
$$a_{k+2}>a_{k+1}$$
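The algebra above is the induction step for the recurrence $a_{n+1}=\frac{1}{2}(a_{n}+6)$ (this reading is implied by the steps, not stated explicitly). A short sketch confirming that, starting below the fixed point 6, the sequence is strictly increasing and converges to 6 (the starting value is illustrative):

```python
# Sketch: iterate a_{n+1} = (a_n + 6)/2 and verify the sequence is strictly
# increasing, approaching the fixed point a = 6.
a = 1.0                                  # illustrative a_1 < 6
prev = a
increasing = True
for _ in range(50):
    a = (a + 6) / 2
    increasing &= (a > prev)
    prev = a
print(increasing, round(a, 6))           # monotone, converging to 6
```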

## MATH 1120 Course Notes

Let $a$ and $b$ be positive numbers with $a>b$. Let $a_{1}$ be their arithmetic mean and $b_{1}$ their geometric mean:
$$a_{1}=\frac{a+b}{2} \quad b_{1}=\sqrt{a b}$$
Repeat this process so that, in general,
$$a_{n+1}=\frac{a_{n}+b_{n}}{2} \quad b_{n+1}=\sqrt{a_{n} b_{n}}$$
(a) Use mathematical induction to show that
$$a_{n}>a_{n+1}>b_{n+1}>b_{n}$$
(c) Show that $\lim_{n \rightarrow \infty} a_{n}=\lim_{n \rightarrow \infty} b_{n}$. Gauss called the common value of these limits the arithmetic-geometric mean of the numbers $a$ and $b$.
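The iteration in this exercise can be sketched numerically: each step checks the chain $a_{n}>a_{n+1}>b_{n+1}>b_{n}$ (which follows from the strict AM–GM inequality while $a_{n} \neq b_{n}$), and the two sequences squeeze together rapidly. The starting values below are illustrative:

```python
import numpy as np

# Sketch of the arithmetic-geometric mean iteration: verify the chain
# a_n > a_{n+1} > b_{n+1} > b_n at each step and watch the sequences converge.
a, b = 4.0, 1.0                          # illustrative a > b > 0
for _ in range(4):
    a_next, b_next = (a + b) / 2, np.sqrt(a * b)
    assert a > a_next > b_next > b       # the inequality from part (a)
    a, b = a_next, b_next
print(a, b)                              # both approach the AGM of 4 and 1
```

Convergence is quadratic, so only a handful of iterations are needed before the two values agree to many digits.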