# STAT4528 Probability and Martingale Theory | Sydney Assignment Help

We say that a trading strategy $\phi=\left(\phi^{1}, \phi^{2}\right)$ over the time interval $[0, T]$ is self-financing if its wealth process $V(\phi)$, which is set to equal
$$V_{t}(\phi)=\phi_{t}^{1} S_{t}+\phi_{t}^{2} B_{t}, \quad \forall t \in[0, T],$$
satisfies the following condition
$$V_{t}(\phi)=V_{0}(\phi)+\int_{0}^{t} \phi_{u}^{1} d S_{u}+\int_{0}^{t} \phi_{u}^{2} d B_{u}, \quad \forall t \in[0, T]$$

where the first integral is understood in the Itô sense and the second is the pathwise Riemann (or Lebesgue) integral.

It is, of course, implicitly assumed that both integrals on the right-hand side are well-defined. It is well known that a sufficient condition for this is that ${ }^{1}$
$$\mathbb{P}\left\{\int_{0}^{T}\left(\phi_{u}^{1}\right)^{2} d u<\infty\right\}=1 \text { and } \mathbb{P}\left\{\int_{0}^{T}\left|\phi_{u}^{2}\right| d u<\infty\right\}=1$$
We denote by $\Phi$ the class of all self-financing trading strategies. It follows from the example below that arbitrage opportunities are not excluded a priori from the class of self-financing trading strategies.
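The self-financing identity is easy to verify on a discretised path: at every rebalancing date the mark-to-market change in wealth must equal $\phi^{1}\,\Delta S+\phi^{2}\,\Delta B$. The sketch below checks this for a constant-mix strategy; the parameters `mu`, `sigma`, `r` and the weight `w` are illustrative assumptions, not from the notes.

```python
import random, math

random.seed(0)
S, B = 100.0, 1.0                 # stock and bond prices (hypothetical)
r, mu, sigma, dt = 0.02, 0.05, 0.2, 1 / 252
V = 1000.0                        # initial wealth
w = 0.6                           # constant stock weight (our choice)

for _ in range(252):
    phi1 = w * V / S              # shares held over [t, t + dt]
    phi2 = (1 - w) * V / B        # bond units held over [t, t + dt]
    dS = S * (mu * dt + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0))
    dB = B * r * dt
    gain = phi1 * dS + phi2 * dB  # discrete analogue of the two integrals
    S, B = S + dS, B + dB
    V_new = phi1 * S + phi2 * B   # mark-to-market wealth after the move
    assert abs(V_new - (V + gain)) < 1e-8   # self-financing identity
    V = V_new

print(round(V, 2))
```

Because the portfolio is rebalanced only after prices move, the wealth change over each step is exactly the trading gain, which is the discrete counterpart of the condition above.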

## STAT4528 COURSE NOTES:

$$c(s, t)=s N\left(d_{1}(s, t)\right)-K e^{-r t} N\left(d_{2}(s, t)\right)$$
where
$$d_{1}(s, t)=\frac{\ln (s / K)+\left(r+\frac{1}{2} \sigma^{2}\right) t}{\sigma \sqrt{t}}$$
and $d_{2}(s, t)=d_{1}(s, t)-\sigma \sqrt{t}$, or explicitly
$$d_{2}(s, t)=\frac{\ln (s / K)+\left(r-\frac{1}{2} \sigma^{2}\right) t}{\sigma \sqrt{t}} .$$
Furthermore, let $N$ stand for the standard Gaussian cumulative distribution function
$$N(x)=\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^{x} e^{-z^{2} / 2} d z, \quad \forall x \in \mathbb{R} .$$
We adopt the following notational convention
$$d_{1,2}(s, t)=\frac{\ln (s / K)+\left(r \pm \frac{1}{2} \sigma^{2}\right) t}{\sigma \sqrt{t}} .$$
Let us denote by $C_{t}$ the arbitrage price at time $t$ of a European call option in the Black-Scholes model. We are in a position to state the main result.
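The pricing function $c(s,t)$ above can be evaluated directly with the standard library, using $N(x)=\tfrac12(1+\operatorname{erf}(x/\sqrt{2}))$. The parameter values in the sketch are illustrative, not from the notes.

```python
from math import log, sqrt, exp, erf

def N(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def call_price(s, t, K, r, sigma):
    # Black-Scholes call value c(s, t) as displayed above
    d1 = (log(s / K) + (r + 0.5 * sigma**2) * t) / (sigma * sqrt(t))
    d2 = d1 - sigma * sqrt(t)
    return s * N(d1) - K * exp(-r * t) * N(d2)

# illustrative inputs: s = K = 100, r = 5%, sigma = 20%, t = 1 year
print(round(call_price(100.0, 1.0, 100.0, 0.05, 0.2), 4))
```

For these inputs the formula gives the familiar textbook value of about $10.45$.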

# STAT4028 Probability and Mathematical Statistics | Sydney Assignment Help

Differentiating $G$ with respect to $c$ and setting this derivative equal to zero gives
$$\begin{aligned} \frac{d G(c)}{d c} &=[1-F(c)]-c f(c)=0 \\ \text { i.e., } \quad \frac{f(c)}{1-F(c)} &=\frac{1}{c} \end{aligned}$$

b. The left-hand side of the last equation is the hazard function $h$ of $\mathbf{U}$; cf. page 2. The equation states that the optimal strategy $c$ is the abscissa at which the hazard function $h(c)$ crosses the function $1 / c$.
c. For an exponential rv, the hazard function is constant, $h(c)=\lambda$. Therefore, the optimal strategy is $c=1 / \lambda$, i.e., to choose the expected value of $\mathbf{U}$. With this strategy, the expected gain is
$$G\left(\frac{1}{\lambda}\right)=\frac{1}{\lambda} \cdot e^{-1}$$
i.e., about $36.8 \%$ of the expected value of $\mathbf{U}$. This value appears remarkably small, but due to the large spread (relative to its mean) of the exponential distribution, there is no way to further improve it within the given setting.
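This optimum is easy to confirm numerically: with $G(c)=c\,[1-F(c)]=c\,e^{-\lambda c}$, a grid search recovers $c^{*}=1/\lambda$ and $G(c^{*})=e^{-1}/\lambda$. The rate $\lambda=2$ below is an illustrative assumption.

```python
import math

lam = 2.0                                  # hypothetical exponential rate

def G(c):
    # expected gain: keep c with probability P(U > c) = exp(-lam * c)
    return c * math.exp(-lam * c)

# crude grid search for the maximiser of G
grid = [i / 10000 for i in range(1, 30000)]
c_star = max(grid, key=G)

print(round(c_star, 3), round(G(c_star), 4))   # → 0.5 0.1839
```

The maximiser is $1/\lambda=0.5$ and the maximal gain $e^{-1}/2\approx0.1839$, i.e. about $36.8\%$ of $E[\mathbf{U}]=0.5$, as stated.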

## STAT4028 COURSE NOTES:

With the strategy $c$, the gain (actually, loss) will be equal to $-c$ with probability $\mathrm{P}(\mathbf{U} \leq c)=F(c)$, and it will be equal to $+c$ with probability $\mathrm{P}(\mathbf{U}>c)=1-F(c)$. Weighing these two cases by their respective probabilities, we get
$$\begin{aligned} G(c) &=-c \cdot F(c)+c \cdot[1-F(c)] \\ &=c \cdot[1-2 F(c)] \end{aligned}$$
Figure $3.6$ shows the qualitative behavior of $G$.
Differentiating $G$ with respect to $c$ and setting this derivative equal to zero yields the optimum condition
$$\frac{d G(c)}{d c}=1-2[c f(c)+F(c)]=0$$

# STAT4027 Advanced Statistical Modelling | Sydney Assignment Help

In Bayesian estimation, integral expressions such as
$$f_{\pi}(x)=\int_{\Theta} f(x \mid \theta) \pi(\theta) d \theta$$
or the mean cost function
$$\int_{E \times \Theta} C(\theta, T(x)) f(x \mid \theta) \pi(\theta) d x d \theta$$
cannot generally be computed analytically. In order to solve this kind of problem, we consider the general expression
$$I=\int_{E} h(x) f(x) d x$$

where $f$ is a probability density function. We can compute a numerical approximation of $I$ by simulating $n$ realisations of independent random variables $X_{1}, \ldots, X_{n}$ with common density $f$. We then consider the estimator of $I$ defined by
$$\hat{I}=\frac{1}{n} \sum_{k=1}^{n} h\left(X_{k}\right)$$
This is an unbiased estimator, and its variance is given by
$$\operatorname{var}[\hat{I}]=\frac{1}{n}\left(\int_{E} h^{2}(x) f(x) d x-\left(\int_{E} h(x) f(x) d x\right)^{2}\right)$$
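A minimal Monte Carlo sketch of this estimator, with the illustrative choice $X\sim N(0,1)$ and $h(x)=x^{2}$ (so that $I=E[X^{2}]=1$ exactly):

```python
import random, math

random.seed(42)
n = 100_000
xs = [random.gauss(0.0, 1.0) for _ in range(n)]   # samples from f
hs = [x * x for x in xs]                          # h(x) = x^2

I_hat = sum(hs) / n                               # Monte Carlo estimate of I
# plug-in estimate of var[I_hat] = (E[h^2] - I^2) / n
var_hat = (sum(h * h for h in hs) / n - I_hat**2) / n

print(round(I_hat, 3), round(math.sqrt(var_hat), 4))
```

Here the true variance is $\operatorname{var}[h(X)]/n = 2/n$, so the reported standard error should be close to $\sqrt{2/n}\approx0.0045$.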

## STAT4027 COURSE NOTES:

$$\operatorname{var}[\mathbb{E}[h(X) \mid Z]] \leq \operatorname{var}[h(X)]$$
which suggests that estimators of $\int_{E} h(x) f(x) d x$ of the form
$$\frac{1}{n} \sum_{k=1}^{n} \mathbb{E}\left[h\left(X_{k}\right) \mid Z_{k}\right]$$
could have a smaller variance than
$$\frac{1}{n} \sum_{k=1}^{n} h\left(X_{k}\right) .$$
This is evident in the particular case where the random variables $X_{k}$ are uncorrelated. The conditions under which this technique, called Rao-Blackwellisation, is justified can be found in the literature.
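A small simulation illustrates the variance reduction. The hierarchical model $Z\sim N(0,1)$, $X\mid Z\sim N(Z,1)$ and the target $E[X^{2}]=2$ are illustrative assumptions; here $\mathbb{E}[h(X)\mid Z]=Z^{2}+1$ is available in closed form.

```python
import random

random.seed(1)
n = 200_000
zs = [random.gauss(0.0, 1.0) for _ in range(n)]
xs = [z + random.gauss(0.0, 1.0) for z in zs]   # X | Z = z ~ N(z, 1)

crude = sum(x * x for x in xs) / n              # plain MC for E[X^2] = 2
rb = sum(z * z + 1.0 for z in zs) / n           # uses E[X^2 | Z] = Z^2 + 1

# per-sample variances of the two estimators' summands
v_crude = sum((x * x - crude) ** 2 for x in xs) / n
v_rb = sum((z * z + 1.0 - rb) ** 2 for z in zs) / n

print(round(crude, 3), round(rb, 3), round(v_rb / v_crude, 2))
```

Both estimators are unbiased for $2$, but the Rao-Blackwellised summand has variance $2$ against $8$ for the crude one, so the ratio printed should be near $0.25$.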

# STAT4026 Statistical Consulting | Sydney Assignment Help

Once we obtain the estimates of ED50s we calculate confidence intervals for them. For this purpose we apply Fieller's theorem, which enables us to obtain confidence intervals for ratios of Gaussian random variables. Let $\gamma=z_{\alpha / 2}^{2} V_{11} / \hat{\beta}_{1}^{2}$; then the $100(1-\alpha) \%$ confidence interval for $E D 50$ is
$$\widehat{E D 50}+\frac{\gamma}{1-\gamma}\left(\widehat{E D 50}+\frac{V_{10}}{V_{11}}\right) \pm \frac{z_{\alpha / 2}}{(1-\gamma) \hat{\beta}_{1}} K,$$

where
$$K^{2}=V_{00}+2 \widehat{E D 50}\, V_{10}+\widehat{E D 50}^{2} V_{11}-\gamma\left(V_{00}-V_{10}^{2} / V_{11}\right) .$$
The matrix of the $V_{i j}$'s is obtained from the logistic regression output. This equation can also be extended to the case of general $p$ by replacing $E D 50$ by $E D 100 p$ in the equation.
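The interval can be assembled directly from the logistic regression output. In the sketch below the helper `fieller_ci` and all numeric inputs are hypothetical illustrations of the formula above, with $z_{\alpha/2}=1.96$ for a 95% interval.

```python
from math import sqrt

def fieller_ci(ed50, beta1, V00, V10, V11, z=1.96):
    # Fieller confidence interval for ED50, as in the displayed formula
    g = z**2 * V11 / beta1**2                       # the quantity gamma
    K = sqrt(V00 + 2 * ed50 * V10 + ed50**2 * V11
             - g * (V00 - V10**2 / V11))
    centre = ed50 + g / (1 - g) * (ed50 + V10 / V11)
    half = z / ((1 - g) * abs(beta1)) * K
    return centre - half, centre + half

# hypothetical logistic-regression output (illustrative numbers only)
lo, hi = fieller_ci(ed50=2.5, beta1=1.8, V00=0.30, V10=-0.10, V11=0.06)
print(round(lo, 3), round(hi, 3))
```

Note that the interval is only finite when $\gamma<1$, i.e. when the slope $\hat{\beta}_{1}$ is significantly different from zero.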

## STAT4026 COURSE NOTES:

Since the $\tau_{i}$ are randomly selected, testing individual effects is meaningless and the hypotheses of interest are: $H_{0}: \sigma_{T}^{2}=0$ versus $H_{1}: \sigma_{T}^{2}>0$. Under $H_{0}$, the test statistic
$$F_{0}=\frac{\mathrm{SS}(\mathrm{TRT}) /(a-1)}{\mathrm{SSE} /(N-a)}=\frac{\mathrm{MS}(\mathrm{TRT})}{\mathrm{MSE}} \sim \mathcal{F}_{a-1, N-a}$$
is constructed in exactly the same manner as in the fixed effects case. However, the expected mean squares associated with the random effects model are different and are needed to construct estimators of the variance components. For the model $(8.1)$, it can be shown that:
$$E[\mathrm{MS}(\mathrm{TRT})]=\sigma^{2}+n \sigma_{T}^{2}, \quad E[\mathrm{MSE}]=\sigma^{2}$$
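For a balanced one-way random effects layout, the statistic and the method-of-moments estimator $\hat{\sigma}_{T}^{2}=(\mathrm{MS}(\mathrm{TRT})-\mathrm{MSE})/n$ implied by these expected mean squares can be sketched as follows; the simulated data and the values of $\sigma_{T}$, $\sigma$ are hypothetical.

```python
import random

random.seed(3)
a, n = 6, 10                       # a random treatments, n obs each; N = a*n
sigma_T, sigma = 2.0, 1.0          # true SDs used to simulate (hypothetical)

data = []
for _ in range(a):
    tau = random.gauss(0.0, sigma_T)                       # random effect
    data.append([tau + random.gauss(0.0, sigma) for _ in range(n)])

grand = sum(sum(row) for row in data) / (a * n)
means = [sum(row) / n for row in data]

ss_trt = n * sum((m - grand) ** 2 for m in means)          # SS(TRT)
ss_e = sum((y - m) ** 2 for row, m in zip(data, means) for y in row)

ms_trt = ss_trt / (a - 1)
mse = ss_e / (a * n - a)
F0 = ms_trt / mse
sigma_T2_hat = (ms_trt - mse) / n   # from E[MS(TRT)] = sigma^2 + n*sigma_T^2

print(round(F0, 2), round(sigma_T2_hat, 2), round(mse, 2))
```

With $\sigma_{T}^{2}=4$ and $n=10$, $E[\mathrm{MS}(\mathrm{TRT})]=41$ against $E[\mathrm{MSE}]=1$, so the simulated $F_{0}$ should be far above the null values.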

# STAT4025 Time Series | Sydney Assignment Help

Next, let $r$ be the continuously compounded rate of return per annum from time $t$ to $T$. Then we have
$$P_{T}=P_{t} \exp [r(T-t)],$$
where $T$ and $t$ are measured in years. Therefore,
$$r=\frac{1}{T-t} \ln \left(\frac{P_{T}}{P_{t}}\right)$$

Under the assumption that the price follows a geometric Brownian motion with drift $\mu$ and volatility $\sigma$, we have
$$\ln \left(\frac{P_{T}}{P_{t}}\right) \sim N\left[\left(\mu-\frac{\sigma^{2}}{2}\right)(T-t), \sigma^{2}(T-t)\right] .$$
Consequently, the distribution of the continuously compounded rate of return per annum is
$$r \sim N\left(\mu-\frac{\sigma^{2}}{2}, \frac{\sigma^{2}}{T-t}\right) .$$
The continuously compounded rate of return is, therefore, normally distributed with mean $\mu-\sigma^{2} / 2$ and standard deviation $\sigma / \sqrt{T-t}$.
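This distribution is easy to confirm by simulation: draw $\ln(P_T/P_t)$ from its stated normal law and divide by $T-t$. The parameter values below are illustrative.

```python
import random, math

random.seed(7)
mu, sigma, horizon = 0.08, 0.25, 4.0    # horizon = T - t (hypothetical)
n = 100_000

rs = []
for _ in range(n):
    # ln(P_T / P_t) ~ N((mu - sigma^2/2)(T-t), sigma^2 (T-t))
    log_ratio = random.gauss((mu - sigma**2 / 2) * horizon,
                             sigma * math.sqrt(horizon))
    rs.append(log_ratio / horizon)      # annualised cc rate of return r

mean_r = sum(rs) / n
sd_r = math.sqrt(sum((r - mean_r) ** 2 for r in rs) / (n - 1))

print(round(mean_r, 4), round(sd_r, 4))
```

Theory predicts mean $\mu-\sigma^{2}/2=0.0488$ and standard deviation $\sigma/\sqrt{T-t}=0.125$; the longer the horizon, the tighter the distribution of the annualised return.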

## STAT4025 COURSE NOTES:

$$V_{t}=-G_{t}+\frac{\partial G_{t}}{\partial P_{t}} P_{t}$$
The change in $V_{t}$ is then
$$\Delta V_{t}=-\Delta G_{t}+\frac{\partial G_{t}}{\partial P_{t}} \Delta P_{t} .$$
Substituting Eqs. (6.11) and (6.12) into Eq. (6.14), we have
$$\Delta V_{t}=\left(-\frac{\partial G_{t}}{\partial t}-\frac{1}{2} \frac{\partial^{2} G_{t}}{\partial P_{t}^{2}} \sigma^{2} P_{t}^{2}\right) \Delta t .$$

# STAT4023 Theory and Methods of Statistical Inference | Sydney Assignment Help

Recall that for bias estimation, one way to derive the estimator $\hat{\theta}_{\mathrm{J}}$ is to use the prediction at $1 / n=1 / \infty=0$ from the line defined by the two points $\left(x_{1}, y_{1}\right)=(1 / n, \widehat{\theta})$ and $\left(x_{2}, y_{2}\right)=\left(1 /(n-1), \bar{\theta}_{1}\right)$. The related variance estimation procedure is to use the points $\left(x_{1}, y_{1}\right)=(1 / n, 0)$ and $\left(x_{2}, y_{2}\right)=\left(1 /(n-1), s_{-1}^{2}\right)$, where we define
$$s_{-1}^{2}=n^{-1} \sum_{i=1}^{n}\left(\widehat{\theta}_{[i]}-\bar{\theta}_{1}\right)^{2}$$

The intercept from the line defined by these two points is $-\widehat{V}_{\mathrm{J}}$:
$$-\text {intercept}=\frac{-\left(y_{1} x_{2}-y_{2} x_{1}\right)}{x_{2}-x_{1}}=\frac{s_{-1}^{2} / n}{\frac{1}{n-1}-\frac{1}{n}}=\frac{n-1}{n} \sum_{i=1}^{n}\left(\widehat{\theta}_{[i]}-\bar{\theta}_{1}\right)^{2}=\widehat{V}_{\mathrm{J}}$$
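For the sample mean, this jackknife construction reproduces the classical estimate $s^{2}/n$ exactly, which gives a quick sanity check; the data below are hypothetical.

```python
import statistics as st

x = [2.1, 3.4, 1.9, 4.2, 2.8, 3.1, 2.5, 3.9, 2.2, 3.6]   # hypothetical sample
n = len(x)

theta = sum(x) / n                                  # theta-hat = sample mean
loo = [(sum(x) - xi) / (n - 1) for xi in x]         # leave-one-out estimates
theta_bar = sum(loo) / n

# jackknife variance estimate, as in the displayed formula
V_J = (n - 1) / n * sum((t - theta_bar) ** 2 for t in loo)

print(round(V_J, 6), round(st.variance(x) / n, 6))  # the two agree exactly
```

The agreement follows because $\widehat{\theta}_{[i]}-\bar{\theta}_{1}=-(x_{i}-\bar{x})/(n-1)$, so the sum collapses to $s^{2}/n$.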

## STAT4023 COURSE NOTES:

The generalization to $k$ samples should be clear: define pseudo-values separately in each sample and then let
$$\widehat{V}_{\mathrm{J}}=\sum_{j=1}^{k} \frac{s_{j, \mathrm{ps}}^{2}}{n_{j}}$$
Arvesen first proposed the above $\widehat{V}_{\mathrm{J}}$ for $k=2$, but otherwise it has not been discussed much in the literature. A proper appreciation and general proof requires the $k$-sample approximation by averages
$$\widehat{\theta}-\theta_{0}=\sum_{i=1}^{k} \frac{1}{n_{i}} \sum_{j=1}^{n_{i}} I C^{(i)}\left(Y_{i j}, \theta_{0}\right)+R$$
where the $i$ th partial influence curve $I C^{(i)}\left(y, \theta_{0}\right)$ is defined by
$$I C^{(i)}\left(y, \theta_{0}\right)=\left.\frac{\partial}{\partial \epsilon} T\left(F_{1}, \ldots, F_{i-1}, F_{i}+\epsilon\left(\delta_{y}-F_{i}\right), F_{i+1}, \ldots, F_{k}\right)\right|_{\epsilon=0+}$$

# STAT4022 Linear and Mixed Models | Sydney Assignment Help

and $\limsup |C|<\infty$. Furthermore, suppose that this condition is replaced by
$$\left.\Sigma^{-1 / 2} G^{-1} \frac{\partial l}{\partial \psi}\right|_{\psi_{0}} \longrightarrow N(0, I) \text { in distribution, }$$
where $\{\Sigma\}$ is a sequence of positive definite matrices such that
$$0<\liminf \lambda_{\min }(\Sigma) \leq \lim \sup \lambda_{\max }(\Sigma)<\infty$$

and $I$ is the $p$-dimensional identity matrix. Then, the asymptotic distribution of $\mathcal{W}$ is $\chi_{q-p}^{2}$.

The proofs are given in the cited reference. According to the proof, one has $G[\hat{\psi}-\psi(\hat{\phi})]=O_{P}(1)$, hence
$$\begin{aligned} \hat{\mathcal{W}} &=[\hat{\theta}-\theta(\hat{\phi})]^{\prime} G\left[Q_{w}^{-}+o_{P}(1)\right] G[\hat{\theta}-\theta(\hat{\phi})] \\ &=\mathcal{W}+o_{P}(1) \end{aligned}$$
Thus, we conclude the following.

## STAT4022 COURSE NOTES:

where $b(\cdot), a_{i}(\cdot), c_{i}(\cdot, \cdot)$ are known functions, and $\phi$ is a dispersion parameter which may or may not be known. The quantity $\xi_{i}$ is associated with the conditional mean $\mu_{i}=\mathrm{E}\left(y_{i} \mid \alpha\right)$, which, in turn, is associated with a linear predictor
$$\eta_{i}=x_{i}^{\prime} \beta+z_{i}^{\prime} \alpha,$$
where $x_{i}$ and $z_{i}$ are known vectors and $\beta$ a vector of unknown parameters (the fixed effects), through a known link function $g(\cdot)$ such that
$$g\left(\mu_{i}\right)=\eta_{i}$$
Furthermore, it is assumed that $\alpha \sim N(0, G)$, where the covariance matrix $G$ may depend on a vector $\theta$ of unknown variance components.

Note that, according to the properties of the exponential family, one has $b^{\prime}\left(\xi_{i}\right)=\mu_{i}$. In particular, under the so-called canonical link, one has
$$\xi_{i}=\eta_{i} ;$$
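As a concrete instance (not worked in the notes), take the Bernoulli family, whose canonical link is the logit: there $b(\xi)=\log(1+e^{\xi})$, so $b'(\xi)=e^{\xi}/(1+e^{\xi})=\mu_{i}$, and $g(\mu_{i})=\log(\mu_{i}/(1-\mu_{i}))=\eta_{i}$ forces $\xi_{i}=\eta_{i}$.

```python
import math

def b_prime(xi):
    # b(xi) = log(1 + e^xi) for the Bernoulli family, so b'(xi) = mu
    return math.exp(xi) / (1.0 + math.exp(xi))

def logit(mu):
    # canonical (logit) link g(mu) = log(mu / (1 - mu))
    return math.log(mu / (1.0 - mu))

eta = 0.7                      # hypothetical value of the linear predictor
mu = b_prime(eta)              # conditional mean implied by xi = eta
assert abs(logit(mu) - eta) < 1e-12   # g(mu) = eta, i.e. xi = eta
print(round(mu, 4))            # → 0.6682
```

The check `logit(b_prime(eta)) == eta` is exactly the statement that the logit link inverts $b'$, which is what "canonical" means here.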

# STAT4021 Stochastic Processes and Applications | Sydney Assignment Help

$$\hat{\Pi}_{T}(\omega)=\hat{\mathbf{H}}_{0} \cdot \mathbf{S}_{0}+\sum_{i=0}^{n} \int_{0}^{T} H_{t}^{(i)} d S_{t}^{(i)} \geq \tilde{\Pi}_{T}(\omega) \quad \forall \omega \in \Omega,$$
then, necessarily,
$$\hat{\Pi}_{t} \geq \tilde{\Pi}_{t} \quad \forall t \in[0, T],$$
and, in particular,
$$\hat{\Pi}_{0} \geq \tilde{\Pi}_{0} .$$

Otherwise there exists an arbitrage opportunity: buy the cheaper portfolio and sell the overvalued one. In fact, by this argument, the value of $\tilde{\Pi}_{0}$ has to be the solution of the constrained optimization problem
$$\tilde{\Pi}_{0}=\min_{\left(H_{t}\right)_{t \in[0, T]}} \hat{\Pi}_{0} .$$

## STAT4021 COURSE NOTES:

$$d S_{t}^{*}=S_{t}^{*}\left((\mu-r) d t+\sigma d W_{t}\right),$$
which, by Girsanov's Theorem $4.31$, shows that
$$W_{t}^{Q}=W_{t}+\frac{\mu-r}{\sigma} t$$
is a Brownian motion under the equivalent measure $Q$, so that $S^{*}$ turns into a martingale, namely
$$S_{0}^{*}=E_{Q}\left[S_{t}^{*}\right],$$
under $Q$.
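The martingale property $S_{0}^{*}=E_{Q}[S_{t}^{*}]$ can be checked by simulating the discounted price under $Q$, where $S_{t}^{*}=S_{0}^{*}\exp(-\tfrac12\sigma^{2}t+\sigma W_{t}^{Q})$. The parameter values below are illustrative assumptions.

```python
import random, math

random.seed(11)
S0_star, sigma, t = 1.0, 0.3, 2.0   # discounted price, vol, horizon (hypothetical)
n = 200_000

total = 0.0
for _ in range(n):
    # under Q, dS* = S* sigma dW^Q, i.e. S*_t is exponential Q-Brownian motion
    w_q = random.gauss(0.0, math.sqrt(t))
    total += S0_star * math.exp(-0.5 * sigma**2 * t + sigma * w_q)

print(round(total / n, 3))   # should be close to S*_0 = 1
```

The drift $(\mu-r)\,dt$ has been absorbed into $W^{Q}$, which is why $\mu$ and $r$ no longer appear in the simulation.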

# MATH4512 Stochastic Analysis | Sydney Assignment Help

the representation
$$\xi=F\left(\xi_{1}, \ldots, \xi_{m}\right)$$
with $F$ as linear combination of the different elements
$$e^{i \sum_{k=1}^{m} \lambda_{k} \xi_{k}}, \quad \xi_{k}=\mu\left(\Delta_{k}\right), \quad k=1, \ldots, m$$
Accordingly, we can see that for all these linear combinations the corresponding derivative $\mathcal{D} \xi$ is given by the limit

$$\mathcal{D} \xi=\lim_{n \rightarrow \infty} \sum \frac{1}{M(\Delta)} E\left(\xi \mu(\Delta) \mid \mathfrak{A}_{\Delta}\right) 1_{\Delta}$$
taken in $L_{2}(\Theta \times \mathbb{T} \times \Omega)$. The sum is here taken over the sets $\Delta$ of the $n^{\text {th }}$ series of the partitions of $\Theta \times \mathbb{T}$. Clearly, this limit defines the linear operator $\mathcal{D}$:
$$\operatorname{dom} \mathcal{D} \ni \xi \longmapsto \mathcal{D} \xi \in L_{2}(\Theta \times \mathbb{T} \times \Omega)$$

## MATH4512 COURSE NOTES:

Let $I^{p} \varphi_{p}, p=1,2, \ldots$, be the Itô $p$-multiple integrals with respect to a general stochastic measure $\mu$ in the scheme above. The anticipating derivative is
$$\mathcal{D} I^{p} \varphi_{p}=I^{p-1} \hat{\varphi}_{p}(\cdot, \theta, t), \quad(\theta, t) \in \Theta \times \mathbb{T}$$
with the integrands
$$\hat{\varphi}_{p}:=\sum_{j=1}^{p} \varphi_{p}(\ldots, \theta, t, \ldots)$$
depending on $(\theta, t) \in \Theta \times \mathbb{T}$ as parameter. The couple $(\theta, t)$ comes in at the place of the corresponding couples $\left(\theta_{j}, t_{j}\right), j=1, \ldots, p$. Here we have
$$|\mathcal{D} \xi|_{L_{2}}=p^{1 / 2}|\xi|, \quad p=1,2, \ldots .$$
All the elements
$$\xi=\sum_{p=0}^{\infty} \oplus \xi_{p}: \quad \xi_{0}=E \xi, \quad \xi_{p}=I^{p} \varphi_{p}, \quad p=1,2, \ldots$$
with
$$\sum_{p=1}^{\infty} p\left|\xi_{p}\right|^{2}<\infty$$
belong to the domain of $\mathcal{D}$.

# MATH4511 Arbitrage Pricing in Continuous Time | Sydney Assignment Help

This process is the predictable covariation of $X^{p}$ and $X^{q}$ and is denoted by
$$\left\langle X^{p}, X^{q}\right\rangle_{t}=\sum_{r=1}^{m} \int_{0}^{t} H_{s}^{p r} H_{s}^{q r} d s$$
We note that $\left\langle X^{p}, X^{q}\right\rangle$ is symmetric and bilinear as a function on Itô processes.
Taking
$$Y_{t}=Y_{0}+\int_{0}^{t} K_{s}^{\prime} d s$$
and
$$X_{t}=X_{0}+\int_{0}^{t} K_{s} d s+\sum_{j=1}^{m} \int_{0}^{t} H_{s}^{j} d W_{s}^{j}$$
we see $\langle X, Y\rangle_{t}=0$.

Furthermore, considering special cases, the formula gives
$$\left\langle\int_{0}^{t} H_{s}^{p i} d W_{s}^{i}, \int_{0}^{t} H_{s}^{q j} d W_{s}^{j}\right\rangle=0 \quad \text { if } \quad i \neq j$$
and
$$\left\langle\int_{0}^{t} H_{s}^{p i} d W_{s}^{i}, \int_{0}^{t} H_{s}^{q i} d W_{s}^{i}\right\rangle=\int_{0}^{t} H_{s}^{p i} H_{s}^{q i} d s .$$
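Both special cases can be checked on a discretised Brownian path. The sketch below takes constant integrands $H^{p}\equiv a$ and $H^{q}\equiv b$ (illustrative values), so the realized covariation should approach $ab\,t$ when the two integrals share a driver and $0$ when the drivers are independent.

```python
import random, math

random.seed(5)
t, m_steps = 1.0, 20_000
dt = t / m_steps
a, b = 2.0, -1.5                 # constant integrands H^p, H^q (hypothetical)

cov_same, cov_indep = 0.0, 0.0
for _ in range(m_steps):
    dW = random.gauss(0.0, math.sqrt(dt))       # common driver W^i
    dW2 = random.gauss(0.0, math.sqrt(dt))      # independent driver W^j
    cov_same += (a * dW) * (b * dW)             # -> a*b*t = -3.0
    cov_indep += (a * dW) * (b * dW2)           # -> 0

print(round(cov_same, 3), round(cov_indep, 3))
```

The products of increments of independent Brownian motions average to zero, which is the discrete shadow of the first identity; the squared increments of a common driver accumulate $dt$, giving the second.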

$$E\left|X_{t}^{n+1}-X_{t}^{n}\right|^{2} \leq L^{n} \int_{0}^{t} \frac{(t-s)^{n-1}}{(n-1) !} E\left|X_{s}^{1}-\xi\right|^{2} d s,$$
where
$$E\left|X_{s}^{1}-\xi\right|^{2} \leq L T K^{2}\left(1+E|\xi|^{2}\right),$$
so that
$$E\left|X_{t}^{n+1}-X_{t}^{n}\right|^{2} \leq C \frac{T^{n}}{n !} .$$