# Probability and Stochastic Processes | MATH3801 (UNSW) assignment help


This course is an introduction to the theory of stochastic processes. Informally, a stochastic process is a random quantity that evolves over time, such as a gambler's net fortune or the price fluctuations of a stock on a stock exchange. The main aims of this course are: 1) to provide a thorough account of basic probability theory; 2) to introduce the ideas and tools of the theory of stochastic processes; and 3) to discuss in depth important classes of stochastic processes, including Markov chains (in both discrete and continuous time), Poisson processes, Brownian motion and martingales. The course will also cover other important but less routine topics, such as Markov decision processes and some elements of queueing theory.

Suppose the claim is wrong and that
$$\mathrm{P}\left[A_{\infty}=\infty,\ \sup _{n}\left|X_{n}\right|<\infty\right]>0 .$$
Then,
$$\mathrm{P}\left[T(c)=\infty ; A_{\infty}=\infty\right]>0$$
where $T(c)$ is the stopping time
$$T(c)=\inf \left\{n:\left|X_{n}\right|>c\right\} .$$

Now
$$\mathrm{E}\left[X_{T(c) \wedge n}^{2}-A_{T(c) \wedge n}\right]=0$$
and $X^{T(c)}$ is bounded by $c+K$. Thus
$$\mathrm{E}\left[A_{T(c) \wedge n}\right] \leq(c+K)^{2}$$
for all $n$. This contradicts $\mathrm{P}\left[A_{\infty}=\infty,\ \sup _{n}\left|X_{n}\right|<\infty\right]>0$.
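To make the bound concrete, here is a small simulation sketch (my own example, not from the notes): for a simple $\pm 1$ random walk, $X_{n}^{2}-n$ is a martingale, so $A_{n}=n$ and the jump bound $K$ is $1$; stopping at $T(c)$ should then give $\mathrm{E}\left[A_{T(c) \wedge n}\right] \leq(c+K)^{2}$.

```python
# Assumed example: simple +/-1 random walk, for which X_n**2 - n is a
# martingale, so A_n = n and the jump bound K = 1.  We estimate
# E[A_{T(c) ^ n_max}] by Monte Carlo and compare with (c + K)**2.
import random

random.seed(0)
c, K, n_max, reps = 5, 1, 10_000, 2_000

total = 0.0
for _ in range(reps):
    x, t = 0, 0
    # Run until |X_t| > c or the cap n_max is hit (t = T(c) ^ n_max).
    while t < n_max and abs(x) <= c:
        x += random.choice((-1, 1))
        t += 1
    total += t          # A_{T(c) ^ n_max} = T(c) ^ n_max for this walk

mean_A = total / reps
print(mean_A, (c + K) ** 2)
```

Here the walk exits exactly at $|X|=c+1=6$, so the optional-stopping identity makes $\mathrm{E}[A_{T(c)}]$ equal to $(c+K)^{2}=36$; the bound is tight in this example.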

## MATH3801 COURSE NOTES

It is seen that the rate into the box around Node 0 is $\mu p_{1}$; the rate out of the box around Node 0 is $\lambda p_{0}$; thus, “rate in” = “rate out” yields
$$\mu p_{1}=\lambda p_{0}$$
The rate into the box around Node 1 is $\lambda p_{0}+\mu p_{2}$; the rate out of the box around Node 1 is $(\mu+\lambda) p_{1}$; thus
$$\lambda p_{0}+\mu p_{2}=(\mu+\lambda) p_{1}$$
Continuing in a similar fashion and rearranging, we obtain the system
$$\begin{aligned} p_{1} &=\frac{\lambda}{\mu} p_{0} \quad \text { and } \\ p_{n+1} &=\frac{\lambda+\mu}{\mu} p_{n}-\frac{\lambda}{\mu} p_{n-1} \quad \text { for } n=1,2, \ldots \end{aligned}$$
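As a quick numerical check (the rates $\lambda=1$, $\mu=2$ are illustrative values of my own, not from the notes), iterating this recursion reproduces the closed form $p_{n}=(\lambda / \mu)^{n} p_{0}$, which after normalising with $p_{0}=1-\lambda/\mu$ gives the geometric stationary distribution when $\lambda<\mu$:

```python
# Illustrative rates (assumed, lam < mu so a stationary distribution exists).
lam, mu = 1.0, 2.0
rho = lam / mu

# Build p_0..p_N from the recursion, starting from an unnormalised p_0 = 1.
N = 20
p = [1.0, lam / mu]                      # p_1 = (lam/mu) * p_0
for n in range(1, N):
    p.append((lam + mu) / mu * p[n] - lam / mu * p[n - 1])

# The recursion should agree with the closed form p_n = rho**n * p_0.
for n, pn in enumerate(p):
    assert abs(pn - rho ** n) < 1e-9

# Normalising with p_0 = 1 - rho gives the stationary distribution.
p0 = 1 - rho
stationary = [p0 * rho ** n for n in range(N + 1)]
print(sum(stationary))                   # close to 1 for large N
```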

# Probability and Stochastic Processes STAT4061


Let $Z$ have the standard normal distribution, and let $V$ be 0 or 1 with probabilities $1-p$ and $p$, independent of $Z$. Then
$$\varepsilon=\left[\sigma_{1}(1-V)+\sigma_{2} V\right] Z= \begin{cases}\sigma_{1} Z, & \text { if } V=0 \\ \sigma_{2} Z, & \text { if } V=1\end{cases}$$
has the “contaminated normal distribution,” with c.d.f.
$$F(x)=(1-p) \Phi\left(x / \sigma_{1}\right)+p \Phi\left(x / \sigma_{2}\right)$$
$F$ has mean $E(\varepsilon)=0$ and variance $\operatorname{Var}(\varepsilon)=(1-p) \sigma_{1}^{2}+p \sigma_{2}^{2}$. The density of $\varepsilon$ is plotted.

Now consider the simple linear regression $Y_{i}=\beta_{0}+\beta_{1} x_{i}+\varepsilon_{i}$ for $i=1, \ldots, n$, where the $\varepsilon_{i}$'s are a random sample from $F$ above. Take $n=10$, $x_{i}=i$ for $i=1, \ldots, 9$ and $x_{10}=30$. Take $\sigma_{1}=2$, $\sigma_{2}=20$, $p=0.2$.
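With these values, a quick simulation sketch (my own check, not part of the notes) confirms the moment formulas above: $\operatorname{Var}(\varepsilon)=(1-p) \sigma_{1}^{2}+p \sigma_{2}^{2}=0.8 \cdot 4+0.2 \cdot 400=83.2$.

```python
# Sample the contaminated normal errors with the parameters from the text
# (p = 0.2, sigma_1 = 2, sigma_2 = 20) and check the mean and variance.
import random

random.seed(1)
p, s1, s2 = 0.2, 2.0, 20.0
n = 200_000

samples = []
for _ in range(n):
    z = random.gauss(0.0, 1.0)
    v = 1 if random.random() < p else 0          # V ~ Bernoulli(p)
    samples.append((s1 * (1 - v) + s2 * v) * z)  # epsilon

mean = sum(samples) / n
var = sum(x * x for x in samples) / n            # E(epsilon) = 0, so E(eps^2)
print(mean, var)   # near 0 and (1-p)*s1**2 + p*s2**2 = 83.2
```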

Thus, $d_{10,10}$ as defined in Eicher’s Theorem is $\frac{1}{10}+\frac{(30-7.5)^{2}}{622.5}=0.913$.
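The arithmetic behind $d_{10,10}$ can be checked directly; $\bar{x}=7.5$ and $S_{x x}=622.5$ follow from the design above.

```python
# Recompute the leverage d_{10,10} = 1/n + (x_10 - xbar)**2 / Sxx
# for the design in the text (x_i = i for i <= 9, x_10 = 30).
x = list(range(1, 10)) + [30]
n = len(x)

xbar = sum(x) / n                          # 7.5
sxx = sum((xi - xbar) ** 2 for xi in x)    # 622.5
d = 1 / n + (x[-1] - xbar) ** 2 / sxx

print(round(d, 3))                         # 0.913
```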

## STAT4061 COURSE NOTES

$$e_{i}=e_{i}\left(b_{0}, b_{1}\right)=y_{i}-\left(b_{0}+b_{1} x_{i}\right) \quad \text { and } \quad Q\left(b_{0}, b_{1}\right)=\sum \rho\left(e_{i}\left(b_{0}, b_{1}\right)\right)$$
Let
$$Q^{0}\left(b_{0}, b_{1}\right)=\frac{\partial}{\partial b_{0}} Q\left(b_{0}, b_{1}\right)=-\sum \psi\left(e_{i}\right)$$
and
$$Q^{1}\left(b_{0}, b_{1}\right)=\frac{\partial}{\partial b_{1}} Q\left(b_{0}, b_{1}\right)=-\sum \psi\left(e_{i}\right) x_{i},$$
where $\psi=\rho^{\prime}$; the minus signs come from $\partial e_{i} / \partial b_{0}=-1$ and $\partial e_{i} / \partial b_{1}=-x_{i}$.
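Setting $Q^{0}=Q^{1}=0$ gives the M-estimation equations $\sum \psi\left(e_{i}\right)=0$ and $\sum \psi\left(e_{i}\right) x_{i}=0$. A minimal iteratively-reweighted-least-squares sketch (my own illustration: the Huber $\psi$, its tuning constant $1.345$, and the toy data are assumptions, not from the notes) solves them by rewriting each equation with weights $w_{i}=\psi\left(e_{i}\right) / e_{i}$:

```python
# IRLS sketch for the estimating equations sum psi(e_i) = 0 and
# sum psi(e_i) x_i = 0, using the Huber psi (assumed choice of rho').
def huber_psi(e, c=1.345):          # c = 1.345 is a common default (assumed)
    return max(-c, min(c, e))

def m_estimate(x, y, iters=50):
    b0, b1 = 0.0, 0.0
    for _ in range(iters):
        # Weights w_i = psi(e_i)/e_i turn the psi equations into the
        # normal equations of a weighted least-squares problem.
        w = []
        for xi, yi in zip(x, y):
            e = yi - (b0 + b1 * xi)
            w.append(1.0 if abs(e) < 1e-12 else huber_psi(e) / e)
        sw = sum(w)
        swx = sum(wi * xi for wi, xi in zip(w, x))
        swy = sum(wi * yi for wi, yi in zip(w, y))
        swxx = sum(wi * xi * xi for wi, xi in zip(w, x))
        swxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
        det = sw * swxx - swx ** 2
        b0 = (swxx * swy - swx * swxy) / det
        b1 = (sw * swxy - swx * swy) / det
    return b0, b1

# Toy data: a clean line y = 1 + 2x with one gross outlier.
xs = list(range(10))
ys = [1 + 2 * xi for xi in xs]
ys[5] += 50
b0, b1 = m_estimate(xs, ys)
print(b0, b1)   # roughly 1 and 2 despite the outlier
```

At the fixed point, the weighted normal equations reduce exactly to $\sum \psi\left(e_{i}\right)=0$ and $\sum \psi\left(e_{i}\right) x_{i}=0$, which is why the reweighting trick works.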