Theory and Methods of Statistical Inference | STAT4023 Theory and Methods of Statistical Inference assignment help (University of Sydney)

This is a successful case study for STAT4023 at the University of Sydney.



Problem 1.

Recall that for bias estimation, one way to derive the estimator $\hat{\theta}_{\mathrm{J}}$ is to use the prediction at $1/n = 1/\infty = 0$ from the line defined by the two points $\left(x_{1}, y_{1}\right)=(1/n, \widehat{\theta})$ and $\left(x_{2}, y_{2}\right)=\left(1/(n-1), \bar{\theta}_{1}\right)$, where $\widehat{\theta}_{[i]}$ denotes the estimate computed with the $i$th observation deleted and $\bar{\theta}_{1}$ is the average of these delete-one estimates. The related variance estimation procedure is to use the points $\left(x_{1}, y_{1}\right)=(1/n, 0)$ and $\left(x_{2}, y_{2}\right)=\left(1/(n-1), s_{-1}^{2}\right)$, where we define
$$
s_{-1}^{2}=n^{-1} \sum_{i=1}^{n}\left(\widehat{\theta}_{[i]}-\bar{\theta}_{1}\right)^{2} .
$$
Show that the intercept of the line through these two variance points is $-\widehat{V}_{\mathrm{J}}$, the negative of the jackknife variance estimate.
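As an aside (not part of the original problem), here is a minimal NumPy sketch of the two-point extrapolation that defines $\hat{\theta}_{\mathrm{J}}$, assuming the statistic is the plug-in (divide-by-$n$) sample variance; all variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(size=30)                    # toy sample
n = len(y)

def theta(x):
    """Plug-in (divide-by-n) variance: a statistic with O(1/n) bias."""
    return np.mean((x - np.mean(x)) ** 2)

theta_hat = theta(y)                                               # full-sample estimate
theta_loo = np.array([theta(np.delete(y, i)) for i in range(n)])   # delete-one estimates theta_[i]
theta_bar1 = theta_loo.mean()                                      # bar{theta}_1

# Line through (1/n, theta_hat) and (1/(n-1), theta_bar1), extrapolated to x = 0:
slope = (theta_bar1 - theta_hat) / (1 / (n - 1) - 1 / n)
theta_J = theta_hat - slope / n            # intercept of the line = jackknife estimator

# Equivalent closed form: theta_J = n * theta_hat - (n - 1) * theta_bar1
print(theta_J, n * theta_hat - (n - 1) * theta_bar1)
```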

Proof.

The intercept of the line through $\left(x_{1}, y_{1}\right)$ and $\left(x_{2}, y_{2}\right)$ is $\left(y_{1} x_{2}-y_{2} x_{1}\right) /\left(x_{2}-x_{1}\right)$. With $y_{1}=0$, $y_{2}=s_{-1}^{2}$, and $\frac{1}{n-1}-\frac{1}{n}=\frac{1}{n(n-1)}$,
$$
-\text { intercept }=\frac{-\left(y_{1} x_{2}-y_{2} x_{1}\right)}{x_{2}-x_{1}}=\frac{s_{-1}^{2} / n}{\frac{1}{n-1}-\frac{1}{n}}=(n-1)\, s_{-1}^{2}=\frac{n-1}{n} \sum_{i=1}^{n}\left(\widehat{\theta}_{[i]}-\bar{\theta}_{1}\right)^{2}=\widehat{V}_{\mathrm{J}},
$$
so the intercept from the line defined by these two points is $-\widehat{V}_{\mathrm{J}}$, as required.
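A quick numerical check of this identity (an illustrative sketch, not part of the proof): using the sample median as the statistic, the negated intercept of the line through $(1/n, 0)$ and $(1/(n-1), s_{-1}^{2})$ matches the closed-form $\widehat{V}_{\mathrm{J}}$ up to rounding error.

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(size=25)
n = len(y)

stat = np.median                                                   # any statistic will do here
theta_loo = np.array([stat(np.delete(y, i)) for i in range(n)])    # theta_[i]
theta_bar1 = theta_loo.mean()                                      # bar{theta}_1
s2_m1 = np.mean((theta_loo - theta_bar1) ** 2)                     # s_{-1}^2

# Intercept of the line through (x1, y1) = (1/n, 0) and (x2, y2) = (1/(n-1), s_{-1}^2):
x1, y1, x2, y2 = 1 / n, 0.0, 1 / (n - 1), s2_m1
intercept = (y1 * x2 - y2 * x1) / (x2 - x1)

V_J = (n - 1) / n * np.sum((theta_loo - theta_bar1) ** 2)          # jackknife variance estimate
print(-intercept, V_J)                                             # agree up to floating point
```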






STAT4023 COURSE NOTES:

The generalization to $k$ samples should be clear: define pseudo-values separately in each sample and then let
$$
\widehat{V}_{\mathrm{J}}=\sum_{j=1}^{k} \frac{s_{j, \mathrm{ps}}^{2}}{n_{j}} .
$$
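A minimal sketch of how this sum could be computed, assuming $k=2$ samples, the difference-of-means statistic, and the usual conventions that pseudo-values in sample $j$ are $n_{j}\widehat{\theta}-(n_{j}-1)\widehat{\theta}_{[ji]}$ and that $s_{j,\mathrm{ps}}^{2}$ is their sample variance with divisor $n_{j}-1$; these conventions are assumptions, not stated in the excerpt above.

```python
import numpy as np

rng = np.random.default_rng(2)
samples = [rng.normal(0.0, 1.0, size=20), rng.normal(0.5, 2.0, size=30)]   # k = 2 samples

def T(ys):
    """Difference of means: theta = E[Y_1] - E[Y_2]."""
    return np.mean(ys[0]) - np.mean(ys[1])

theta_hat = T(samples)
V_J = 0.0
for j, yj in enumerate(samples):
    nj = len(yj)
    # Delete-one estimates within sample j (the other samples are left untouched).
    loo = np.array([T([np.delete(y, i) if m == j else y for m, y in enumerate(samples)])
                    for i in range(nj)])
    pseudo = nj * theta_hat - (nj - 1) * loo     # pseudo-values for sample j
    V_J += pseudo.var(ddof=1) / nj               # s_{j,ps}^2 / n_j

print(theta_hat, V_J)   # V_J should be close to var(Y_1)/n_1 + var(Y_2)/n_2
```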
Arvesen first proposed the above $\widehat{V}_{\mathrm{J}}$ for $k=2$, but otherwise it has not been discussed much in the literature. A proper appreciation and general proof require the $k$-sample approximation by averages
$$
\widehat{\theta}-\theta_{0}=\sum_{i=1}^{k} \frac{1}{n_{i}} \sum_{j=1}^{n_{i}} IC^{(i)}\left(Y_{i j}, \theta_{0}\right)+R,
$$
where the $i$th partial influence curve $IC^{(i)}\left(y, \theta_{0}\right)$ is defined by
$$
IC^{(i)}\left(y, \theta_{0}\right)=\left.\frac{\partial}{\partial \epsilon} T\left(F_{1}, \ldots, F_{i-1}, F_{i}+\epsilon\left(\delta_{y}-F_{i}\right), F_{i+1}, \ldots, F_{k}\right)\right|_{\epsilon=0+} .
$$
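As a worked example (not taken from the notes), consider the two-sample difference of means $T\left(F_{1}, F_{2}\right)=\int y \, dF_{1}(y)-\int y \, dF_{2}(y)$ with $\theta_{0}=\mu_{1}-\mu_{2}$. Substituting the contaminated first argument gives
$$
T\left(F_{1}+\epsilon\left(\delta_{y}-F_{1}\right), F_{2}\right)=\mu_{1}+\epsilon\left(y-\mu_{1}\right)-\mu_{2}, \quad \text { so } \quad IC^{(1)}\left(y, \theta_{0}\right)=y-\mu_{1}, \qquad IC^{(2)}\left(y, \theta_{0}\right)=-\left(y-\mu_{2}\right),
$$
and the approximation by averages holds exactly in this case: $\widehat{\theta}-\theta_{0}=\left(\bar{Y}_{1}-\mu_{1}\right)-\left(\bar{Y}_{2}-\mu_{2}\right)$ with remainder $R=0$.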



















