Note that this allows dependent variables to be influenced not only by independent variables, as in regression and linear models in general, but also by other dependent variables. Let us denote
$\boldsymbol{B}=\left(\begin{array}{cc}\boldsymbol{L} & \mathbf{0} \\ \mathbf{0} & \mathbf{0}\end{array}\right)$, $\boldsymbol{\Gamma}=\left(\begin{array}{c}\boldsymbol{M} \\ \boldsymbol{I}\end{array}\right)$, and $\boldsymbol{v}=\left(\begin{array}{c}\boldsymbol{\eta} \\ \boldsymbol{\xi}\end{array}\right)$, where $\boldsymbol{I}$ is the identity matrix of an appropriate order. Then (1) can be expressed in an alternative form
$$
\boldsymbol{v}=\boldsymbol{B} \boldsymbol{v}+\boldsymbol{\Gamma} \boldsymbol{\xi}
$$
Now assume that $\boldsymbol{I}-\boldsymbol{B}$ is non-singular, so that $(\boldsymbol{I}-\boldsymbol{B})^{-1}$ exists; then
$$
\boldsymbol{v}=(\boldsymbol{I}-\boldsymbol{B})^{-1} \boldsymbol{\Gamma} \boldsymbol{\xi}
$$
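As a quick numerical check, the stacking and the reduced form can be verified directly. The following is a minimal sketch in Python; the matrices $\boldsymbol{L}$ and $\boldsymbol{M}$ and the vector $\boldsymbol{\xi}$ are arbitrary illustrative values, not quantities from these notes.

```python
import numpy as np

# Illustrative (made-up) coefficient matrices: L links the dependent
# variables eta to each other, M links eta to the independent variables xi.
L = np.array([[0.0, 0.5],
              [0.0, 0.0]])   # 2 dependent variables
M = np.array([[0.3],
              [0.7]])        # 1 independent variable
xi = np.array([1.0])

# Stack into B, Gamma, and v exactly as defined above.
B = np.block([[L,                np.zeros((2, 1))],
              [np.zeros((1, 2)), np.zeros((1, 1))]])
Gamma = np.vstack([M, np.eye(1)])

# Reduced form v = (I - B)^{-1} Gamma xi, valid since I - B is non-singular here.
v = np.linalg.solve(np.eye(3) - B, Gamma @ xi)

# v solves the structural form v = B v + Gamma xi, and its last entry is xi itself.
assert np.allclose(v, B @ v + Gamma @ xi)
print(v)  # [0.65, 0.7, 1.0]: the first two entries are eta, the last is xi
```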
To test a null hypothesis of the form
$$
H_{0}: \mathbf{c}(\boldsymbol{\beta})=\mathbf{0},
$$
the likelihood-ratio statistic is the obvious choice. This requires estimating $\boldsymbol{\beta}$ subject to the restrictions of the null hypothesis; for example, subject to the exclusion restrictions of a null hypothesis stating that certain variables have zero coefficients, that is, that they do not appear in the model. The likelihood-ratio statistic is then
$$
\chi^{2}[J]=2\left(\log L-\log L_{0}\right),
$$
where $L$ and $L_{0}$ denote the maximized likelihoods of the unrestricted and restricted models, respectively, and $J$ is the number of restrictions imposed by $H_{0}$.
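In practice the two maximized log-likelihoods come from fitting the unrestricted and restricted models; that fitting step is not shown here. The following sketch (with a hypothetical helper `likelihood_ratio_test` and made-up log-likelihood values) computes the statistic and refers it to its asymptotic $\chi^{2}[J]$ distribution.

```python
from scipy.stats import chi2

def likelihood_ratio_test(loglik, loglik0, J):
    """LR statistic 2(log L - log L0), referred to a chi-square
    distribution with J degrees of freedom (J = number of restrictions)."""
    stat = 2.0 * (loglik - loglik0)
    p_value = chi2.sf(stat, df=J)
    return stat, p_value

# Made-up maximized log-likelihoods, for illustration only:
stat, p = likelihood_ratio_test(loglik=-120.4, loglik0=-124.9, J=2)
print(f"chi2[2] = {stat:.2f}, p = {p:.4f}")  # chi2[2] = 9.00, p = 0.0111
```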
Information-criterion-based fit indices: The goodness of fit of several different models can be compared with the information criteria AIC (e.g., Akaike, 1974, 1987), CAIC (Bozdogan, 1987), and BIC (Schwarz, 1978), defined as follows:
$$
\begin{aligned}
\mathrm{AIC} &= T_{\mathrm{ML}}+2 q, \\
\mathrm{CAIC} &= T_{\mathrm{ML}}+(1+\log n) q, \\
\mathrm{BIC} &= T_{\mathrm{ML}}+(\log n) q,
\end{aligned}
$$
respectively, where $q$ is the number of parameters (either $q_{1}$ or $q_{2}$, depending on the model). Among models that make sense theoretically, the one with the smallest value of the information criterion may be chosen.
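As a minimal sketch of such a comparison (the fit statistics $T_{\mathrm{ML}}$, parameter counts $q$, and sample size $n$ below are made up, and the helper `information_criteria` is hypothetical):

```python
import math

def information_criteria(T_ML, q, n):
    """AIC, CAIC, and BIC as defined above, from the fit statistic T_ML,
    the number of free parameters q, and the sample size n."""
    return {
        "AIC":  T_ML + 2 * q,
        "CAIC": T_ML + (1 + math.log(n)) * q,
        "BIC":  T_ML + math.log(n) * q,
    }

# Two hypothetical competing models fitted to the same data (n = 200):
model_a = information_criteria(T_ML=35.2, q=12, n=200)
model_b = information_criteria(T_ML=28.9, q=16, n=200)  # better fit, more parameters

for name in ("AIC", "CAIC", "BIC"):
    best = "A" if model_a[name] < model_b[name] else "B"
    print(f"{name}: A = {model_a[name]:.1f}, B = {model_b[name]:.1f} -> prefer model {best}")
```

Note that CAIC and BIC penalize each extra parameter by roughly $\log n$ rather than 2, so for moderate to large $n$ they favour more parsimonious models than AIC does.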