The general linear Gaussian state-space model can be written in many different ways. The form considered in JD+ 3.0 is presented below.
$$ y_t = Z_t \alpha_t + \epsilon_t, \quad \epsilon_t \sim N(0, \sigma^2 H_t), \quad t > 0 $$
$$ \alpha_{t+1} = T_t \alpha_t + \mu_t, \quad \mu_t \sim N(0, \sigma^2 V_t), \quad t \geq 0 $$
$y_t$ is the observation at period $t$ and $\alpha_t$ is the state vector. The disturbances $\epsilon_t$ and $\mu_t$ are assumed to be serially independent at all leads and lags and independent from each other.
In the case of multi-variate models, $y_t$ is a vector of observations. However, in most cases we will use the univariate approach by considering the observations one by one (univariate handling of multi-variate models).
The innovations of the state equation will be modelled as
$$ \mu_t = S_t \xi_t, \quad \xi_t \sim N(0, \sigma^2 I) $$
In other words, $V_t = S_t S_t'$.
The initial ($t = 0$) conditions of the filter are defined as follows:
$$ \alpha_0 = a_0 + B \delta + \mu_0, \quad \delta \sim N(0, \kappa I), \quad \mu_0 \sim N(0, P_*) $$
where $\kappa$ is arbitrarily large. $P_*$ is the variance of the stationary part of the initial state vector and $B$ models the diffuse part. We write $B B' = P_\infty$.
The definition used in JD+ is nearly identical to that of Durbin and Koopman [1].
In summary, the model is completely defined by the following quantities (possible default values are indicated in brackets):

- $Z_t$, $H_t \; [= 0]$
- $T_t$, $V_t \; [= S_t S_t']$, $S_t \; [= \mathrm{Cholesky}(V_t)]$
- $a_0 \; [= 0]$, $P_* \; [= 0]$, $B \; [= 0]$, $P_\infty \; [= B B']$
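To make the role of these quantities concrete, the sketch below runs a univariate Kalman filter on the simplest special case of the model, a scalar local level with $Z_t = T_t = 1$. It is only a hedged illustration of the equations above, not the JD+ implementation; the function name and its arguments are invented for the example.

```python
# Minimal univariate Kalman filter for the special case Z_t = T_t = 1
# (scalar local level). Illustrative sketch only -- not the JD+ code.
def kalman_filter(ys, h, v, a0=0.0, p0=1e6):
    """ys: observations; h: measurement variance H; v: state variance V;
    a0, p0: initial state mean/variance (large p0 mimics a diffuse prior)."""
    a, p = a0, p0
    filtered = []
    for y in ys:
        # measurement update: prediction error y - a, its variance f, gain k
        f = p + h
        k = p / f
        a, p = a + k * (y - a), p - k * p
        filtered.append((a, p))
        # time update from the state equation alpha_{t+1} = alpha_t + mu_t
        p = p + v
    return filtered

# with H = 0 the filter tracks the observations exactly
print(kalman_filter([1.0, 2.0, 1.5], h=0.0, v=1.0)[-1][0])  # -> 1.5
```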
The auto-regressive block is defined by
$$ \Phi(B) y_t = \epsilon_t $$
where
$$ \Phi(B) = 1 + \varphi_1 B + \cdots + \varphi_p B^p $$
is an auto-regressive polynomial.
Let $\gamma_i$ be the autocovariances of the process. Using those notations, the state-space block can be written as follows:
$$ \alpha_t= \begin{pmatrix} y_t \\
y_{t-1} \\ \vdots \\ y_{t-p+1} \end{pmatrix}$$
The state block can be extended with additional lags; that can be useful in complex (multi-variate) models.
$$ T_t = \begin{pmatrix}-\varphi_1 & \cdots & \cdots & -\varphi_p \\ 1 & \cdots & \cdots & 0 \\ \vdots & \ddots & \ddots & \vdots\\ 0 & 0 & 1 & 0 \end{pmatrix}$$
$$ S_t = \sigma_{ar} \begin{pmatrix} 1 \\ 0 \\ \vdots\\ 0 \end{pmatrix} $$
$$ V_t = S_t S_t' $$
$$ Z_t = \begin{pmatrix} 1 & 0 & \cdots & 0\end{pmatrix}$$
$$ \alpha_{-1} = \begin{pmatrix}0 \\ 0 \\ \vdots\\ 0 \end{pmatrix} $$
$$ P_* = \Omega $$
$\Omega$ is the unconditional covariance of the state array; it is computed by means of the auto-covariance function of the model:
$$ \Omega = \begin{pmatrix}\gamma_0 & \gamma_1 & \cdots & \gamma_p \\ \gamma_1 & \gamma_0 & \ddots & \vdots \\ \vdots & \ddots & \ddots & \gamma_1\\ \gamma_p & \cdots & \gamma_1 & \gamma_0 \end{pmatrix}$$
The “ar” block is defined by specifying the coefficients $\phi_i$ of the AR polynomial and the innovation variance. More exactly, they correspond to the equation
$$ y_t = \phi_1 y_{t-1} + \phi_2 y_{t-2} + \cdots + \phi_p y_{t-p} + \epsilon_t $$
The coefficients and/or the variance can be fixed.
b_ar<-ar("ar", c(.7,-.4, .2), nlags=5, variance=1)
cat("T\n")
#> T
knit_print(block_t(b_ar))
#> [,1] [,2] [,3] [,4] [,5]
#> [1,] 0.7 -0.4 0.2 0 0
#> [2,] 1.0 0.0 0.0 0 0
#> [3,] 0.0 1.0 0.0 0 0
#> [4,] 0.0 0.0 1.0 0 0
#> [5,] 0.0 0.0 0.0 1 0
cat("\nP0\n")
#>
#> P0
knit_print(block_p0(b_ar))
#> [,1] [,2] [,3] [,4] [,5]
#> [1,] 1.51552795 0.77018634 0.08695652 0.05590062 0.15838509
#> [2,] 0.77018634 1.51552795 0.77018634 0.08695652 0.05590062
#> [3,] 0.08695652 0.77018634 1.51552795 0.77018634 0.08695652
#> [4,] 0.05590062 0.08695652 0.77018634 1.51552795 0.77018634
#> [5,] 0.15838509 0.05590062 0.08695652 0.77018634 1.51552795
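The printed matrices can be connected back to the formulas: the sketch below rebuilds the companion matrix $T$ and the autocovariances $\gamma_i$ for the same coefficients $(.7, -.4, .2)$ by truncating the MA($\infty$) representation, so that $P_0$ is the Toeplitz matrix built from the $\gamma_i$. This is a hedged illustration of the formulas above, not the package's algorithm; the function names are invented for the example.

```python
# Hedged illustration: rebuild T and the autocovariances gamma_i behind P0
# for the ar block with coefficients (.7, -.4, .2). Not the package code.
def companion(coef, r):
    # first row carries the AR coefficients (zero-padded to size r),
    # the sub-diagonal shifts the state vector
    T = [[0.0] * r for _ in range(r)]
    for j, c in enumerate(coef):
        T[0][j] = c
    for i in range(1, r):
        T[i][i - 1] = 1.0
    return T

def ar_autocovariances(coef, lags, n=2000):
    # psi-weights of the MA(infinity) representation:
    # psi_0 = 1, psi_j = sum_k coef_k * psi_{j-k}
    psi = [1.0]
    for j in range(1, n):
        psi.append(sum(c * psi[j - 1 - k]
                       for k, c in enumerate(coef) if j - 1 - k >= 0))
    # gamma_k = sigma^2 * sum_j psi_j * psi_{j+k} (unit variance here)
    return [sum(psi[j] * psi[j + k] for j in range(n - k)) for k in range(lags)]

T = companion([0.7, -0.4, 0.2], 5)
gamma = ar_autocovariances([0.7, -0.4, 0.2], 5)
# gamma[0] matches the printed P0[1,1] (~1.51552795); the full P0 is the
# Toeplitz matrix with gamma[|i-j|] in cell (i, j)
```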
An alternative representation of the auto-regressive block is very useful when the model has to reflect expectations. The process is defined as above:
$$ \Phi(B) y_t = \epsilon_t $$
where
$$ \Phi(B) = 1 + \varphi_1 B + \cdots + \varphi_p B^p $$
is an auto-regressive polynomial. However, modelling data that refer to expectations may require including conditional expectations in the state vector. Thus, the same type of representation that is used for the ARMA model will be considered here.
Let $\gamma_i$ be the autocovariances of the model. We define the size of the core state vector as $r_0 = \max(p, h+1)$, where $h$ is the forecast horizon desired by the user. If the user also needs `nlags` lagged values (zero by default), the size of the state vector becomes $r = r_0 + \mathit{nlags}$.
Using those notations, the state-space model can be written as follows:
$$ \alpha_t= \begin{pmatrix} y_{t-nlags} \\ \vdots \\ y_{t-1} \\ \hline y_{t} \\ y_{t+1|t} \\ \vdots \\ y_{t+h|t} \end{pmatrix}$$
where $y_{t+i|t}$ is the orthogonal projection of $y_{t+i}$ on the subspace generated by $\{y_s : s \leq t\}$. Thus, it is the forecast function with respect to the semi-infinite sample. We also have that $y_{t+i|t} = \sum_{j=i}^\infty {\psi_j \epsilon_{t+i-j}}$
$$ T_t = \begin{pmatrix} 0 &1 & 0 & \cdots & 0 \\0& 0 & 1 & \cdots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & 0 & \cdots & 1\\ -\varphi_r & \cdots & \cdots & \cdots &-\varphi_1 \end{pmatrix}$$
with $\varphi_j = 0$ for $j > p$ and where $s = r_0 - 1$
$$ S_t = \sigma_{ar} \begin{pmatrix} 0 \\ \vdots \\ 0\\ \hline 1 \\ \psi_1 \\ \vdots\\ \psi_s \end{pmatrix} $$
$$ V_t = S_t S_t' $$
$$ Z_t = \begin{pmatrix} 0 & \cdots &0 & | & 1 & 0 & \cdots & 0\end{pmatrix}$$
$$ \alpha_{-1} = \begin{pmatrix} 0 \\ \vdots \\ 0\\ \hline 0 \\ 0 \\ \vdots\\ 0 \end{pmatrix} $$
$$ P_* = \Omega $$
$\Omega$ is the unconditional covariance of the state array; it can be easily derived using the MA representation. We have:
$$ \Omega(i, 0) = \gamma_i $$
$$ \Omega(i, j) = \Omega(i-1, j-1) - \psi_{i-1} \psi_{j-1} $$
The arma block is defined by
$$ \Phi(B) y_t = \Theta(B) \epsilon_t $$
where
$$ \Phi(B) = 1 + \varphi_1 B + \cdots + \varphi_p B^p $$
$$ \Theta(B) = 1 + \theta_1 B + \cdots + \theta_q B^q $$
are the auto-regressive and the moving average polynomials.
The MA representation of the process is $y_t=\sum_{i=0}^\infty {\psi_i \epsilon_{t-i}}$. Let $\gamma_i$ be the autocovariances of the model. We also define $r = \max(p, q+1)$ and $s = r - 1$.
Using those notations, the state-space block can be written as follows:
$$ \alpha_t= \begin{pmatrix} y_t \\ y_{t+1|t} \\ \vdots \\ y_{t+s|t} \end{pmatrix}$$
where $y_{t+i|t}$ is the orthogonal projection of $y_{t+i}$ on the subspace generated by $\{y_s : s \leq t\}$. Thus, it is the forecast function with respect to the semi-infinite sample. We also have that $y_{t+i|t} = \sum_{j=i}^\infty {\psi_j \epsilon_{t+i-j}}$
$$ T_t = \begin{pmatrix}0 &1 & 0 & \cdots & 0 \\0& 0 & 1 & \cdots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & 0 & \cdots & 1\\ -\varphi_r & \cdots & \cdots & \cdots &-\varphi_1 \end{pmatrix}$$
with $\varphi_j = 0$ for $j > p$
$$ S_t = \begin{pmatrix}1 \\ \psi_1 \\ \vdots\\ \psi_s \end{pmatrix} $$
$$ V_t = S_t S_t' $$
$$ Z_t = \begin{pmatrix} 1 & 0 & \cdots & 0\end{pmatrix}$$
$$ \alpha_{-1} = \begin{pmatrix}0 \\ 0 \\ \vdots\\ 0 \end{pmatrix} $$
$$ P_* = \Omega $$
$\Omega$ is the unconditional covariance of the state array; it can be easily derived using the MA representation. We have:
$$ \Omega(i, 0) = \gamma_i $$
$$ \Omega(i, j) = \Omega(i-1, j-1) - \psi_{i-1} \psi_{j-1} $$
b_arma<-arma("arma", ar=c(-.2, .4, -.1), ma=c(.3, .6))
knit_print(block_t(b_arma))
#> [,1] [,2] [,3]
#> [1,] 0.0 1.0 0.0
#> [2,] 0.0 0.0 1.0
#> [3,] 0.1 -0.4 0.2
knit_print(block_p0(b_arma))
#> [,1] [,2] [,3]
#> [1,] 1.3501359 0.6394319 0.2517752
#> [2,] 0.6394319 0.3501359 0.1394319
#> [3,] 0.2517752 0.1394319 0.1001359
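The printed P0 can be reproduced from the $\psi$-weights together with the recursion $\Omega(i,j) = \Omega(i-1,j-1) - \psi_{i-1}\psi_{j-1}$ and $\Omega(i,0) = \gamma_i$. The sketch below is a hedged illustration under the stated conventions, not the package code; note that the supplied `ar` coefficients are those of $\Phi(B) = 1 + \varphi_1 B + \cdots$, so the difference-equation form uses their opposites.

```python
# Hedged illustration: rebuild P0 of the arma block from the psi-weights and
# the recursion Omega(i,j) = Omega(i-1,j-1) - psi_{i-1}*psi_{j-1}.
def arma_p0(phi, theta, r, n=2000):
    a = [-c for c in phi]        # y_t = a_1 y_{t-1} + ... + eps_t + theta_1 eps_{t-1} + ...
    th = [1.0] + list(theta)     # theta_0 = 1
    psi = [1.0]
    for j in range(1, n):
        v = th[j] if j < len(th) else 0.0
        v += sum(a[k] * psi[j - 1 - k] for k in range(len(a)) if j - 1 - k >= 0)
        psi.append(v)
    # gamma_k by truncating gamma_k = sum_j psi_j * psi_{j+k} (unit variance)
    gamma = [sum(psi[j] * psi[j + k] for j in range(n - k)) for k in range(r)]
    om = [[0.0] * r for _ in range(r)]
    for i in range(r):
        om[i][0] = om[0][i] = gamma[i]                 # Omega(i, 0) = gamma_i
    for i in range(1, r):
        for j in range(1, r):
            om[i][j] = om[i - 1][j - 1] - psi[i - 1] * psi[j - 1]
    return om

p0 = arma_p0([-0.2, 0.4, -0.1], [0.3, 0.6], 3)
# p0[0][0] ~ 1.3501359 and p0[1][1] ~ 0.3501359, matching the output above
```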