Computational Statistics and Data Analysis 53 (2009) 1974–1992
www.elsevier.com/locate/csda

Efficient importance sampling for ML estimation of SCD models

L. Bauwens a,∗, F. Galli b

a CORE and Department of Economics, Université catholique de Louvain, Belgium
b Swiss Finance Institute at University of Lugano, Switzerland

Available online 10 March 2008
Abstract

The evaluation of the likelihood function of the stochastic conditional duration (SCD) model requires the computation of an integral whose dimension is the sample size. ML estimation based on the efficient importance sampling (EIS) method is developed for computing this integral and compared with QML estimation based on the Kalman filter. In Monte Carlo experiments, EIS-ML estimation is found to be statistically more precise, at an acceptable cost in computing speed. The method is illustrated with real data and is shown to be easily applicable to extensions of the SCD model.

© 2008 Elsevier B.V. All rights reserved.
1. Introduction

The stochastic conditional duration (SCD) model is a dynamic duration model that was proposed by Bauwens and Veredas (2004) as an alternative to the autoregressive conditional duration model of Engle and Russell (1998). These models are typically applied to analyze the dynamics of the intra-daily trading activity of financial markets. The SCD model uses two unobservable stochastic components for one observable variable (the duration), implying that one of the stochastic terms must be integrated out over the whole sample to compute the likelihood function. Because the variables to be integrated enter the model nonlinearly, this high-dimensional integral must be computed by simulation. The same problem arises for the stochastic volatility (SV) model, which has the same structure as the SCD model.

The solution proposed by Bauwens and Veredas in their original paper circumvented the integration problem: they used quasi-maximum likelihood (QML) estimation based on an approximation of the model by a linear state space representation, which makes it possible to use the Kalman filter to approximate the likelihood function. This method has the advantage of being simple and fast in terms of numerical computation and of providing consistent and asymptotically normal estimators (under suitable regularity conditions), but it is in principle suboptimal in finite samples.

To avoid approximations, other estimation procedures have been proposed in the literature on SV models, for instance the generalized method of moments (GMM), the efficient method of moments (EMM), and Bayesian estimation based on Markov chain Monte Carlo (MCMC) sampling. For a survey of these procedures see Ghysels et al. (1996).

∗ Corresponding address: CORE, Voie du Roman Pays 34, B-1348 Louvain-La-Neuve, Belgium. Tel.: +32 10474336; fax: +32 10474301.
E-mail address: [email protected] (L. Bauwens).
0167-9473/$ – see front matter © 2008 Elsevier B.V. All rights reserved.
doi:10.1016/j.csda.2008.02.014
Some of these methods have been extended to the estimation of SCD models. Ning (2004) proposes to estimate SCD models via the empirical characteristic function (ECF) and GMM. Maximum likelihood estimation based on MCMC integration of the latent variables is instead adopted by Feng et al. (2004) to estimate the SCD model in the form proposed by Bauwens and Veredas (2004) and an extended version with a leverage effect induced by the presence of past durations in the mean of the latent variable. MCMC is also used by Strickland et al. (2006) in the context of Bayesian estimation.

A relatively new method for computing the integral needed to evaluate the likelihood function of models with latent variables relies on the efficient importance sampling (EIS) procedure recently developed by Richard and Zhang (2007). This method is an extension of the well-known importance sampling technique and is particularly well suited to the multidimensional, though relatively well-behaved, integral needed for the evaluation of the SCD likelihood. An application to an extended family of SV models is provided by Liesenfeld and Richard (2003), who illustrate the flexibility of this algorithm. A related but different importance sampling method for computing the multiple integral of SV or SCD models has been proposed by Durbin and Koopman (1997).

The purpose of this article is to extend the EIS method of numerical integration to the original SCD model of Bauwens and Veredas (2004), to compare it with the QML estimation method on simulated and real data, and to use it to analyze some extensions of the original specification proposed by Bauwens and Veredas. Our main finding is that for the order of magnitude of the sample sizes used in practice, i.e. a few thousand, there is a substantial efficiency gain in favor of EIS-ML.

The article is organized in the following way. In Section 2 we present the main features of the SCD model which are necessary for the subsequent analysis. In Section 3 we detail the EIS numerical integration method employed for ML estimation of the SCD model. In Section 4 we apply this method to several simulated data sets and compare the sampling distributions of EIS-ML and QML estimates. In Section 5 the EIS-ML technique is applied to the same dataset used in Bauwens and Veredas (2004) and a comparison is drawn. Two examples of parametric extensions of the SCD model which are easily estimated by EIS-ML are provided in Section 6. The last section contains our concluding remarks.

2. SCD models: Main features

In this section, we present briefly the SCD model. A more detailed description can be found in Bauwens and Veredas (2004). If we denote by x_i the duration between two events that happened at times t_{i−1} and t_i, and assume that the stochastic process {x_i} generating the durations is doubly infinite (i goes from −∞ to +∞), the stochastic conditional duration model can be written as
$$x_i = \mathrm{e}^{\psi_i} \epsilon_i, \tag{1}$$
$$\psi_i = \omega + \beta \psi_{i-1} + u_i, \tag{2}$$
where |β| < 1 to ensure the stationarity of the process, with
$$u_i \sim \text{i.i.d. } N(0, \sigma^2) \tag{3}$$
and
$$\epsilon_i \sim \text{i.i.d. } p(\epsilon_i), \tag{4}$$
with p(·) a distribution with positive support, and u_j independent of ε_i, ∀i, j.
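For concreteness, the DGP defined by (1)–(4) is easy to simulate. The following minimal sketch (ours, not part of the original exposition) assumes the unit-scale Weibull distribution for ε_i that is adopted later in Eq. (13); the function name, the burn-in device and the initialization of ψ at its unconditional mean are our own choices.

```python
import numpy as np

def simulate_scd(n, omega, beta, sigma, gamma, burn=500, seed=0):
    """Simulate n durations from the SCD model (1)-(4), with a unit-scale
    Weibull(gamma) distribution for the error term epsilon_i."""
    rng = np.random.default_rng(seed)
    psi = omega / (1.0 - beta)                 # latent AR(1) started at its mean
    x = np.empty(n + burn)
    psis = np.empty(n + burn)
    for i in range(n + burn):
        psi = omega + beta * psi + sigma * rng.standard_normal()  # Eqs. (2)-(3)
        psis[i] = psi
        x[i] = np.exp(psi) * rng.weibull(gamma)                   # Eqs. (1) and (4)
    return x[burn:], psis[burn:]               # drop the burn-in

# one trajectory from the first DGP configuration used in Section 4
x, psi = simulate_scd(1000, omega=0.0, beta=0.9, sigma=0.2, gamma=1.1)
```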
The moments and autocorrelation function of this process are
$$E(x_i) = \mu\, \mathrm{e}^{\frac{\omega}{1-\beta} + \frac{1}{2}\frac{\sigma^2}{1-\beta^2}}, \tag{5}$$
$$1 + \delta_x^2 = (1 + \delta^2)\, \mathrm{e}^{\frac{\sigma^2}{1-\beta^2}}$$
and
$$\rho_k = \frac{\mathrm{e}^{\frac{\sigma^2 \beta^k}{1-\beta^2}} - 1}{(1 + \delta^2)\, \mathrm{e}^{\frac{\sigma^2}{1-\beta^2}} - 1} \approx \frac{\sigma^2 \beta^k / (1-\beta^2)}{(1 + \delta^2)\, \mathrm{e}^{\frac{\sigma^2}{1-\beta^2}} - 1} \approx \beta \rho_{k-1},$$
where μ stands for E(ε_i), δ_x for the dispersion index (i.e. the standard deviation to mean ratio) of x_i, and δ for the dispersion index of ε_i. The autocorrelation function ρ_k decreases geometrically at rate β only asymptotically with respect to k, while for small k the decrease rate is smaller.

Given a sequence x of n realizations of the process, with density g(x|ψ, θ_1) indexed by the parameter vector θ_1, conditional on a vector ψ of latent variables of the same dimension as x, and given the density h(ψ|θ_2) indexed by the parameter θ_2, the likelihood function of x can be written as
$$L(\theta; x) = L(\theta_1, \theta_2; x) = \int g(x|\psi, \theta_1)\, h(\psi|\theta_2)\, \mathrm{d}\psi. \tag{6}$$
Actually, the integrand in the previous equation is the joint density f(x, ψ|θ). Given the assumptions we made, it can be sequentially decomposed as
$$f(x, \psi|\theta) = \prod_{i=1}^{n} d(x_i, \psi_i \mid x_{i-1}, \psi_{i-1}, \theta) = \prod_{i=1}^{n} p(x_i \mid \psi_i, \theta_1)\, q(\psi_i \mid \psi_{i-1}, \theta_2), \tag{7}$$
where p(x_i|ψ_i, θ_1) is obtained from p(ε_i) using the change of variable in (1) (so that θ_1 corresponds to the parameters of p(·)), and q(ψ_i|ψ_{i−1}, θ_2) is the Gaussian density N(ω + βψ_{i−1}, σ²) (so that θ_2 includes ω, β and σ²). Given the functional forms usually adopted for p(ε_i) (Weibull, gamma, generalized gamma, ...), the multidimensional integral in (6) cannot be solved analytically and must be computed numerically by simulation.

To perform a QML estimation, one can use the following transformation of the model: ln x_i = η + ψ_i + ξ_i and ψ_i = ω + βψ_{i−1} + u_i, where ξ_i = ln ε_i − η and η = E(ln ε_i). This puts the model in state space form with zero mean errors. The ensuing distribution of ξ_i can be approximated by a Gaussian one, and the Kalman filter applied to compute the approximate likelihood.

3. EIS-based ML estimation

Alternative approaches to QML inference for the SCD model can be based on Monte Carlo methods. These methods are widely used in Bayesian inference to evaluate moments of high-dimensional posterior densities when they are not known analytically. One reason for their success is the relative ease with which they can be applied. EIS, in particular, allows a very accurate evaluation of the likelihood function and has been shown to be quite reliable for the estimation of SV models and of latent factor intensity models, see Bauwens and Hautsch (2006). Another advantage of this algorithm is that its basic structure does not depend on a specific model. This renders changes in the distributional assumptions for the underlying random variables rather simple. For a presentation of the foundations of the EIS method, we refer the reader to Richard and Zhang (2007). In this section, we present its implementation in the context of the ML estimation of SCD models.

A natural Monte Carlo (MC) estimate of the likelihood function in (7) is given by
$$\tilde{L}(\theta; x) = \frac{1}{S} \sum_{j=1}^{S} \left[ \prod_{i=1}^{n} p(x_i \mid \tilde{\psi}_i^{(j)}, \theta_1) \right], \tag{8}$$
where ψ̃_i^{(j)} denotes a draw from the density q(ψ_i|ψ_{i−1}, θ_2). This approach relies only on the information provided by the distributional assumptions of the model and does not use the information that comes from the observed sample. It turns out that this estimator is highly inefficient, since its sampling variance increases rapidly with the sample size. In any practical case of a duration data set, where the sample size n lies between 500 and 50 000 observations, the Monte Carlo sampling size S required to give precise enough estimates of L(θ; x) would be too high to be affordable, so that this estimator cannot be relied upon in practice.
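For reference, a direct implementation of (8) might look as follows (a sketch with our own naming; the n-fold product in (8) underflows in floating point for realistic n, so it is computed in logs with a log-sum-exp reduction, and the Weibull density used later in Eq. (13) is assumed for ε_i):

```python
import numpy as np
from scipy.special import logsumexp

def loglik_natural_mc(x, omega, beta, sigma, gamma, S=50, seed=0):
    """Natural MC estimate (8) of ln L(theta; x), accumulated in logs."""
    rng = np.random.default_rng(seed)
    logw = np.zeros(S)                          # ln prod_i p(x_i | psi_i^(j))
    psi = np.full(S, omega / (1.0 - beta))      # latent state started at its mean
    for xi in x:
        psi = omega + beta * psi + sigma * rng.standard_normal(S)  # draw from q
        # ln of the Weibull density, cf. Eq. (13)
        logw += (np.log(gamma) - gamma * psi
                 + (gamma - 1.0) * np.log(xi) - (xi / np.exp(psi))**gamma)
    return logsumexp(logw) - np.log(S)          # ln[(1/S) sum_j exp(logw_j)]
```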
EIS tries to make use of the information provided by the observed data in order to arrive at a reasonably fast and reliable numerical approximation. The principle of EIS is to replace the model-based sampler {q(ψ_i|ψ_{i−1}, θ_2)}_{i=1}^n with an optimal auxiliary parametric importance sampler. Let {m(ψ_i|ψ_{i−1}, a_i)}_{i=1}^n be a sequence of auxiliary samplers indexed by the set of auxiliary parameter vectors {a_i}_{i=1}^n. These densities can be defined as a parametric extension of the natural samplers {q(ψ_i|ψ_{i−1}, θ_2)}_{i=1}^n. We rewrite the likelihood function as
$$L(\theta; x) = \int \left[ \prod_{i=1}^{n} \frac{d(x_i, \psi_i \mid x_{i-1}, \psi_{i-1}, \theta)}{m(\psi_i \mid \psi_{i-1}, a_i)} \right] \prod_{i=1}^{n} m(\psi_i \mid \psi_{i-1}, a_i)\, \mathrm{d}\psi. \tag{9}$$
Then, its corresponding IS-MC estimator is given by
$$\tilde{L}(\theta; x, a) = \frac{1}{S} \sum_{j=1}^{S} \left[ \prod_{i=1}^{n} \frac{d(x_i, \tilde{\psi}_i^{(j)}(a_i) \mid x_{i-1}, \tilde{\psi}_{i-1}^{(j)}(a_{i-1}), \theta)}{m(\tilde{\psi}_i^{(j)}(a_i) \mid \tilde{\psi}_{i-1}^{(j)}(a_{i-1}), a_i)} \right], \tag{10}$$
where {ψ̃_i^{(j)}(a_i)}_{i=1}^n are trajectories drawn from the auxiliary samplers. The optimality criterion for choosing the auxiliary samplers is the minimization of the MC variance of (10). Relying on the factorized expression of the likelihood, the MC variance minimization problem can be decomposed into a sequence of subproblems for each element i of the sequence of observations, provided that the elements depending on the lagged values ψ_{i−1} are transferred back to the (i − 1)th minimization subproblem. More precisely, if we decompose m into the product of a function of ψ_i and ψ_{i−1} and one of ψ_{i−1} only, such that
$$m(\psi_i \mid \psi_{i-1}, a_i) = \frac{k(\psi_i, a_i)}{\chi(\psi_{i-1}, a_i)} = \frac{k(\psi_i, a_i)}{\int k(\psi_i, a_i)\, \mathrm{d}\psi_i}, \tag{11}$$
we can set up the following minimization problem:
$$\hat{a}_i(\theta) = \arg\min_{a_i} \sum_{j=1}^{S} \left\{ \ln\left[ d(x_i, \tilde{\psi}_i^{(j)} \mid \tilde{\psi}_{i-1}^{(j)}, x_{i-1}, \theta)\, \chi(\tilde{\psi}_i^{(j)}, \hat{a}_{i+1}) \right] - c_i - \ln k(\tilde{\psi}_i^{(j)}, a_i) \right\}^2, \tag{12}$$
where c_i is a constant that must be estimated along with a_i. If the density kernel k(ψ_i, a_i) belongs to the exponential family of distributions, the problem becomes linear in a_i, and this greatly improves the speed of the algorithm, as a least squares formula can be employed instead of an iterative routine. The estimated â_i are then substituted in (10) to obtain the EIS estimate of the likelihood. The EIS algorithm can be initialized by direct sampling, as in Eq. (8), to obtain a first series of ψ̃_i^{(j)}, and then iterated to allow the convergence of the sequences of {a_i}, which is usually obtained after 3–5 iterations. EIS-ML estimates are finally obtained by maximizing L̃(θ; x, a) with respect to θ.

If we adopt a Weibull distribution for ε_i with parameter γ (= θ_1) and a N(0, σ²) one for u_i, we come up with the following expressions:
$$p(x_i \mid \psi_i, \gamma) = \frac{\gamma}{\mathrm{e}^{\psi_i}} \left( \frac{x_i}{\mathrm{e}^{\psi_i}} \right)^{\gamma - 1} \exp\left\{ -\left( \frac{x_i}{\mathrm{e}^{\psi_i}} \right)^{\gamma} \right\} \tag{13}$$
and
$$q(\psi_i \mid \psi_{i-1}, \theta_2) = \frac{1}{\sigma\sqrt{2\pi}} \exp\left\{ -\frac{1}{2\sigma^2} (\psi_i - \omega - \beta\psi_{i-1})^2 \right\}. \tag{14}$$
A convenient choice for the auxiliary sampler m(ψ_i, a_i) is a parametric extension of the natural sampler q(ψ_i|ψ_{i−1}, θ_2), in order to obtain a good approximation of the integrand without too heavy a cost in terms of analytical complexity. Following Liesenfeld and Richard (2003), we can start with the following specification of the function k(ψ_i, a_i):
$$k(\psi_i, a_i) = q(\psi_i \mid \psi_{i-1}, \theta_2)\, \zeta(\psi_i, a_i), \tag{15}$$
where ζ(ψ_i, a_i) = exp{a_{1,i} ψ_i + a_{2,i} ψ_i²} and a_i = (a_{1,i}, a_{2,i}). This specification is rather straightforward and has two advantages. Firstly, as q(ψ_i|ψ_{i−1}, θ_2) enters in a multiplicative form, it cancels out in the objective function in (12), which becomes a least squares problem in which ln ζ(ψ_i, a_i) serves to approximate ln p(x_i|ψ_i, θ_1) + ln χ(ψ_i, a_i). Secondly, such a functional form for k leads to a distribution of the auxiliary sampler m(ψ_i, a_i) that remains Gaussian, as stated in the following theorem, whose proof is given in Appendix A.

Theorem 1. If the functional forms for q(ψ_i|ψ_{i−1}, θ_2) and k(ψ_i, a_i) are as in Eqs. (14) and (15) respectively, then the auxiliary density m(ψ_i|ψ_{i−1}, a_i) = k(ψ_i, a_i)/χ(ψ_{i−1}, a_i) is Gaussian, with conditional mean and variance given by
$$\mu_i = v_i^2 \left( \frac{\omega + \beta\psi_{i-1}}{\sigma^2} + a_{1,i} \right) \quad \text{and} \quad v_i^2 = \frac{\sigma^2}{1 - 2\sigma^2 a_{2,i}}, \tag{16}$$
and the function χ(ψ_{i−1}, a_i) is given by
$$\chi(\psi_{i-1}, a_i) = \frac{1}{\sqrt{1 - 2\sigma^2 a_{2,i}}} \exp\left\{ \frac{\sigma^2}{2(1 - 2\sigma^2 a_{2,i})} \left( \frac{\omega + \beta\psi_{i-1}}{\sigma^2} + a_{1,i} \right)^2 - \frac{1}{2} \left( \frac{\omega + \beta\psi_{i-1}}{\sigma} \right)^2 \right\}. \tag{17}$$
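In code, Theorem 1 amounts to a small helper returning the conditional moments in (16) and ln χ in (17). The sketch below is ours (vectorized over the draws) and is reused in the EIS pass illustrated after the step-by-step description that follows.

```python
import numpy as np

def aux_moments(psi_lag, a1, a2, omega, beta, sigma):
    """Mean and variance (16) and ln chi (17) of the Gaussian auxiliary sampler
    m(psi_i | psi_{i-1}, a_i); requires 1 - 2*sigma**2*a2 > 0."""
    m = omega + beta * psi_lag                  # mean of the natural sampler (14)
    denom = 1.0 - 2.0 * sigma**2 * a2
    v2 = sigma**2 / denom                       # conditional variance, Eq. (16)
    mu = v2 * (m / sigma**2 + a1)               # conditional mean, Eq. (16)
    lnchi = (-0.5 * np.log(denom)               # ln of Eq. (17)
             + 0.5 * sigma**2 / denom * (m / sigma**2 + a1)**2
             - 0.5 * (m / sigma)**2)
    return mu, v2, lnchi
```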
By applying these results, it is possible to compute the likelihood function of the SCD model for a given value of θ, based upon the following steps:

Step 1. Use the natural sampler q(ψ_i|ψ_{i−1}, θ_2) to draw S trajectories of the latent variable {ψ̃_i^{(j)}}_{i=1}^n, as in (8).

Step 2. The draws obtained in step 1 are used to solve for each i (in the order from n to 1) the least squares problems described in (12), which take the form of the auxiliary linear regression
$$\ln \gamma - \gamma \tilde{\psi}_i^{(j)} + (\gamma - 1) \ln x_i - \left( \frac{x_i}{\mathrm{e}^{\tilde{\psi}_i^{(j)}}} \right)^{\gamma} + \ln \chi(\tilde{\psi}_i^{(j)}, \hat{a}_{i+1}) = a_{0,i} + a_{1,i} \tilde{\psi}_i^{(j)} + a_{2,i} (\tilde{\psi}_i^{(j)})^2 + \varepsilon_i^{(j)}, \quad j = 1, \ldots, S,$$
where ε_i^{(j)} is the error term, a_{0,i} is a constant term, and χ(ψ̃_i^{(j)}, â_{i+1}) is set equal to 1 for i = n and defined by (17) for i < n. The reverse ordering from n to 1 is due to the fact that determining â_i requires â_{i+1}, see (12).

Step 3. Use the estimated auxiliary parameters â_i to obtain S trajectories {ψ̃_i^{(j)}(â_i)}_{i=1}^n from the auxiliary sampler m(ψ_i|ψ_{i−1}, â_i), applying the result of Theorem 1.

Step 4. Return to step 2, this time using the draws obtained with the auxiliary sampler.

Steps 2, 3 and 4 are usually iterated a small number of times (from 3 to 5), until a reasonable convergence of the parameters â_i is obtained.
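The following sketch puts Steps 1–4 together with the final evaluation of formula (10) discussed next. It is an illustration only: it assumes the Weibull log-density of (13), the aux_moments helper above, a latent process initialized at its unconditional mean (a detail the paper does not spell out), and it implements common random numbers simply by fixing one set of N(0, 1) innovations.

```python
import numpy as np
from scipy.special import logsumexp

def eis_loglik(x, omega, beta, sigma, gamma, S=50, iters=5, seed=0):
    """EIS estimate of ln L(theta; x): Steps 1-4, then formula (10)."""
    n = len(x)
    u = np.random.default_rng(seed).standard_normal((n, S))  # common random numbers
    psi0 = np.full(S, omega / (1.0 - beta))    # initialization at the mean of psi
    psi = np.empty((n, S))
    lag = psi0                                 # Step 1: draws from the natural sampler
    for i in range(n):
        psi[i] = omega + beta * lag + sigma * u[i]
        lag = psi[i]
    a = np.zeros((n, 3))                       # (a0, a1, a2) for each observation
    for _ in range(iters):                     # Steps 2-4, iterated a few times
        lnchi = np.zeros(S)                    # chi = 1 for i = n
        for i in range(n - 1, -1, -1):         # Step 2: backward regressions (12)
            y = (np.log(gamma) - gamma * psi[i] + (gamma - 1) * np.log(x[i])
                 - (x[i] / np.exp(psi[i]))**gamma + lnchi)
            X = np.column_stack([np.ones(S), psi[i], psi[i]**2])
            a[i], *_ = np.linalg.lstsq(X, y, rcond=None)
            lag = psi[i - 1] if i > 0 else psi0
            _, _, lnchi = aux_moments(lag, a[i, 1], a[i, 2], omega, beta, sigma)
        lag = psi0                             # Steps 3-4: redraw from m, same u
        for i in range(n):
            mu, v2, _ = aux_moments(lag, a[i, 1], a[i, 2], omega, beta, sigma)
            psi[i] = mu + np.sqrt(v2) * u[i]
            lag = psi[i]
    logw = np.zeros(S)                         # formula (10), computed in logs
    lag = psi0
    for i in range(n):
        logp = (np.log(gamma) - gamma * psi[i] + (gamma - 1) * np.log(x[i])
                - (x[i] / np.exp(psi[i]))**gamma)                     # Eq. (13)
        logq = -0.5 * (np.log(2 * np.pi * sigma**2)
                       + ((psi[i] - omega - beta * lag) / sigma)**2)  # Eq. (14)
        mu, v2, _ = aux_moments(lag, a[i, 1], a[i, 2], omega, beta, sigma)
        logm = -0.5 * (np.log(2 * np.pi * v2) + (psi[i] - mu)**2 / v2)
        logw += logp + logq - logm             # ln(d/m) accumulated over i
        lag = psi[i]
    return logsumexp(logw) - np.log(S)
```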
Once the auxiliary trajectories have attained a reasonable degree of convergence, the simulated samples can be plugged into formula (10) to obtain an EIS estimate of the likelihood. This procedure is embedded in a numerical maximization algorithm that converges to a maximum of the likelihood function. After convergence, we compute the standard errors from the Hessian matrix.

Throughout the EIS steps described above and their iterations, we employed a single set of simulated random numbers to obtain the draws from the auxiliary sampler. This technique, known as common random numbers, is motivated in Richard and Zhang (2007). The same random numbers were also employed for each of the likelihood evaluations required by the maximization algorithm. The number of draws used (S in Eq. (10)) for all estimations in this article is equal to 50.

4. Simulation results

In order to assess the gain in performance allowed by EIS-ML estimation in comparison with QML, we conducted several repeated simulation experiments with different parameter configurations.
Table 1
Sampling means, standard deviations and mean-squared errors of 1000 estimates of the SCD model parameters for simulated series of 250 observations

       DGP       Mean EIS   Mean QML   St. dev. EIS   St. dev. QML   MSE EIS   MSE QML
ω      0.0000    0.0064     0.0000     0.0325         0.0649         0.0010    0.0042
β      0.9000    0.8262     0.7601     0.1482         0.2958         0.0274    0.1071
σ      0.2000    0.2431     0.2771     0.1073         0.1939         0.0133    0.0435
γ      1.1000    1.1261     1.1823     0.0792         0.4732         0.0069    0.2307

ω      0.0000    0.0083     −0.0332    0.0381         0.1373         0.0015    0.0199
β      0.9000    0.5963     0.0941     0.2784         0.5847         0.1697    0.9912
σ      0.0500    0.1703     0.3521     0.1384         0.3583         0.0336    0.2197
γ      1.1000    1.1415     1.5280     0.0801         1.2059         0.0081    1.6373

ω      0.0000    0.01471    −0.0729    0.0534         0.2164         0.0030    0.0521
β      0.9000    0.62392    −0.0906    0.2828         0.5489         0.1562    1.2827
σ      0.0500    0.18714    0.5391     0.1982         0.5122         0.0581    0.5016
γ      0.8000    0.83921    1.3096     0.0779         1.4544         0.0076    2.3752

ω      0.0000    0.0042     −0.0235    0.0482         0.1411         0.0023    0.0204
β      0.9000    0.7804     0.4938     0.2092         0.5288         0.0580    0.4446
σ      0.2000    0.2734     0.4394     0.1657         0.3681         0.0328    0.1928
γ      0.8000    0.8261     0.9536     0.0634         0.6779         0.0047    0.4831
As the QML estimator should be consistent but inefficient in relatively small samples, trajectories of 250, 500, 1000, 5000 and 10 000 observations from a SCD data generating process (DGP) were simulated 1000 times, and the model was estimated both by EIS-ML and by QML. The idea of using as many as 10 000 observations comes from the wish to judge the loss of efficiency of QML relative to EIS-ML. Moreover, such sample sizes are far from unusual for real data sets of durations.

The estimations were performed using the MaxSQP maximization function of Ox console 3.40, under Windows XP with a dual core Intel 2.0 GHz processor. The speed of QML estimation with the Kalman filter varies from an average of 0.25 s for a series of 250 observations to an average of 7.5 s for a series of 10 000. EIS-ML estimation is much slower, with average computing times of 2.5 s (250 observations) and 144 s (10 000 observations) respectively. This should not come as a surprise, and we suspect that alternative estimation strategies, such as Bayesian MCMC, would be even slower than EIS-ML, as the results of Bauwens and Rombouts (2004) for the SV model clearly show.

The DGP is defined by Eqs. (1)–(4), plus formula (13). The parameter values used in the simulations of the DGP were the following:
• ω = 0.0,
• β = 0.9,
• σ = 0.05 and 0.2,
• γ = 0.8 and 1.1,
thus leading to four combinations. The starting values for the likelihood optimizations were set for all estimations to ω = 0.0, β = 0.85, σ = 0.15 and γ = 1.05, but we checked that other reasonable starting values provided quite similar results to those discussed below.

Tables 1–5 contain the means, standard deviations and mean-squared errors of the 1000 estimates for each experiment, and Figs. 1–4 display the corresponding sampling densities (obtained by kernel-based smoothing). As a first remark, it can be noticed that with both estimation methods there is a tendency to underestimate the autoregressive parameter β and to overestimate the parameter σ, especially when the latter takes the low value of 0.05. Even in these cases, however, the EIS-ML method provides estimates which are on average closer to the DGP parameters than the QML ones. The most striking result concerns the efficiency of the estimators: the estimated standard deviations of the EIS-ML estimates are always remarkably smaller than the QML ones, in particular when the parameter σ is equal to 0.05. The combination of smaller bias and variance is clearly reflected in the mean-squared errors, which are distinctly lower across the board for the EIS-ML method. The better general performance of the EIS-ML estimator can also be appreciated by a visual inspection of the sampling densities.
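Schematically, one cell of this experimental design reduces to the following loop, where fit_eis and fit_qml stand for hypothetical wrappers around the two estimation routines and simulate_scd is the sketch of Section 2 (our illustration, not the Ox code actually used for the paper):

```python
import numpy as np

def mc_experiment(n, dgp, R=1000):
    """R simulated series of length n, estimated by both methods."""
    true = np.array([dgp['omega'], dgp['beta'], dgp['sigma'], dgp['gamma']])
    est = {'EIS': np.empty((R, 4)), 'QML': np.empty((R, 4))}
    for r in range(R):
        x, _ = simulate_scd(n, seed=r, **dgp)  # DGP: Eqs. (1)-(4) plus (13)
        est['EIS'][r] = fit_eis(x)             # hypothetical EIS-ML wrapper
        est['QML'][r] = fit_qml(x)             # hypothetical QML/Kalman wrapper
    return {m: {'mean': e.mean(0), 'st.dev': e.std(0),
                'mse': ((e - true)**2).mean(0)} for m, e in est.items()}
```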
Table 2
Sampling means, standard deviations and mean-squared errors of 1000 estimates of the SCD model parameters for simulated series of 500 observations

       DGP       Mean EIS   Mean QML   St. dev. EIS   St. dev. QML   MSE EIS   MSE QML
ω      0.0000    0.0038     −0.0021    0.0146         0.0255         0.0002    0.0006
β      0.9000    0.8699     0.8467     0.0770         0.1476         0.0068    0.0246
σ      0.2000    0.2186     0.2357     0.0706         0.1149         0.0053    0.0144
γ      1.1000    1.1089     1.1202     0.0535         0.0778         0.0029    0.0064

ω      0.0000    0.0043     −0.0433    0.0255         0.1190         0.0006    0.0160
β      0.9000    0.6200     0.0981     0.2776         0.5902         0.1554    0.9913
σ      0.0500    0.1430     0.3270     0.1132         0.3392         0.0214    0.1918
γ      1.1000    1.1254     1.4719     0.0562         1.1315         0.0038    1.4187

ω      0.0000    0.0068     −0.0701    0.0368         0.1831         0.0014    0.0384
β      0.9000    0.6602     −0.0944    0.2700         0.5288         0.1303    1.2686
σ      0.0500    0.1483     0.4989     0.1534         0.4563         0.0332    0.4098
γ      0.8000    0.8224     1.2012     0.0584         1.2614         0.0039    1.7521

ω      0.0000    0.0035     −0.0221    0.0261         0.0994         0.0006    0.0103
β      0.9000    0.8442     0.6473     0.1308         0.4308         0.0202    0.2494
σ      0.2000    0.2372     0.3708     0.1119         0.3157         0.0139    0.1289
γ      0.8000    0.8109     0.8863     0.0428         0.4887         0.0019    0.2462
Table 3
Sampling means, standard deviations and mean-squared errors of 1000 estimates of the SCD model parameters for simulated series of 1000 observations

       DGP       Mean EIS   Mean QML   St. dev. EIS   St. dev. QML   MSE EIS   MSE QML
ω      0.0000    0.0019     0.0001     0.0090         0.0101         0.0000    0.0001
β      0.9000    0.8859     0.8803     0.0423         0.0626         0.0019    0.0043
σ      0.2000    0.2099     0.2147     0.0428         0.0674         0.0019    0.0047
γ      1.1000    1.1037     1.1098     0.0352         0.0414         0.0012    0.0018

ω      0.0000    0.0012     −0.0516    0.0166         0.1122         0.0002    0.0152
β      0.9000    0.6883     0.1722     0.2571         0.5722         0.1109    0.8570
σ      0.0500    0.1074     0.3315     0.0882         0.3381         0.0110    0.1935
γ      1.1000    1.1133     1.4830     0.0365         1.1450         0.0015    1.4579

ω      0.0000    0.0018     −0.0915    0.0198         0.1822         0.0003    0.0415
β      0.9000    0.7376     −0.0241    0.2367         0.5091         0.0823    1.1131
σ      0.0500    0.1056     0.5402     0.1166         0.4680         0.0167    0.4594
γ      0.8000    0.8111     1.2733     0.0383         1.3596         0.0015    2.0726

ω      0.0000    0.0016     −0.009     0.0187         0.0514         0.0003    0.0027
β      0.9000    0.8712     0.7818     0.0957         0.2838         0.0100    0.0945
σ      0.2000    0.2183     0.2974     0.0718         0.2360         0.0055    0.0652
γ      0.8000    0.8043     0.8285     0.0300         0.1498         0.0009    0.0232
Looking at the tables and at Figs. 2 and 3, it is easy to see how poor the performance of the QML estimator is when the parameter σ is small. To illustrate the issue further, Fig. 5 shows for each parameter the graph of the estimated standard deviations against the value of σ in the DGP, for a sample size of 1000. These results are based on additional simulations (for values of σ ranging from 0.02 to 0.7). The figure shows that for small values of σ in the DGP, both estimation methods tend to be imprecise. This is understandable since, as σ tends to 0, the parameter β becomes unidentified, so that the likelihood function becomes flat. We also see in Figs. 2 and 3 that the sampling distribution of the estimates of β has a mode at (or close to) zero. However, this problem is far more pronounced for QML than for EIS-ML.
Table 4
Sampling means, standard deviations and mean-squared errors of 1000 estimates of the SCD model parameters for simulated series of 5000 observations

       DGP       Mean EIS   Mean QML   St. dev. EIS   St. dev. QML   MSE EIS   MSE QML
ω      0.0000    0.0005     0.0002     0.0031         0.0034         0.0000    0.0000
β      0.9000    0.8982     0.8959     0.0152         0.0201         0.0002    0.0004
σ      0.2000    0.1999     0.2035     0.0175         0.0263         0.0003    0.0007
γ      1.1000    1.0994     1.1026     0.0146         0.0173         0.0002    0.0003

ω      0.0000    −0.0002    −0.0302    0.0052         0.0703         0.0000    0.0058
β      0.9000    0.7793     0.3099     0.2142         0.4937         0.0604    0.5919
σ      0.0500    0.0785     0.2934     0.0522         0.2516         0.0035    0.1225
γ      1.1000    1.1048     1.2761     0.0162         0.7875         0.0002    0.6512

ω      0.0000    −0.0003    −0.0767    0.0068         0.1265         0.0000    0.0219
β      0.9000    0.7893     0.0781     0.2135         0.3672         0.0578    0.8103
σ      0.0500    0.0786     0.5901     0.0755         0.3513         0.0065    0.4152
γ      0.8000    0.8033     1.0754     0.0154         1.0024         0.0002    1.0807

ω      0.0000    0.0003     −0.0001    0.0047         0.0069         0.0000    0.0000
β      0.9000    0.8931     0.8909     0.0380         0.0536         0.0014    0.0029
σ      0.2000    0.2024     0.2086     0.0295         0.0612         0.0008    0.0038
γ      0.8000    0.8001     0.8023     0.0109         0.0158         0.0001    0.0002
Table 5
Sampling means, standard deviations and mean-squared errors of 1000 estimates of the SCD model parameters for simulated series of 10 000 observations

       DGP       Mean EIS   Mean QML
ω      0.0000
β      0.9000
σ      0.2000
γ      1.1000