The Kalman filter for general state space models, especially in its
naive form, is known to be numerically unstable. This is why square
root filters, based on a Cholesky decomposition of the relevant
variance matrices, were developed. In package dlm the Kalman filter is
implemented using the singular value decomposition (SVD) of those
variance matrices, which gives a more robust algorithm than the
standard square root filter.
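
As a minimal sketch of what that looks like in practice (assuming dlm
is installed; the data and variance values below are made up for
illustration), you fit a local level model and filter it with
dlmFilter(), which works with SVD factors of the variance matrices
internally:

```r
library(dlm)

## Simulated data from a random walk plus noise (local level) model
set.seed(1)
y <- cumsum(rnorm(100)) + rnorm(100, sd = 0.5)

## Local level model: dV = observation variance, dW = system variance
## (values here are arbitrary, for illustration only)
mod <- dlmModPoly(order = 1, dV = 0.25, dW = 1)

filt <- dlmFilter(y, mod)
head(filt$m)   # filtered state estimates
```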


The lack of convergence of optimization algorithms in finding MLEs of
unknown parameters is not surprising, especially with many parameters.
With state space models it is easy to end up with a fairly flat or
multimodal likelihood function; both are signs of poor identifiability
of the model. You can live with it, but be aware that it is there. My
suggestion is to start the optimization from several different initial
values and compare the maximized values of the likelihood. Simulated
annealing may be used to explore the parameter space more thoroughly.
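
Concretely, the multiple-starts idea might look like the following
sketch (the build() function, parameterization, and starting values
are hypothetical, not from your model; dlmMLE() and dlmLL() are from
package dlm, and method = "SANN" in optim() gives simulated
annealing):

```r
library(dlm)

## Hypothetical local level model; exp() keeps variances positive
build <- function(par) dlmModPoly(1, dV = exp(par[1]), dW = exp(par[2]))

set.seed(2)
y <- cumsum(rnorm(100)) + rnorm(100)

## Restart the optimizer from several initial values, keep the best
starts <- list(c(0, 0), c(-2, 2), c(2, -2))
fits   <- lapply(starts, function(p) dlmMLE(y, parm = p, build = build))
best   <- fits[[which.min(sapply(fits, `[[`, "value"))]]

## Or explore the parameter space first with simulated annealing,
## then polish the result with a gradient-based method
sann <- optim(c(0, 0), function(p) dlmLL(y, build(p)), method = "SANN")
fit  <- dlmMLE(y, parm = sann$par, build = build)
```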

HTH,
Giovanni



> Date: Thu, 15 Nov 2007 04:41:26 +0000 (GMT)
> From: [EMAIL PROTECTED]
> Sender: [EMAIL PROTECTED]
> Priority: normal
> Precedence: list
> 
> Hi,
> 
> Following the convention below:
> y(t) = Ax(t)+Bu(t)+eps(t) # observation eq
> x(t) = Cx(t-1)+Du(t)+eta(t) # state eq
> 
> I modified the following routine (which I copied from: 
> http://www.stat.pitt.edu/stoffer/tsa2/Rcode/Kall.R) to accommodate u(t), an 
> exogenous input to the system. 
> 
> for (i in 2:N){
>   xp[[i]] = C %*% xf[[i-1]]
>   Pp[[i]] = C %*% Pf[[i-1]] %*% t(C) + Q
>   siginv = A[[i]] %*% Pp[[i]] %*% t(A[[i]]) + R
>   sig[[i]] = (t(siginv) + siginv)/2   # make sure sig is symmetric
>   siginv = solve(sig[[i]])            # now siginv is sig[[i]]^{-1}
>   K = Pp[[i]] %*% t(A[[i]]) %*% siginv
>   innov[[i]] = as.matrix(yobs[i,]) - A[[i]] %*% xp[[i]]
>   xf[[i]] = xp[[i]] + K %*% innov[[i]]
>   Pf[[i]] = Pp[[i]] - K %*% A[[i]] %*% Pp[[i]]
>   like = like + log(det(sig[[i]])) + t(innov[[i]]) %*% siginv %*% innov[[i]]
> }
> like = 0.5*like
> list(xp=xp, Pp=Pp, xf=xf, Pf=Pf, like=like, innov=innov, sig=sig, Kn=K)
> }
> 
> I tried to fit my problem and observed that I got a positive log
> likelihood, mainly because the log of the determinant of my variance
> matrix is largely negative. That's not good because the determinants
> should be positive. Has anyone experienced this kind of instability?
> 
> Also, I have about 800 sample points, and the above routine becomes
> very slow when plugged into optim. Could anyone share a faster way to
> compute the Kalman filter?
> 
> And my last problem is that optim, with my defined feasible space,
> does not converge. I have about 20 parameters that I need to identify
> by MLE. Is there any other way that I can try? I have already tried
> most of the methods available in optim, and none of them converge at
> all. Thank you.
> 
> - adschai
> 
> ______________________________________________
> R-help@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-help
> PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
> and provide commented, minimal, self-contained, reproducible code.
> 
> 

-- 

Giovanni Petris  <[EMAIL PROTECTED]>
Department of Mathematical Sciences
University of Arkansas - Fayetteville, AR 72701
Ph: (479) 575-6324, 575-8630 (fax)
http://definetti.uark.edu/~gpetris/

______________________________________________
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.

Reply via email to