On Thu, 15 Nov 2007, [EMAIL PROTECTED] wrote:

> Hi,
>
> Following the convention below:
>
>   y(t) = A x(t)   + B u(t) + eps(t)   # observation eq
>   x(t) = C x(t-1) + D u(t) + eta(t)   # state eq
>
> I modified the following routine (which I copied from
> http://www.stat.pitt.edu/stoffer/tsa2/Rcode/Kall.R) to accommodate u(t),
> an exogenous input to the system.
>
>   for (i in 2:N){
>     xp[[i]] = C %*% xf[[i-1]]
>     Pp[[i]] = C %*% Pf[[i-1]] %*% t(C) + Q
>     siginv  = A[[i]] %*% Pp[[i]] %*% t(A[[i]]) + R
>     sig[[i]] = (t(siginv) + siginv)/2     # make sure sig is symmetric
>     siginv  = solve(sig[[i]])             # now siginv is sig[[i]]^{-1}
>     K = Pp[[i]] %*% t(A[[i]]) %*% siginv
>     innov[[i]] = as.matrix(yobs[i,]) - A[[i]] %*% xp[[i]]
>     xf[[i]] = xp[[i]] + K %*% innov[[i]]
>     Pf[[i]] = Pp[[i]] - K %*% A[[i]] %*% Pp[[i]]
>     like = like + log(det(sig[[i]])) + t(innov[[i]]) %*% siginv %*% innov[[i]]
>   }
>   like = 0.5*like
>   list(xp=xp, Pp=Pp, xf=xf, Pf=Pf, like=like, innov=innov, sig=sig, Kn=K)
> }
>
> I tried to fit my model and observed that I get a positive log-likelihood,
> mainly because the log of the determinant of my variance matrix is largely
> negative. That's not good, because it should be positive. Has anyone
> experienced this kind of instability?
Why are you expecting that? The magnitude of the likelihood depends on the
scale of your data, and hence so does the sign of its log.

> Also, I realize that I have about 800 sample points. The above routine,
> when plugged into optim, becomes very slow. Could anyone share a faster
> way to compute the Kalman filter?

Try

  help.search("kalman", agrep=FALSE)

  kalsmo.car(cts)       Compute Components with the Kalman Smoother
  kalsmo.comp(cts)      Estimate Components with the Kalman Smoother
  nbkal(repeated)       Negative Binomial Models with Kalman Update
  extended(sspir)       Iterated Extended Kalman Smoothing
  kfilter(sspir)        Kalman filter for Gaussian state space model
  kfs(sspir)            (Iterated extended) Kalman smoother
  smoother(sspir)       Kalman smoother for Gaussian state space model
  tsmooth(timsac)       Kalman Filter
  KalmanLike(stats)     Kalman Filtering

> And my last problem is that optim over my defined feasible space does not
> converge. I have about 20 variables that I need to identify by MLE. Is
> there any other way that I can try? I have tried most of the methods
> available in optim already; they do not converge at all.

Most likely because of errors in your code. optim() solves quite large
Kalman filter problems for arima().

-- 
Brian D. Ripley,                  [EMAIL PROTECTED]
Professor of Applied Statistics,  http://www.stats.ox.ac.uk/~ripley/
University of Oxford,             Tel:  +44 1865 272861 (self)
1 South Parks Road,                     +44 1865 272866 (PA)
Oxford OX1 3TG, UK                Fax:  +44 1865 272595

______________________________________________
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-help
PLEASE do read the posting guide http://www.R-project.org/posting-guide.html
and provide commented, minimal, self-contained, reproducible code.
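[To illustrate the scale point with a minimal R sketch (the covariance
matrices here are made up for illustration, not taken from the poster's
model): log(det(sig)) is negative whenever det(sig) < 1, which happens
automatically when the data are on a small scale, so a positive value of
the quantity accumulated in `like` above indicates nothing wrong by itself.]

```r
# Sign of log(det(sig)) depends only on the scale of the data.
# Hypothetical 2x2 innovation covariances:
sig_small <- diag(c(0.01, 0.02))  # data measured on a small scale
sig_large <- diag(c(10, 20))      # same data, rescaled upward

log(det(sig_small))  # negative, since det(sig_small) = 2e-4 < 1
log(det(sig_large))  # positive, since det(sig_large) = 200 > 1

# Rescaling y(t) by a constant c multiplies det(sig) by c^(2p) (p = dim of y),
# shifting the log-likelihood by a constant; the maximizer is unchanged.
```

[So the sign of the log-likelihood carries no information about correctness;
only relative values across parameter settings matter for MLE.]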