Dear R and stats wizards:

I would like to estimate an AR(1) model with a constant and measurement noise:

   true[t]     = a + b*true[t-1] + noise1[t]
   observed[t] = true[t] + noise2[t]

(true is never observed.)

I am very interested in forecasting observed[t+1], and modestly interested in inferring b and true[t]. I have a lot of data. In truth, I really have a panel with thousands of individuals, so I don't get the usual strong AR(1) bias when b is close to 1.

My intuition is that a good forecast of observed[t+1] (and thus of true[t]) is a weighted average of past observed values, with more weight on more recent observations, and in effect shrunk towards the long-run mean. By simulating the model, I can see what the autocorrelogram looks like and fit it. However, both of these approaches are amateurish; this problem seems so canonical that it has probably been solved a gazillion times.

Could someone please point me to some simple textbook or how-to treatments of this problem and/or R packages that implement it? Feel free to point out your own work...this way I can cite it.

regards,

/iaw

----
Ivo Welch (ivo.we...@gmail.com)
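P.S. In case a concrete toy version helps: here is a minimal, self-contained simulation of a single series from the model, together with the quick-and-dirty fit I had in mind. The parameter values are made up, and the fit relies on the standard fact that an AR(1) signal plus independent white measurement noise has an ARMA(1,1) representation, so this is only a sketch of my amateur approach, not of what I am asking for.

   ## simulate one series from the model (made-up parameter values)
   set.seed(1)
   n  <- 2000
   a  <- 0.1     # constant
   b  <- 0.95    # AR(1) coefficient
   s1 <- 0.5     # sd of noise1 (innovation in the true process)
   s2 <- 1.0     # sd of noise2 (measurement noise)

   true <- numeric(n)
   true[1] <- a / (1 - b)                  # start at the long-run mean
   for (t in 2:n)
     true[t] <- a + b * true[t - 1] + rnorm(1, sd = s1)
   observed <- true + rnorm(n, sd = s2)

   ## observed[t] follows an ARMA(1,1), so one quick fit/forecast is
   fit <- arima(observed, order = c(1, 0, 1))
   fit                                     # the AR coefficient estimates b
   predict(fit, n.ahead = 1)               # one-step forecast of observed[n+1]

This gives me a forecast of observed[t+1], but not filtered estimates of true[t], which is part of why I am asking for the canonical (state-space?) treatment.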