Hi Jeroen,
I have also seen that adding correlations often gives an impressive improvement in
the objective function. However, very often when I test that model using
cross-validation, the predictive performance is *worse* than that of the model
without the correlation. I would call that classic over-fitting.
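In NM-TRAN terms, "adding a correlation" means replacing two diagonal $OMEGA
entries with a block; a minimal sketch, where the values and the CL/V
interpretation are illustrative only:

  $OMEGA BLOCK(2)
   0.1        ; variance of ETA(1), e.g. IIV on CL
   0.05 0.1   ; CL-V covariance, then variance of ETA(2), e.g. IIV on V

The off-diagonal 0.05 is the extra parameter whose drop in objective function
may not survive cross-validation.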
Douglas makes an important point in this discussion: the method used
to judge the parsimony of the model must consider the performance of the model
for its intended purpose.
Consider the parsimony principle: "all things being equal, choose the
simpler model". The key is in how to judge the first part
Hi All,
I agree with everything that Marc and Douglas have pointed out. I too do not
advise building the omega structure based on repeated likelihood ratio tests.
The approach I take is more akin to what Joe had suggested earlier: using SAEM
to fit the full block omega structure and then lo
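A sketch of that first step, with the block size, initial estimates, and
estimation settings all illustrative rather than recommended:

  $OMEGA BLOCK(3)      ; full block across all ETAs (3 here for illustration)
   0.1
   0.01 0.1
   0.01 0.01 0.1
  $ESTIMATION METHOD=SAEM INTERACTION NBURN=2000 NITER=1000 PRINT=100

SAEM tends to handle full blocks more stably than FOCE, which is why it is a
natural choice for this exploratory fit.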
Hi all,
I agree with what Ken and Marc have said. On the point of a full matrix as a
diagnostic, which I think is a good one, an alternative is to run a nonparametric
estimation ($NONP) after your normal estimation. Even if you did not use a full
block in the original estimation, this step will give
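For anyone who has not run $NONP before, the sequence is roughly as follows
(settings are illustrative; by default the nonparametric step places its
support points at the subjects' empirical Bayes estimates from the preceding
parametric fit):

  $ESTIMATION METHOD=1 INTERACTION MAXEVAL=9999
  $NONPARAMETRIC UNCONDITIONAL

Inspecting the joint nonparametric ETA distribution can then reveal
correlations even when the parametric $OMEGA was diagonal.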
Hi everybody,
Nice discussion! Good to hear that we seem to be in agreement on how to deal
with off-diagonal elements. Thanks for all your feedback!
I would like to underscore Mats' comment about the expanded grid option. In
my experience too it seems to work very well as an efficient approach
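If I recall correctly, in recent NONMEM versions (7.4+) the expanded grid is
requested on the nonparametric record; something like the sketch below, where
the option name NPSUPP and the value are from memory and should be checked
against the guide for your version:

  $NONPARAMETRIC UNCONDITIONAL NPSUPP=300  ; ~300 support points rather than
                                           ; the default of one per subject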
Hi All,
My own anecdotal experiences are consistent with Mats’ comment that a variance
can be biased when a diagonal omega structure is imposed. When fitting a
diagonal omega structure I sometimes find that a particular variance component
may be estimated near zero. However, as soon as you
Dear NM users:
I have a dataset where some of the concentrations are reported as negative
values. I believe that the concentrations were calculated using a standard
curve.
My instinct is to impute all the negative values to zero, but I worry that this
will introduce bias.
A second thought is using the
hi Siwei,
you should include the BLOQ data as they are, i.e. negative. Any other
approach would decrease precision (e.g. M3 likelihood-based) and/or induce
bias (e.g. LLOQ/2 or LLOQ=0). I did some simulations a while ago
to show this (
http://page-meeting.org/pdf_assets/2413-PAGE_2010_p
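For readers who have not seen it, the M3 method mentioned above (Beal's M3)
keeps BLOQ records but replaces their residual contribution with the
likelihood that the observation lies below the LLOQ. A minimal NM-TRAN sketch
of the idiom, where the BLQ data column, the LLOQ of 0.1, and the error
magnitudes are all illustrative assumptions:

  $ERROR
   IPRED = F
   W     = SQRT(0.04*IPRED*IPRED + 0.01)  ; illustrative combined-error SD
   IF (BLQ.EQ.0) THEN
     F_FLAG = 0
     Y = IPRED + W*EPS(1)                 ; quantified sample
   ELSE
     F_FLAG = 1
     Y = PHI((0.1 - IPRED)/W)             ; P(conc < LLOQ of 0.1)
   ENDIF
  $SIGMA 1 FIX                            ; W carries the error magnitude
  ; F_FLAG requires e.g. $EST METHOD=1 LAPLACIAN INTERACTION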
Siwei,
I agree with Ron. Using the measurements you have is better than trying
to use a workaround such as likelihood- or imputation-based methods.
Some negative measurement values are exactly what you should expect if
the true concentration is zero (or 'close' to zero) when there is
background noise
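In practice, keeping the negative observations only requires a residual error
model that assigns them nonzero likelihood, i.e. one with an additive
component; a minimal sketch with illustrative magnitudes:

  $ERROR
   IPRED = F
   Y = IPRED + IPRED*EPS(1) + EPS(2)  ; additive EPS(2) lets Y fall below zero
  $SIGMA
   0.04    ; proportional variance (illustrative)
   0.0025  ; additive variance (illustrative)

A purely proportional model would force the predicted SD to zero as IPRED
approaches zero, which is exactly where the negative measurements occur.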