Hi all,
Given their usefulness, maybe we should be trying to use nonparametric
bootstraps more often, at key decision points in model development,
especially now that the ready availability of computing power has made
this realistic. (I guess many of us already do this, but it's a point
worth making.)
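A minimal sketch of the case-resampling idea, in Python rather than PsN,
assuming a hypothetical estimate_fn that refits the model to a resampled
set of subject ids (in practice PsN's bootstrap tool does the resampling
and the NONMEM re-estimation for you):

    import numpy as np

    def percentile_bootstrap_ci(ids, estimate_fn, n_boot=1000, alpha=0.05, seed=1):
        # Resample subjects with replacement, re-estimate, and take
        # percentiles of the bootstrap parameter estimates
        # (assumes estimate_fn returns a single parameter estimate).
        rng = np.random.default_rng(seed)
        estimates = [estimate_fn(rng.choice(ids, size=len(ids), replace=True))
                     for _ in range(n_boot)]
        return np.percentile(estimates, [100 * alpha / 2, 100 * (1 - alpha / 2)])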
Nick,
" In those cases then I think one can make an argument for discarding runs
with parameters that are at this kind of boundary "
A typical user-defined upper boundary is 1 for fractions (bioavailability,
fraction unbound, etc.). In a bootstrap some estimates may well reach this
upper boundary.
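One way to keep estimates off such a boundary is to estimate the fraction
on the logit scale; an illustrative Python sketch (the 0.97 value and the
normal spread are made-up stand-ins for actual bootstrap estimates):

    import numpy as np

    def logit(p):
        return np.log(p / (1.0 - p))

    def inv_logit(x):
        return 1.0 / (1.0 + np.exp(-x))

    # Bootstrap estimates obtained on the unconstrained logit scale can be
    # back-transformed; the resulting interval stays strictly inside (0, 1).
    logit_boot = np.random.default_rng(1).normal(logit(0.97), 0.8, size=1000)
    ci = inv_logit(np.percentile(logit_boot, [2.5, 97.5]))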
All,
This first part is more to clarify and I do not believe this is in
disagreement with what has been said before. The last paragraph is a
question.
The two examples I mentioned regarding boundary conditions concern
variance parameters. The second of these, however, is with regard to a
b
Resending and apologizing for any duplicate messages!
-----Original Message-----
From: Ribbing, Jakob
Sent: 11 July 2011 10:13
To: nmusers
Subject: RE: [NMusers] Confidence intervals of PsN bootstrap output
All,
This first part is more to clarify and I do not believe this is in
disagreement with what has been said before.
Hello all,
Sorry to enter the conversation late. (I deleted prior posts to keep from
exceeding the length limit).
I certainly agree that nonparametric bootstrap procedures need careful
consideration and interpretation of their output. I feel that such
procedures lead to difficulty (as described by many
Hi Nick,
Those "irritating messages that usually just mean the initial estimate
changed a lot or variance was getting close to zero" can be removed if
you use
NOTHETABOUNDTEST NOOMEGABOUNDTEST NOSIGMABOUNDTEST
at estimation record. I think, these options should always for all
bootstrap runs.
Matt,
Thank you for the very good comments. One thing, though, about your
example where 15% of bootstrap samples have negative values of Emax: I
certainly agree that reparameterising to estimate the log of Emax is
helpful for obtaining a useful covariance matrix (as Emax is highly
uncertain and in this example known not
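For the interval itself, an attraction of the log parameterisation is that
a monotone transform can simply be applied to the bootstrap percentiles
afterwards; a small illustrative Python sketch (the numbers are made-up
stand-ins for bootstrap estimates of log Emax):

    import numpy as np

    rng = np.random.default_rng(1)
    log_emax_boot = rng.normal(np.log(5.0), 1.5, size=2000)  # stand-in log(Emax) estimates

    # Exponentiating the log-scale percentiles gives an Emax-scale interval
    # that can approach, but never cross, zero; samples in which the data
    # favour a negative drug effect have no finite value on the log scale.
    ci_emax = np.exp(np.percentile(log_emax_boot, [2.5, 97.5]))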
Hi Jakob,
"The 15% bootstrap samples where data suggest a negative drug effect would
in one case terminate at the zero boundary, in the other case it would
terminate (often unsuccessfully) at highly negative values for log Emax"...
I have seen that transformation can make the likelihood surface m
Leonid
> of them. If some realizations are so special that the model behaves in
> an unusual way (with any definition of unusual: non-convergence, not
> convergence of the covariance step, parameter estimates at the boundary,
> etc.) we either need to accept those as is, or work with each of those
Steve,
If 20% of the runs have not completed successfully (I will assume that
they still gave some parameter estimates), you have a choice of making
one of two assumptions:
1. Unsuccessful/successful termination is a random process that is
independent of the data set, or at least there is no sys
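A quick sensitivity check on which assumption holds is to compare the
percentile interval from all runs with the one from successfully
terminating runs only; an illustrative Python sketch (the estimates and
success flags would come from the bootstrap results, whatever the exact
column names in your output):

    import numpy as np

    def ci_with_and_without_failed_runs(estimates, successful, alpha=0.05):
        # Percentile CI from all bootstrap runs vs. from successfully
        # terminating runs only; a large difference suggests termination
        # status is not independent of the resampled data set.
        q = [100 * alpha / 2, 100 * (1 - alpha / 2)]
        est = np.asarray(estimates, dtype=float)
        ok = np.asarray(successful, dtype=bool)
        return {"all_runs": np.percentile(est, q),
                "successful_only": np.percentile(est[ok], q)}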
Hi Matt,
OK, I can certainly see that transformations will be helpful in
bootstrapping for those who throw away samples with unsuccessful
termination or an unsuccessful covariance step. They would otherwise
discard all bootstrap estimates that indicate Emax is close to zero.
Since I most often use all bootstra