Dear Douglas and all,
We always have some knowledge about our parameter distribution. It comes from
two sources: prior information and the data, under the model. Prior information
almost always tells us that parameters must be non-normally distributed. That’s
why we enforce different types of
Douglas,
Thanks for your thoughtful and insightful comments on why anyone might
be interested in the answer to the question "Does NONMEM assume a normal
distribution for estimation?".
In fact one has no choice but to use whatever assumptions are built into
the estimation algorithm. So a more
Mats,
This is a helpful and interesting response but I think it is an answer
to a different kind of question. My understanding of the original
question was: does NONMEM assume somewhere in its estimation procedure
that some quantity is normally distributed, regardless of the
(mis)specification
Hi,
Sorry - I meant to write this in reference to Stuart Beal's suggestion:
"He proposed the term "apparent coefficient of variation" as a way of
**NOT* *implying a normal distribution of ETA."
This is analogous to the term "apparent clearance" which is often used
to refer to CL/F, i.e. a description of clearance when bioavailability is unknown.
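To make the "apparent" hedge concrete, here is a quick numerical sketch of my
own (Python, with made-up TVCL and OMEGA values): the quantity sqrt(OMEGA) that
is conventionally quoted as a CV matches the actual coefficient of variation of
CL = TVCL*EXP(ETA) only approximately, and only for particular shapes of the
ETA distribution.

# A quick illustration (hypothetical values, not from Stuart Beal's work):
# sqrt(OMEGA) vs. the true CV of CL = TVCL*exp(ETA) under two ETA shapes
# that share the same variance.
import numpy as np

rng = np.random.default_rng(1)
tvcl, omega2, n = 10.0, 0.25, 500_000

def cv_of_cl(eta):
    cl = tvcl * np.exp(eta)
    return cl.std() / cl.mean()

eta_normal = rng.normal(0.0, np.sqrt(omega2), n)
eta_uniform = rng.uniform(-np.sqrt(3 * omega2), np.sqrt(3 * omega2), n)  # same variance, different shape

print(f"sqrt(OMEGA), the 'apparent CV'       : {np.sqrt(omega2):.3f}")
print(f"exact CV if ETA is normal            : {np.sqrt(np.expm1(omega2)):.3f}")
print(f"empirical CV, normal ETA             : {cv_of_cl(eta_normal):.3f}")
print(f"empirical CV, uniform ETA (same var.): {cv_of_cl(eta_uniform):.3f}")

Whatever ETA actually looks like, sqrt(OMEGA) still summarises the magnitude of
the variability; calling it an "apparent" CV just avoids committing to a
distributional shape.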
Nick,
Clearly the choice of distributional assumption you make regarding the
parameters has an impact on the estimation (parameter estimates, goodness-of-fit,
predictions and simulations). Simulations showing this are presented in the
Petersson paper and in many others. Therefore I don’t know what results
Mats,
I think we need to distinguish the user's assumption about the
distribution from the assumptions that NONMEM makes in order to do its
estimation calculations. E.g. I might assume that CL*EXP(ETA_CL) gives a
log-normal distribution of clearance -- but that is my assumption, made by
assuming ETA is normally distributed.
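To illustrate that distinction with a small sketch of my own (Python,
hypothetical TVCL and OMEGA values; this says nothing about what NONMEM does
internally): the exponential parameterization guarantees a positive clearance,
but CL only comes out log-normal if ETA really is normal.

# My own illustration, not NONMEM code: CL = TVCL*exp(ETA) is log-normal
# exactly when ETA is normal; with a skewed ETA of the same variance,
# log(CL) is no longer normally distributed.
import numpy as np

rng = np.random.default_rng(0)
tvcl, omega2, n = 10.0, 0.09, 100_000
omega = np.sqrt(omega2)

def skewness(x):
    # Sample skewness: ~0 when x is symmetric, as log(CL) is for normal ETA.
    z = (x - x.mean()) / x.std()
    return float(np.mean(z**3))

eta_normal = rng.normal(0.0, omega, n)
eta_skewed = rng.exponential(omega, n) - omega   # mean 0, variance OMEGA, right-skewed

for label, eta in [("normal ETA", eta_normal), ("skewed ETA", eta_skewed)]:
    cl = tvcl * np.exp(eta)
    print(f"{label}: skewness of log(CL) = {skewness(np.log(cl)):+.2f}")

The model expression is identical in both cases; what makes clearance
log-normal is the normality of ETA, which is exactly the assumption in
question.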