NMusers,

My apologies for joining this discussion a bit late, as I was on vacation last week. Rather than rehash previous debates about $COV, I thought I would simply list some of the ways I use the $COV step output to assist my model-building and clinical trial simulation efforts.
Before I do so, let me preface my comments by saying that for me the real diagnostic value of the $COV step lies in the output it reports, not simply in whether or not $COV runs successfully. Thus, I strive for a successful $COV step because I find diagnostic value in the $COV output to guide my model-building efforts.

There are 3 basic ways I use the $COV step output:

1) Inspection of the standard errors, pairwise correlations among the parameter estimates, and the eigenvalue analysis of the correlation matrix helps me to understand the limitations of the design/data via the model.

2) I find full covariate models much easier to build once I have ensured that I have a stable base model through inspection of the $COV step output. I tend to use the full model to make inferences about the covariate parameter estimates (e.g., CIs), as these will not suffer from the model selection bias that occurs with stepwise procedures.

3) Based on asymptotic statistical theory for maximum likelihood estimation, I will often assume that the parameter estimates follow a multivariate normal distribution with the mean vector set to the population parameter estimates and the covariance matrix set to the covariance matrix of the estimates for THETA, OMEGA, and SIGMA reported in the $COV output. This assumption allows me to easily generate random sets of population parameters reflecting parameter uncertainty when conducting clinical trial simulations. Of course, one could do non-parametric bootstrapping to accomplish this as well, but it is easier and faster to use the multivariate normal distribution when it is reasonable to assume that the asymptotics hold.

Below are examples that illustrate some of the ways I use the $COV output (minimal code sketches follow the list):

* Identify the largest standard errors relative to the point estimates and rationalize the limitations of the data/design that would give rise to these large SEs (e.g., the standard error for ka may be large if few sample times are observed prior to Tmax).

* Screen for high pairwise correlations. For example, a high correlation between the population parameter estimates for CL/F and V/F may result when fitting a base model to steady-state PK data. This would suggest that the same information in the data is being used to estimate both parameters. This can be problematic for building full covariate models where one or more covariates may have effects on both parameters. In this setting I may use clinical judgment as to whether a particular covariate effect is more likely to be on CL/F or V/F if the limitations of the design/data preclude estimating it on both.

* The covariance matrix of the estimates from a full model run is helpful in determining a subset of potential parsimonious final models using the WAM algorithm (see Kowalski & Hutmacher, JPP 2001;28:253-275).

* I use SAS (or Splus) to generate random sets of population parameters from the multivariate normal distribution, using the population parameter estimates and the covariance matrix of the estimates from the $COV output, in clinical trial simulations. This lets me quantify operating characteristics such as probability of success (probability of a Go decision) and probability of a correct decision, in contrast to power calculations, which assume a fixed effect size. Power is a conditional probability (conditioning on an assumed effect magnitude), whereas POS (probability of success) is an unconditional probability that takes into account the uncertainty in achieving a given effect magnitude. Power is a performance characteristic of the design, whereas POS is a performance characteristic of both the design and the compound (dose or treatment).
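To make the inspection in item 1 concrete, here is a minimal sketch, in Python/NumPy rather than the SAS or Splus I would normally use, of the standard error, correlation, and eigenvalue diagnostics computed from the covariance matrix of the estimates. The point estimates and covariance matrix are invented purely for illustration:

import numpy as np

# Hypothetical point estimates and covariance matrix of the estimates
# as reported in the $COV output (values invented for illustration).
est = np.array([10.0, 50.0, 1.2])           # e.g., CL/F, V/F, ka
cov = np.array([[1.00,  4.00, 0.010],
                [4.00, 25.00, 0.020],
                [0.01,  0.02, 0.250]])

# 1) Standard errors and relative standard errors (%RSE):
se = np.sqrt(np.diag(cov))
rse = 100.0 * se / np.abs(est)

# 2) Pairwise correlations among the parameter estimates:
corr = cov / np.outer(se, se)

# 3) Eigenvalue analysis of the correlation matrix; a large ratio of
#    largest to smallest eigenvalue (condition number) suggests
#    ill-conditioning, i.e., near-collinearity among the estimates.
eig = np.linalg.eigvalsh(corr)
cond = eig.max() / eig.min()

print("SE:  ", se)
print("%RSE:", rse)
print("corr:\n", corr)
print("eigenvalues:", eig, " condition number:", cond)

A large condition number flags near-collinearity of the kind described in the CL/F and V/F example above.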
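For the WAM bullet, the key quantity is the Wald statistic computed from the full-model estimates and their covariance matrix, which approximates the likelihood ratio test for dropping a subset of covariate effects. The sketch below ranks submodels with a generic Schwarz-type penalty; the exact criterion in Kowalski & Hutmacher (2001) differs in detail, and the estimates, effect names, and sample size here are all invented, so treat the ranking as illustrative only:

import itertools
import numpy as np

# Hypothetical full-model covariate effect estimates and the
# corresponding block of the covariance matrix of the estimates.
beta = np.array([0.75, 0.10, 0.30])
V = np.array([[0.040, 0.002, 0.001],
              [0.002, 0.010, 0.000],
              [0.001, 0.000, 0.025]])
names = ["WT on CL", "AGE on CL", "WT on V"]
n_subjects = 120                      # hypothetical sample size

def wald_stat(drop):
    # Wald statistic for H0: the dropped effects are zero, which
    # approximates the LRT for removing them from the full model.
    idx = list(drop)
    b = beta[idx]
    Vsub = V[np.ix_(idx, idx)]
    return float(b @ np.linalg.solve(Vsub, b))

# Score every submodel (every subset of dropped effects) with a
# Schwarz-type penalty; higher scores favor the submodel.
results = []
for k in range(len(beta) + 1):
    for drop in itertools.combinations(range(len(beta)), k):
        W = wald_stat(drop) if drop else 0.0
        score = -0.5 * W + 0.5 * len(drop) * np.log(n_subjects)
        results.append((score, [names[i] for i in drop]))

for score, dropped in sorted(results, reverse=True):
    print(f"score {score:7.2f}  drop: {dropped or ['none']}")

The highest-scoring submodels would then be run in NONMEM for confirmation rather than accepted on the Wald approximation alone.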
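Finally, a sketch of the multivariate normal parameter sampling and the power-versus-POS contrast from the last bullet. The go_decision function is a hypothetical placeholder for a real trial simulation, and the noise SD and Go cutoff are invented:

import numpy as np

rng = np.random.default_rng(2009)

# Hypothetical point estimates (THETA/OMEGA/SIGMA stacked into one
# vector) and covariance matrix of the estimates from $COV.
est = np.array([10.0, 50.0, 1.2])
cov = np.array([[1.00,  4.00, 0.010],
                [4.00, 25.00, 0.020],
                [0.01,  0.02, 0.250]])

n_rep = 1000
# Random sets of population parameters reflecting parameter
# uncertainty; in practice, draws implying implausible values (e.g.,
# negative variance components) may need to be rejected or the
# parameters sampled on a transformed scale.
draws = rng.multivariate_normal(est, cov, size=n_rep)

def go_decision(theta, rng):
    # Hypothetical stand-in for one simulated trial under parameter
    # vector theta; replace with a real clinical trial simulation.
    observed = rng.normal(theta[0], 2.0)   # noisy trial-level estimate
    return observed > 9.0                  # invented Go criterion

# Power: conditional on one assumed parameter vector (a fixed effect
# size), here the point estimates.
power = np.mean([go_decision(est, rng) for _ in range(n_rep)])

# POS: a new parameter vector per replicate, so the Go rate also
# reflects the uncertainty in the true effect magnitude.
pos = np.mean([go_decision(th, rng) for th in draws])

print("power (conditional):", power)
print("POS (unconditional):", pos)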
Kind regards,

Ken

-----Original Message-----
From: owner-nmus...@globomaxnm.com [mailto:owner-nmus...@globomaxnm.com] On Behalf Of Nick Holford
Sent: Wednesday, April 15, 2009 2:49 PM
To: nmusers
Subject: Re: [NMusers] OMEGA selection

Mark,

I agree with your logic. In the meantime I will ignore the $COV step (it rarely happens for me) and wait for some empirical evidence that the $COV step is of demonstrable value for model building. Perhaps your grid computing system could take on that challenge by comparing the results of automated model building with and without $COV or convergence?

Nick

Mark Sale - Next Level Solutions wrote:
>
> Nick et al.
> At the risk of starting a discussion that probably has little
> mileage left in it: first, I agree with Nick on covariance - it
> probably doesn't matter. But I'd like to point out what may be an
> error in our logic.
> We contend that we have demonstrated that covariance doesn't matter.
> Our evidence is that, when bootstrapping, the parameters for the
> samples that have a successful covariance step are not different from
> those that failed. So we conclude that the results are the same
> regardless of the covariance outcome across sampled data sets - the
> independent variable in this test is the data set; the model is fixed.
> In model selection/building, we have a fixed data set and the
> independent variable is the model structure. Whether covariance
> success is a useful predictor across different models with a fixed
> data set is a different question from whether covariance is a useful
> predictor across data sets with a fixed model.
> But, in the end, I do agree that biological plausibility, diagnostic
> plots, reasonable parameters and some suggestion of numerical
> stability/identifiability (such as bootstrap CIs) are more important
> than a successful covariance step.
>
> Mark
>
> Mark Sale MD
> Next Level Solutions, LLC
> www.NextLevelSolns.com
> 919-846-9185
>
> -------- Original Message --------
> Subject: Re: [NMusers] OMEGA selection
> From: Nick Holford <n.holf...@auckland.ac.nz>
> Date: Wed, April 15, 2009 12:17 pm
> To: nmusers@globomaxnm.com
>
> Ethan,
>
> Do not pay any attention to whether or not the $COV step runs, or even
> if the run is 'SUCCESSFUL', to conclude anything about your model. Your
> opinion is not supported experimentally, e.g. see
> http://www.mail-archive.com/nmusers@globomaxnm.com/msg00454.html for
> discussion and references.
>
> NONMEM has no idea whether the parameters make sense or not and will
> happily converge with models that are overparameterised. You cannot
> rely on a failed $COV step or a MINIMIZATION TERMINATED message to
> conclude the model is not a good one. You need to use your brains
> (NONMEM does not have a brain) and your common sense to decide whether
> your model makes sense or is perhaps overparameterised.
>
> Nick
>
> Ethan Wu wrote:
> >
> > Dear all,
> >
> > I am fitting a PD response, and the equation goes like this:
> >
> > total response = baseline + f(placebo response) + f(drug response)
> >
> > First, I tried a full omega block, and the model was able to
> > converge, but the $COV step failed.
> >
> > To me, this indicates that there are too many parameters in the
> > model. The structural model is a rather simple one, so I think
> > probably too many Etas.
> >
> > I wonder whether there is a good principle of Eta reduction that I
> > could implement here. Any good reference?

--
Nick Holford, Dept Pharmacology & Clinical Pharmacology
University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New Zealand
n.holf...@auckland.ac.nz tel:+64(9)923-6730 fax:+64(9)373-7090
mobile: +33 64 271-6369 (Apr 6-Jul 17 2009)
http://www.fmhs.auckland.ac.nz/sms/pharmacology/holford