Hi all,

I agree with what Ken and Marc have said. On the point of a full matrix as a 
diagnostic, which I think is a good one, an alternative is to run a 
nonparametric estimation ($NONP) after your normal estimation. Even if you did 
not use a full block in the original estimation, this step will give you one 
(and it will “never” have estimation problems). The result is not entirely 
unproblematic to use as is, because a variance can sometimes be biased by a 
diagonal structure imposed in the preceding parametric step, but it will often 
be informative about how to formulate an appropriate correlation structure. If 
you are ambitious, you can use the extended grid option, which I believe was 
recently implemented and addresses this problem.
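
For concreteness, a minimal NM-TRAN sketch of this approach (the OMEGA 
dimensions, initial estimates, and estimation settings below are placeholders, 
not taken from this thread):

```
; parametric step with a diagonal OMEGA (placeholder values)
$OMEGA  0.1  0.1  0.1
$ESTIMATION METHOD=1 INTERACTION MAXEVAL=9999

; nonparametric step afterwards; the nonparametric joint distribution
; yields a full variance-covariance matrix even though the parametric
; step above imposed a diagonal structure
$NONPARAMETRIC UNCONDITIONAL
```

The off-diagonal elements reported from the nonparametric step can then be 
inspected when formulating a block structure for a subsequent parametric run.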

I haven’t shared Douglas’s experience that adding additional off-diagonal 
elements makes the simulation properties of a model worse. The nonparametric 
option does allow a fuller description of the correlation than the linear 
(parametric) one, though, so if that was the problem, $NONP may offer a solution.

Best regards,
Mats


Mats Karlsson, PhD
Professor of Pharmacometrics

Dept of Pharmaceutical Biosciences
Faculty of Pharmacy
Uppsala University
Box 591
75124 Uppsala

Phone: +46 18 4714105
Fax: +46 18 4714003
www.farmbio.uu.se/research/researchgroups/pharmacometrics/

From: owner-nmus...@globomaxnm.com [mailto:owner-nmus...@globomaxnm.com] On behalf of 
Ken Kowalski
Sent: 2 October 2014 17:10
To: ma...@metruminst.org; 'Eleveld, DJ'; nmusers@globomaxnm.com; 
non...@optonline.net; joseph.stand...@nhs.net; 'Jeroen Elassaiss-Schaap'
Subject: RE: [NMusers] OMEGA matrix

Hi All,

I agree with everything that Marc and Douglas have pointed out.  I too do not 
advise building the omega structure based on repeated likelihood ratio tests.  
The approach I take is more akin to what Joe suggested earlier: using SAEM 
to fit the full block omega structure and then looking for patterns in the 
estimated omega matrix.  Even with FOCE estimation I will often fit a full 
block omega structure just to look for such patterns.  The full block omega 
structure may be over-parameterized and sometimes may not even converge.  
Nevertheless, as a diagnostic run it can be useful for uncovering patterns that 
may lead to reduced omega structures with more stable model fits (i.e., not 
over-parameterized).  I’m not necessarily driven to find a parsimonious omega 
structure, since I will certainly err on the side of including additional 
elements in omega provided there is sufficient support to estimate these 
parameters (i.e., a stable model fit).  For example, I will select a full omega 
structure 
regardless of the magnitude of the correlations if the model is stable and not 
over-parameterized.  I have no issue with those who want to identify a 
parsimonious omega structure, however, I still maintain that a diagonal omega 
structure often is not the most parsimonious.
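
As an illustration of the kind of diagnostic run described above, a full block 
omega in NM-TRAN might look like this (dimensions and initial estimates are 
placeholders):

```
; full block covering three ETAs: variances on the diagonal,
; covariances off the diagonal (initial estimates only)
$OMEGA BLOCK(3)
 0.1
 0.01 0.1
 0.01 0.01 0.1
```

Patterns in the estimated matrix (e.g., correlations near zero or near one) can 
then suggest a reduced structure, such as a BLOCK(2) plus a separate diagonal 
element.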

I also agree with Marc’s comment that we must judge parsimony relative to the 
intended purpose of the model.  If we are only interested in our model to 
predict central tendency, then a diagonal omega structure may be all that is 
needed.  I would contend, however, that we often want to use our models for 
more than just predicting central tendency.  If we perform VPCs, 
cross-validation, or external validation on independent datasets, but the 
statistics we summarize to assess predictive performance involve only central 
tendency, then we’re not really going to get a robust assessment of the omega 
structure.  To evaluate the omega structure we need to use VPC statistics that 
describe variation and other percentiles besides the median.  My impression is 
that we aren’t as rigorous as we should be in assessing whether our models can 
adequately describe the variation in our data.  As I 
stated earlier, I see so many standard VPC plots where virtually 100% of the 
observed data are contained well within the 5th and 95th percentiles.  The 
presenter will often claim that these VPC plots support the adequacy of the 
predictions but clearly the model is over-predicting the variation.  The 
over-prediction of the variation may or may not be related to the omega 
structure as it could also be related to skewed or non-normal random effect 
distributions.  However, if a diagonal omega structure was used and I saw this 
over-prediction of the variation in a VPC plot, one of the first things I 
would do is re-evaluate the omega structure and see whether an alternative 
omega structure can lead to improvements in predicting these percentiles.
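
As a sketch, the simulation step underlying such a VPC might be set up in 
NM-TRAN as follows (the seed and number of replicates are placeholders); the 
simulated replicates are then summarized externally into the percentile 
statistics discussed above:

```
; simulate replicates of the original design for a VPC;
; percentiles (e.g., 5th, 50th, 95th) of the simulated and observed
; data are compared in post-processing
$SIMULATION (12345) ONLYSIMULATION SUBPROBLEMS=500
```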

Best,

Ken

From: Gastonguay, Marc [mailto:ma...@metruminstitute.org]
Sent: Thursday, October 02, 2014 7:03 AM
To: Eleveld, DJ; nmusers@globomaxnm.com; ken.kowal...@a2pg.com; 
non...@optonline.net; joseph.stand...@nhs.net; Jeroen Elassaiss-Schaap
Subject: Re: [NMusers] OMEGA matrix

Douglas makes an important point in this discussion: the method used to judge 
the parsimony of a model must consider the performance of the model for its 
intended purpose.

Consider the parsimony principle: "all things being equal, choose the simpler 
model". The key is in how to judge the first part of that statement.

A model developed based on goodness-of-fit metrics such as AIC, BIC, or 
repeated likelihood ratio tests may be the most parsimonious model for 
predicting the current data set. This doesn't ensure that the model will be 
"equal" in performance to more complex models for the purpose of predicting the 
typical value in an external data set - external cross validation might be 
required for that conclusion. Further, if the purpose is to develop a model 
that is a reliable stochastic simulation tool, a simulation-based model 
checking method should be part of the assessment of "equal" performance when 
arriving at a parsimonious model.

Since most of our modeling goals go far beyond prediction of the current data 
set, it's necessary to move beyond metrics solely based on objective function 
and degrees of freedom when selecting a model. In other words, it may be 
perfectly fine (and even parsimonious) for a model to include more parameters 
than the likelihood ratio test tells you to, if those parameters improve 
performance for the intended purpose.

Best regards,
Marc

